Using Python and LangChain

Invoke an LLM

You will use Python and LangChain to get responses from an LLM.

Open and run the 2-llm-rag-python-langchain/llm_invoke.py program.

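The program might look something like the sketch below. It assumes the langchain-openai package and a placeholder API key, so the course file may differ:

python
llm_invoke.py
from langchain_openai import OpenAI

# Create the LLM ("sk-..." is a placeholder - use your own OpenAI API key)
llm = OpenAI(openai_api_key="sk-...")

# Pass the phrase to the LLM and capture the response
response = llm.invoke("What is Neo4j?")

print(response)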

This program will use LangChain to invoke (call) the OpenAI LLM and print the response.

The phrase "What is Neo4j?" is passed to the LLM and the response is printed.

You should see something similar to:

"Neo4j is a highly scalable, native graph database that is designed to store, process, and query large networks of highly connected data. It is based on the property graph model, which allows for the representation of complex relationships between data entities. Neo4j is known for its fast performance and ability to handle complex queries efficiently, making it a popular choice for applications that require real-time data processing and analysis. It is commonly used for a variety of use cases, such as social networks, fraud detection, recommendation engines, and network and IT operations."

Try changing the phrase and rerun the program.

Prompts

Prompt templates allow you to create reusable instructions or questions.

This prompt template would give context to the LLM and instruct it to respond as a cockney fruit and vegetable seller.

python
"""
You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.

Tell me about the following fruit: {fruit}
"""

You can define parameters within the template using braces, e.g. {fruit}. These parameters will be replaced with values when the prompt is formatted.

Create a Prompt Template

Open the 2-llm-rag-python-langchain/llm_prompt.py file.

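The starting file is similar to the previous program; a minimal sketch, under the same assumptions:

python
llm_prompt.py
from langchain_openai import OpenAI

llm = OpenAI(openai_api_key="sk-...")  # placeholder key

response = llm.invoke("What is Neo4j?")

print(response)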

Modify the program to create a prompt template:

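A sketch of the import and the template, assuming PromptTemplate from langchain.prompts and the cockney template shown earlier:

python
from langchain.prompts import PromptTemplate

# Create a reusable prompt with a single {fruit} parameter
template = PromptTemplate(
    template="""
You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.

Tell me about the following fruit: {fruit}
""",
    input_variables=["fruit"],
)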

Call the LLM, passing the formatted prompt template as the input:

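A sketch of the call, assuming the llm created earlier:

python
# Format the prompt with a value for {fruit} and pass it to the LLM
response = llm.invoke(template.format(fruit="apple"))

print(response)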

You use the format method to pass parameters to the prompt, e.g. fruit="apple".

The input variables will be validated when the prompt is formatted, and a KeyError will be raised if any variables are missing from the input.
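For example, formatting the template without supplying the fruit parameter raises a KeyError (a hypothetical snippet):

python
template.format()  # raises KeyError: 'fruit'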

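Put together, the complete program might look like this (a sketch under the same assumptions as above):

python
llm_prompt.py
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(openai_api_key="sk-...")

template = PromptTemplate(
    template="""
You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.

Tell me about the following fruit: {fruit}
""",
    input_variables=["fruit"],
)

response = llm.invoke(template.format(fruit="apple"))

print(response)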

PromptTemplate.format

When the program is run, the prompt is formatted using the parameters:

You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.
Tell me about the following fruit: apple

The LLM will then create a response based on the formatted prompt:

Well, apples is a right corker - they come in all shapes and sizes from
Granny Smiths to Royal Galas. Got 'em right 'ere, two a penny - come and
grab a pick of the barrel!

Differing Results

If you run the program multiple times, you will notice that you get different responses because the LLM generates a new answer each time.

Create a new prompt template and use it to get a response from the LLM.

Chains

Chains are reusable components that allow you to combine language models with different data sources and third-party APIs. You can combine a prompt and an LLM into a chain to create a reusable component.

The simplest chain combines a prompt template with an LLM and returns a response.

You can create a chain using LangChain Expression Language (LCEL). LCEL is a declarative way to chain LangChain components together.

Components are chained together using the | operator.

python
chain = prompt | llm

Create a Chain

Open the 2-llm-rag-python-langchain/llm_chain.py file.

Review the code and run the program.

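A sketch of what the file might contain, assuming the same LLM setup and PromptTemplate.from_template:

python
llm_chain.py
from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(openai_api_key="sk-...")

template = PromptTemplate.from_template("""
You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.

Tell me about the following fruit: {fruit}
""")

# Chain the prompt and the LLM together using the | operator (LCEL)
llm_chain = template | llm

# Invoke the chain with a dictionary of the prompt's input variables
response = llm_chain.invoke({"fruit": "apple"})

print(response)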

The output from the chain is typically a string, and you can specify an output parser to parse the output.

Adding a StrOutputParser to the chain would ensure a string is returned.

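A sketch, assuming StrOutputParser is imported from langchain.schema:

python
from langchain.schema import StrOutputParser

llm_chain = template | llm | StrOutputParser()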

Output parsers

You can change the prompt to instruct the LLM to return a specific output type.

For example, instructing the LLM to output JSON in a specific format:

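The prompt might include an instruction such as the one below; note the doubled braces {{ and }}, which escape literal braces inside a prompt template (a sketch):

python
template = PromptTemplate.from_template("""
You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.

Output JSON as {{"description": "your response here"}}

Tell me about the following fruit: {fruit}
""")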

You can ensure LangChain parses the response as JSON by specifying SimpleJsonOutputParser as the output_parser:

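A sketch, assuming SimpleJsonOutputParser from langchain.output_parsers.json:

python
from langchain.output_parsers.json import SimpleJsonOutputParser

llm_chain = template | llm | SimpleJsonOutputParser()

With this parser in place, the chain returns a Python dictionary parsed from the LLM's JSON response rather than a raw string.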

Experiment with the prompt to see how the response changes.

Lesson Summary

You learned how to invoke an LLM using Python and LangChain, and how to use prompt templates, chains, and output parsers.

Next, you will learn about strategies for grounding an LLM.
