Using Python and LangChain

You will use Python and LangChain to get responses from an LLM.

Invoke an LLM

Open the 2-llm-rag-python-langchain\llm_invoke.py file.

You will see the following code:

python
llm_invoke.py
import os
from dotenv import load_dotenv
load_dotenv()

from langchain_openai import OpenAI

llm = OpenAI(openai_api_key=os.getenv('OPENAI_API_KEY'))

response = llm.invoke("What is Neo4j?")

print(response)

This program will use LangChain to invoke (call) the OpenAI LLM and print the response.
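The program assumes your OpenAI API key is available as an environment variable named OPENAI_API_KEY. If you are using a .env file (which load_dotenv() reads), it would contain a single entry; the value shown here is a placeholder:

OPENAI_API_KEY=your-api-key-here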

The phrase "What is Neo4j?" is passed to the LLM. Run the program and observe the response.

You should see something similar to:

"Neo4j is a highly scalable, native graph database that is designed to store, process, and query large networks of highly connected data. It is based on the property graph model, which allows for the representation of complex relationships between data entities. Neo4j is known for its fast performance and ability to handle complex queries efficiently, making it a popular choice for applications that require real-time data processing and analysis. It is commonly used for a variety of use cases, such as social networks, fraud detection, recommendation engines, and network and IT operations."

Try changing the phrase and rerun the program.
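For example, you could ask about a different topic:

python
response = llm.invoke("What is a knowledge graph?")

print(response)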

Prompts

Prompt templates allow you to create reusable instructions or questions.

Below is an example of a prompt template:

python
"""
You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.

Tell me about the following fruit: {fruit}
"""

This prompt template would give context to the LLM and instruct it to respond as a cockney fruit and vegetable seller.

You can define parameters within the template using braces {}, for example {fruit}. These parameters will be replaced with values when the prompt is formatted.

Open the 2-llm-rag-python-langchain\llm_prompt.py file.

python
llm_prompt.py
import os
from dotenv import load_dotenv
load_dotenv()

from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(openai_api_key=os.getenv('OPENAI_API_KEY'))

# Create prompt template
# template =

# Invoke the llm using the prompt template
# response = 

Modify the program to create a prompt template:

python
from langchain.prompts import PromptTemplate

template = PromptTemplate(template="""
You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.

Tell me about the following fruit: {fruit}
""", input_variables=["fruit"])

Call the LLM, passing the formatted prompt template as the input:

python
response = llm.invoke(template.format(fruit="apple"))

print(response)
The complete code:

python
llm_prompt.py
import os
from dotenv import load_dotenv
load_dotenv()

from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(openai_api_key=os.getenv('OPENAI_API_KEY'))

template = PromptTemplate(template="""
You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.

Tell me about the following fruit: {fruit}
""", input_variables=["fruit"])

response = llm.invoke(template.format(fruit="apple"))

print(response)

You use the format method to pass parameters to the prompt, for example fruit="apple". The input variables are validated when the prompt is formatted, and a KeyError is raised if any variables are missing.
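To see this validation in action, this minimal sketch (using the template defined above) triggers the KeyError by omitting the fruit variable:

python
# Formatting without the required variable raises a KeyError
try:
    template.format()
except KeyError as e:
    print(f"Missing prompt variable: {e}")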

The prompt will be formatted as follows:

You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.

Tell me about the following fruit: apple

When running the program, you should see a response similar to:

Well, apples is a right corker - they come in all shapes and sizes from Granny Smiths to Royal Galas. Got 'em right 'ere, two a penny - come and grab a pick of the barrel!

Differing Results

If you run the program multiple times, you will notice the responses differ because the LLM generates a new answer each time.
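If you want more repeatable output, you can lower the sampling temperature when creating the LLM. As a sketch, the OpenAI class accepts a temperature parameter; a value of 0 makes responses more deterministic, although not guaranteed to be identical:

python
llm = OpenAI(
    openai_api_key=os.getenv('OPENAI_API_KEY'),
    temperature=0  # lower temperature reduces randomness in the output
)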

Before moving on, create a new prompt template and use it to get a response from the LLM.
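As one possible example (not the only solution), the template below reuses the same llm object with a different persona and parameter:

python
template = PromptTemplate(template="""
You are a surfer dude giving weather reports.
Respond in a laid-back, surfer style.

Tell me about the weather in: {city}
""", input_variables=["city"])

response = llm.invoke(template.format(city="Sydney"))

print(response)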

Creating PromptTemplates

You can create a prompt from a string by calling the PromptTemplate.from_template() class method, or load a prompt from a file using the PromptTemplate.from_file() class method.
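For example, the cockney prompt could be created either way. The cockney_prompt.txt filename below is only an illustration; the file would contain the template text:

python
from langchain.prompts import PromptTemplate

# Create a prompt from a string
template = PromptTemplate.from_template(
    "Tell me about the following fruit: {fruit}"
)

# Or load the same prompt from a file
# template = PromptTemplate.from_file("cockney_prompt.txt")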

Chains

Chains are reusable components that allow you to combine language models with different data sources and third-party APIs. You can combine a prompt and an LLM into a chain to create a reusable component.

The simplest chain combines a prompt template with an LLM and returns a response.

You can create a chain using LangChain Expression Language (LCEL). LCEL is a declarative way to chain LangChain components together.

Components are chained together using the | operator.

python
chain = prompt | llm

Open the 2-llm-rag-python-langchain\llm_chain.py file.

python
llm_chain.py
import os
from dotenv import load_dotenv
load_dotenv()

from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(openai_api_key=os.getenv('OPENAI_API_KEY'))

template = PromptTemplate.from_template("""
You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.

Tell me about the following fruit: {fruit}
""")

llm_chain = template | llm

response = llm_chain.invoke({"fruit": "apple"})

print(response)

Run the program.

The output from the chain is typically a string, and you can specify an output parser to parse the output.

Adding a StrOutputParser as the final component of the chain ensures the response is a string:

python
from langchain.schema import StrOutputParser

llm_chain = template | llm | StrOutputParser()

You can change the prompt to instruct the LLM to return a specific output type. For example, you can return JSON by instructing the LLM to output JSON and giving the expected format in the prompt:

python
template = PromptTemplate.from_template("""
You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.

Output JSON as {{"description": "your response here"}}

Tell me about the following fruit: {fruit}
""")

The double braces ({{ and }}) escape the literal braces in the JSON example so they are not treated as input variables. You can ensure LangChain parses the response as JSON by adding a SimpleJsonOutputParser to the chain:

python
from langchain.output_parsers.json import SimpleJsonOutputParser

llm_chain = template | llm | SimpleJsonOutputParser()
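With the JSON parser at the end of the chain, the response is parsed into a Python dictionary, assuming the LLM follows the format instruction, so you can access its fields directly:

python
response = llm_chain.invoke({"fruit": "apple"})

# response is now a dict, e.g. {"description": "..."}
print(response["description"])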

Experiment with the prompt to see how the response changes.

Continue

When you are ready, you can move on to the next task.

Lesson Summary

You learned how to invoke an LLM using Python and LangChain.

Next, you will learn about strategies for grounding an LLM.