Throughout this course, you will be adapting a simple LangChain agent to interact with Neo4j.
You will update the agent to query a Neo4j graph database, retrieve information using RAG and GraphRAG, and dynamically generate Cypher queries based on user input.
In this lesson, you will review the agent's code to understand how it works.
Agent
Open the `genai-integration-langchain/simple_agent.py` file.
```python
from dotenv import load_dotenv
load_dotenv()

from langchain.chat_models import init_chat_model
from langgraph.graph import START, StateGraph
from langchain_core.prompts import PromptTemplate
from typing_extensions import List, TypedDict

# Initialize the LLM
model = init_chat_model("gpt-4o", model_provider="openai")

# Create a prompt
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Answer:"""

prompt = PromptTemplate.from_template(template)

# Define state for application
class State(TypedDict):
    question: str
    context: List[dict]
    answer: str

# Define functions for each step in the application
# Retrieve context
def retrieve(state: State):
    context = [
        {"location": "London", "weather": "Cloudy, sunny skies later"},
        {"location": "San Francisco", "weather": "Sunny skies, raining overnight."},
    ]
    return {"context": context}

# Generate the answer based on the question and context
def generate(state: State):
    messages = prompt.invoke({"question": state["question"], "context": state["context"]})
    response = model.invoke(messages)
    return {"answer": response.content}

# Define application steps
workflow = StateGraph(State).add_sequence([retrieve, generate])
workflow.add_edge(START, "retrieve")
app = workflow.compile()

# Run the application
question = "What is the weather in San Francisco?"
response = app.invoke({"question": question})
print("Answer:", response["answer"])
```
Review the code and try to answer the following questions:
- What is the agent's purpose?
- What `context` is added to the agent's prompt?
- What do you think the final `answer` will be?
Run the agent to see what it does.
Click to reveal the answers
The agent is designed to answer questions about the information that is provided in the `context`.
The context contains weather information about London and San Francisco, so:
- When passed the question "What is the weather in San Francisco?"
- The agent responds with "Sunny skies, raining overnight."
Regardless of what data is in the context, the agent will provide an answer based on the information it has.
The application is a simple LangGraph agent that has 2 steps:
- Retrieve information
- Generate an answer based on the retrieved information
The code has 4 main sections:
- Create an LLM and Prompt
- Define the application state
- Create the application workflow
- Invoke the application
LLM and Prompt
The agent uses an OpenAI LLM and a simple prompt to generate an answer based on the retrieved information.
```python
# Initialize the LLM
model = init_chat_model("gpt-4o", model_provider="openai")

# Create a prompt
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Answer:"""

prompt = PromptTemplate.from_template(template)
```
The prompt sets the instructions for the LLM to generate an answer based on the retrieved information. The variables `context` and `question` are used to provide the information to the LLM.
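If you want to see exactly what the LLM receives, you can render the prompt yourself with sample values. This quick check is not part of the lesson code; the sample values are hypothetical:

```python
# Fill the template with sample values and print the resulting text
filled = prompt.invoke({
    "context": [{"location": "London", "weather": "Cloudy, sunny skies later"}],
    "question": "What is the weather in London?",
})
print(filled.to_string())
```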
Application State
The application state holds the information that is required to run the agent.
The state includes the original question
, the context
, and the generated answer
.
```python
# Define state for application
class State(TypedDict):
    question: str
    context: List[dict]
    answer: str
```
The `context` can be any information that is relevant to the question being asked.
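Each step in the workflow returns only the keys it updates, and LangGraph merges them into the shared state. Conceptually, the flow is equivalent to this rough sketch (for illustration only, not how the graph is actually run):

```python
# Illustrative only: simulate the state updates without the graph
state = {"question": "What is the weather in London?"}
state.update(retrieve(state))   # the retrieve step adds "context"
state.update(generate(state))   # the generate step adds "answer"
print(state["answer"])
```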
Application Workflow
The application workflow consists of:
- Functions that will be executed to `retrieve` the context and `generate` the answer:

```python
# Define functions for each step in the application
# Retrieve context
def retrieve(state: State):
    context = [
        {"location": "London", "weather": "Cloudy, sunny skies later"},
        {"location": "San Francisco", "weather": "Sunny skies, raining overnight."},
    ]
    return {"context": context}

# Generate the answer based on the question and context
def generate(state: State):
    messages = prompt.invoke({"question": state["question"], "context": state["context"]})
    response = model.invoke(messages)
    return {"answer": response.content}
```
The application state will be updated with data returned by the functions.
- The `workflow` defines the order in which the functions are executed:

```python
# Define application steps
workflow = StateGraph(State).add_sequence([retrieve, generate])
workflow.add_edge(START, "retrieve")
app = workflow.compile()
```
The `retrieve` function is called at the `START` of the workflow, before the `generate` function.
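`add_sequence` is a convenience method that adds the nodes and chains them together. If it helps, the same graph could be built explicitly; this is a sketch, assuming the same `retrieve` and `generate` functions:

```python
# Roughly equivalent explicit construction of the workflow
workflow = StateGraph(State)
workflow.add_node("retrieve", retrieve)
workflow.add_node("generate", generate)
workflow.add_edge(START, "retrieve")
workflow.add_edge("retrieve", "generate")
app = workflow.compile()
```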
Invoke
Finally, the application is invoked by passing a question to the agent and printing the answer:
```python
# Run the application
question = "What is the weather in San Francisco?"
response = app.invoke({"question": question})
print("Answer:", response["answer"])
```
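The value returned by `app.invoke` is the final application state, so the `question` and `context` keys are also available. For example, a hypothetical extra run in the same file:

```python
# The final state also contains the retrieved context
response = app.invoke({"question": "What is the weather in London?"})
print(response["context"])  # the hard-coded weather data
print(response["answer"])   # e.g. "Cloudy, sunny skies later"
```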
Check your understanding
No Context
What will the agent most likely do if asked a question relating to a subject not in the `context`?
- ❏ Raise an error
- ❏ Generate a response based on the LLM’s training data
- ❏ Ask for more information
- ✓ Respond with "I don’t know"
Hint
Review the prompt and consider how it will influence the agent when asked questions that are not covered by the provided context.
Solution
The prompt gives specific instructions to the LLM to answer questions based solely on the provided `context`. If the context does not contain relevant information, the agent will likely respond with "I don’t know".
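You could verify this with a question outside the context. This is a hypothetical run using the same agent; the exact wording of the response may vary:

```python
# A question the hard-coded weather context cannot answer
response = app.invoke({"question": "What is the capital of France?"})
print(response["answer"])  # most likely a variation of "I don't know"
```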
Lesson Summary
In this lesson, you reviewed a simple LangChain agent that generates an answer based on a provided context.
In the next lesson, you will modify the agent to retrieve the schema from a Neo4j database and use it to answer questions about a graph.