Your app will interact with the LLM through an Agent.
Agents are objects that use an LLM to identify and execute actions in response to a user’s input. You can learn more in the Agents lesson in the Neo4j & LLM Fundamentals course.
In this lesson, you will create and integrate a new agent into the chatbot.
You will need to:
- Create a chat chain as a tool for the agent.
- Create a new agent.
- Create a handler function that instructs the agent to handle messages.
- Call the new handler function from bot.py.
Open the agent.py file.

The code already imports the llm and graph instances you created in previous lessons.
from llm import llm
from graph import graph
# Create a movie chat chain
# Create a set of tools
# Create chat history callback
# Create the agent
# Create a handler to call the agent
Create a tool
Tools are components that the Agent can use to perform actions.
During this course, you will create multiple tools for the Agent to perform specific tasks. A tool is also required for "general chat" so the agent can respond to a user’s input when no other tool is available.
Create a new movie_chat chain for the agent to use for general chat:
from langchain_core.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser

chat_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a movie expert providing information about movies."),
        ("human", "{input}"),
    ]
)

movie_chat = chat_prompt | llm | StrOutputParser()
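Optionally, you can check that the chain works before wiring it into the agent by invoking it directly. This quick test is illustrative only and is not part of the finished agent.py:

# Illustrative test only - not part of the finished agent.py
response = movie_chat.invoke({"input": "What movies did Christopher Nolan direct?"})
print(response)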
When you initialize the agent, you must pass it a list of tools.
Add the movie_chat tool to a tools list:
from langchain.tools import Tool

tools = [
    Tool.from_function(
        name="General Chat",
        description="For general movie chat not covered by other tools",
        func=movie_chat.invoke,
    )
]
When creating a tool, you specify three arguments:
- The name of the tool, in this case, General Chat.
- A description that the agent LLM will use when deciding which tool to use for a particular task.
- The function to call once the agent has selected the tool. This tool, movie_chat.invoke, will return a response from the LLM.
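The name and description are what the agent's LLM sees when choosing a tool, so you can inspect them directly. An illustrative check, not part of the finished file:

# Illustrative check - print what the agent sees when selecting a tool
for tool in tools:
    print(tool.name, "-", tool.description)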
Conversation memory
The agent will need to store and retrieve recent messages, allowing it to hold a conversation rather than simply respond to the last message.
You will use the Neo4jChatMessageHistory class to store and retrieve messages from your Neo4j sandbox.
When initializing the agent, you must specify a callback function to return the memory component.
The agent will pass a session_id to the callback, which you can use to retrieve the specific conversation for that session.
Add a get_memory callback function that returns a Neo4jChatMessageHistory object:
from langchain_community.chat_message_histories import Neo4jChatMessageHistory

def get_memory(session_id):
    return Neo4jChatMessageHistory(session_id=session_id, graph=graph)
The graph object you created in the last lesson connects the Neo4jChatMessageHistory object to your Neo4j sandbox.
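To see how the history object behaves, you could exercise the callback directly. A minimal sketch, assuming an illustrative session ID of "example-session":

# Illustrative only - the agent normally manages history for you
history = get_memory("example-session")
history.add_user_message("Hello!")
history.add_ai_message("Hi! Ask me anything about movies.")
print(history.messages)  # Messages for this session, read back from Neo4j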
Initializing an Agent
LangChain provides functions for creating a new Agent.
There are different types of agents that you can create.
You will use the create_react_agent() function to create a ReAct (Reasoning and Acting) agent.
You run an agent using an AgentExecutor object, which is responsible for executing the actions returned by the Agent.
Finally, you will wrap the agent in a RunnableWithMessageHistory object to handle the conversation history.
Add the code to initialize the agent:
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain import hub

agent_prompt = hub.pull("hwchase17/react-chat")
agent = create_react_agent(llm, tools, agent_prompt)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True
)

chat_agent = RunnableWithMessageHistory(
    agent_executor,
    get_memory,
    input_messages_key="input",
    history_messages_key="chat_history",
)
Agent prompt
An agent requires a prompt. You could create your own prompt, but in this example, the program pulls a pre-existing prompt from the LangSmith Hub.
The hwchase17/react-chat prompt instructs the model to provide an answer in a specific format, using the tools available.
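If you are curious about what the prompt contains, you can inspect it after pulling it from the hub. The exact variables may change as the hub prompt evolves:

# Inspect the pulled prompt (output may vary as the hub prompt evolves)
print(agent_prompt.input_variables)
# e.g. ['agent_scratchpad', 'chat_history', 'input', 'tool_names', 'tools']
print(agent_prompt.template)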
In the next lesson, you will modify the agent prompt.
Verbose Output
The verbose argument of the AgentExecutor is set to True. This will output the agent's reasoning to the console.
Verbose output is helpful when debugging the agent’s behavior and understanding how it makes decisions.
An example output:

> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: General Chat
Action Input: "tell me about a movie"

Title: "Inception"
"Inception" is a 2010 sci-fi thriller directed by Christopher Nolan, known for his work on "The Dark Knight" trilogy and "Interstellar." The film features a star-studded cast including Leonardo DiCaprio, Joseph Gordon-Levitt, Ellen Page, Tom Hardy, and Marion Cotillard.

> Finished chain.
Add a Handler Function
The chat_agent object is callable and will expect two parameters:

- The user's input
- A session_id to identify the conversation
Streamlit will pass the user input from bot.py.

You can use the get_session_id() function in the utils.py module to retrieve a session ID from Streamlit.
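If you are wondering how such a helper could work, one possible implementation reads the ID from Streamlit's script-run context. This is a hypothetical sketch; the course's utils.py may differ:

# Hypothetical sketch - the course's utils.py may implement this differently
from streamlit.runtime.scriptrunner import get_script_run_ctx

def get_session_id():
    # Streamlit assigns each browser session a unique ID
    return get_script_run_ctx().session_id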
Add a new generate_response() function to the agent.py file to handle the user's input:
from utils import get_session_id

def generate_response(user_input):
    """
    Create a handler that calls the Conversational agent
    and returns a response to be rendered in the UI
    """
    response = chat_agent.invoke(
        {"input": user_input},
        {"configurable": {"session_id": get_session_id()}},
    )

    return response['output']
Review the code and note that:
- The function takes a single string, user_input.
- The user_input is passed to the chat_agent.invoke method.
- A session_id is retrieved using the get_session_id() function.
- The function returns a single string: the final response, output, from the LLM.
View the complete agent.py code
from llm import llm
from graph import graph
from langchain_core.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langchain.tools import Tool
from langchain_community.chat_message_histories import Neo4jChatMessageHistory
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain import hub
from utils import get_session_id

# Create a movie chat chain
chat_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a movie expert providing information about movies."),
        ("human", "{input}"),
    ]
)

movie_chat = chat_prompt | llm | StrOutputParser()

# Create a set of tools
tools = [
    Tool.from_function(
        name="General Chat",
        description="For general movie chat not covered by other tools",
        func=movie_chat.invoke,
    )
]

# Create chat history callback
def get_memory(session_id):
    return Neo4jChatMessageHistory(session_id=session_id, graph=graph)

# Create the agent
agent_prompt = hub.pull("hwchase17/react-chat")
agent = create_react_agent(llm, tools, agent_prompt)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True
)

chat_agent = RunnableWithMessageHistory(
    agent_executor,
    get_memory,
    input_messages_key="input",
    history_messages_key="chat_history",
)

# Create a handler to call the agent
def generate_response(user_input):
    """
    Create a handler that calls the Conversational agent
    and returns a response to be rendered in the UI
    """
    response = chat_agent.invoke(
        {"input": user_input},
        {"configurable": {"session_id": get_session_id()}},
    )

    return response['output']
Calling the new Handler function
You can now update bot.py to call the new generate_response() function by modifying the handle_submit() function.

Open bot.py and import the generate_response() function from agent.py:
from agent import generate_response
Modify the handle_submit() function to call generate_response() and write the response:
# Submit handler
def handle_submit(message):
    """
    Submit handler:

    You will modify this method to talk with an LLM and provide
    context using data from Neo4j.
    """
    # Handle the response
    with st.spinner('Thinking...'):
        # Call the agent
        response = generate_response(message)
        write_message('assistant', response)
View the complete bot.py code
import streamlit as st
from utils import write_message
from agent import generate_response

# Page Config
st.set_page_config("Ebert", page_icon=":movie_camera:")

# Set up Session State
if "messages" not in st.session_state:
    st.session_state.messages = [
        {"role": "assistant", "content": "Hi, I'm the GraphAcademy Chatbot! How can I help you?"},
    ]

# Submit handler
def handle_submit(message):
    """
    Submit handler:

    You will modify this method to talk with an LLM and provide
    context using data from Neo4j.
    """
    # Handle the response
    with st.spinner('Thinking...'):
        # Call the agent
        response = generate_response(message)
        write_message('assistant', response)

# Display messages in Session State
for message in st.session_state.messages:
    write_message(message['role'], message['content'], save=False)

# Handle any user input
if prompt := st.chat_input("What is up?"):
    # Display user message in chat message container
    write_message('user', prompt)

    # Generate a response
    handle_submit(prompt)
Receiving a Response
You now have the start of an intelligent LLM-integrated chatbot.
Run the Streamlit app and test the chatbot:
streamlit run bot.py
Once you have received a response from the LLM, click the button below to mark the challenge as completed.
Summary
In this lesson, you created a conversational agent capable of communicating with an LLM. However, it is a good idea to specify what type of questions the LLM can respond to.
In the next lesson, you will define the agent’s scope and restrict the type of responses it provides.