Conversation Memory

For a chat model to be helpful, it needs to remember what messages have been sent and received. Without the ability to remember, the chat model cannot respond in the context of the conversation. For example, without memory, the conversation may go in circles:
[user] Hi, my name is Martin
[chat model] Hi, nice to meet you Martin
[user] Do you have a name?
[chat model] I am the chat model. Nice to meet you. What is your name?
In the previous lesson, you created a chat model that responds to users' questions about surf conditions. As a reminder, here is the code:
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser

chat_llm = ChatOpenAI(openai_api_key="sk-...")

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a surfer dude, having a conversation about the surf conditions on the beach. Respond using surfer slang.",
        ),
        ("system", "{context}"),
        ("human", "{question}"),
    ]
)

chat_chain = prompt | chat_llm | StrOutputParser()

current_weather = """
{
    "surf": [
        {"beach": "Fistral", "conditions": "6ft waves and offshore winds"},
        {"beach": "Polzeath", "conditions": "Flat and calm"},
        {"beach": "Watergate Bay", "conditions": "3ft waves and onshore winds"}
    ]
}"""

response = chat_chain.invoke(
    {
        "context": current_weather,
        "question": "What is the weather like on Watergate Bay?",
    }
)

print(response)
In this lesson, you will add memory to this program.

LangChain provides several memory components, each suited to different scenarios and storage solutions. You are going to use the in-memory ChatMessageHistory component to temporarily store the conversation history between you and the chat model. In the next lesson, you will use the Neo4j Chat Message History memory component to persist the conversation history in a Neo4j database.
Add History to the Prompt
As each call to the LLM is stateless, you need to include the chat history in every call to the LLM.
You can modify the prompt template to include the chat history as a list of messages using a MessagesPlaceholder object.
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a surfer dude, having a conversation about the surf conditions on the beach. Respond using surfer slang.",
        ),
        ("system", "{context}"),
        MessagesPlaceholder(variable_name="chat_history"),
        ("human", "{question}"),
    ]
)
The chat_history variable will contain the conversation history.
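To see how the placeholder expands, you can format the prompt yourself with an example history. The messages below are illustrative only and are not part of the lesson's program:

from langchain_core.messages import HumanMessage, AIMessage

# Hypothetical history used only to show how the placeholder is filled
example_messages = prompt.format_messages(
    context=current_weather,
    chat_history=[
        HumanMessage(content="Hi, I am at Watergate Bay."),
        AIMessage(content="Stoked you made it to Watergate Bay, dude!"),
    ],
    question="What is the surf like?",
)

for message in example_messages:
    print(f"[{message.type}] {message.content}")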
Chat Message History
To keep the message history, you will need to wrap the chat_chain in a runnable. Specifically, you will use the RunnableWithMessageHistory runnable, which will use the memory component to store and retrieve the conversation history.
First, you will need to create a ChatMessageHistory memory component and a function that the RunnableWithMessageHistory will use to get it.
from langchain_community.chat_message_histories import ChatMessageHistory

memory = ChatMessageHistory()

def get_memory(session_id):
    return memory
The get_memory function will return the ChatMessageHistory memory component. Note that it expects a session_id parameter, which would be used to identify a specific conversation (or session). As there will only be one conversation in memory at a time, you can ignore this parameter.
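The single shared memory works here because there is only one conversation. As a sketch, not part of this lesson's program, supporting multiple concurrent sessions could look like this, with one ChatMessageHistory per session_id:

# Hypothetical variant: one history per session, keyed by session_id
session_histories = {}

def get_memory(session_id):
    if session_id not in session_histories:
        session_histories[session_id] = ChatMessageHistory()
    return session_histories[session_id]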
You can now create a new chain using the RunnableWithMessageHistory, passing the chat_chain and the get_memory function.
from langchain_core.runnables.history import RunnableWithMessageHistory

chat_chain = prompt | chat_llm | StrOutputParser()

chat_with_message_history = RunnableWithMessageHistory(
    chat_chain,
    get_memory,
    input_messages_key="question",
    history_messages_key="chat_history",
)
The input_messages_key parameter identifies the key in the input that holds the user's question, and history_messages_key identifies the prompt variable that will be populated with the chat history.
Invoke the Chat Model
When you call the chat_with_message_history chain, the user's question and the response will be stored in the ChatMessageHistory memory component. Every subsequent call to the chat_with_message_history chain will include the chat history in the prompt.

When you ask the chat model multiple questions, the LLM will use the context from the previous questions when responding.
response = chat_with_message_history.invoke(
    {
        "context": current_weather,
        "question": "Hi, I am at Watergate Bay. What is the surf like?"
    },
    config={"configurable": {"session_id": "none"}}
)
print(response)

response = chat_with_message_history.invoke(
    {
        "context": current_weather,
        "question": "Where am I?"
    },
    config={"configurable": {"session_id": "none"}}
)
print(response)
[user] Hi, I am at Watergate Bay. What is the surf like?
[chat model] Dude, stoked you're at Watergate Bay! The surf is lookin' pretty chill, about 3ft waves rollin' in. But watch out for those onshore winds, they might mess with your flow.
[user] Where am I?
[chat model] You're at Watergate Bay, dude!
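If you want to confirm what has been stored, one way is to print the messages held by the memory component:

# Inspect the stored conversation history
for message in memory.messages:
    print(f"[{message.type}] {message.content}")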
To invoke the chat model, you need to specify a session_id in the config, which will be passed to the get_memory function. As there is no need to store multiple conversations in memory, you can use the same session_id for all conversations. You will use the session_id in the next lesson when storing the conversation history in Neo4j.
Try creating a simple loop and asking the chat model a few questions:
while True:
    question = input("> ")

    response = chat_with_message_history.invoke(
        {
            "context": current_weather,
            "question": question,
        },
        config={
            "configurable": {"session_id": "none"}
        }
    )

    print(response)
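As written, the loop runs until you interrupt the program (for example, with Ctrl-C). As an optional tweak that is not part of the lesson's code, you could let the user type exit to end the conversation:

while True:
    question = input("> ")

    # Optional tweak: stop the loop when the user types 'exit'
    if question.lower() == "exit":
        break

    response = chat_with_message_history.invoke(
        {
            "context": current_weather,
            "question": question,
        },
        config={"configurable": {"session_id": "none"}}
    )

    print(response)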
Here is the complete code:
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.schema import StrOutputParser
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

chat_llm = ChatOpenAI(openai_api_key="sk-...")

# The prompt includes a placeholder for the conversation history
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a surfer dude, having a conversation about the surf conditions on the beach. Respond using surfer slang.",
        ),
        ("system", "{context}"),
        MessagesPlaceholder(variable_name="chat_history"),
        ("human", "{question}"),
    ]
)

# A single in-memory store holds the conversation history
memory = ChatMessageHistory()

def get_memory(session_id):
    return memory

chat_chain = prompt | chat_llm | StrOutputParser()

# Wrap the chain so the history is stored and retrieved on every call
chat_with_message_history = RunnableWithMessageHistory(
    chat_chain,
    get_memory,
    input_messages_key="question",
    history_messages_key="chat_history",
)

current_weather = """
{
    "surf": [
        {"beach": "Fistral", "conditions": "6ft waves and offshore winds"},
        {"beach": "Polzeath", "conditions": "Flat and calm"},
        {"beach": "Watergate Bay", "conditions": "3ft waves and onshore winds"}
    ]
}"""

while True:
    question = input("> ")

    response = chat_with_message_history.invoke(
        {
            "context": current_weather,
            "question": question,
        },
        config={
            "configurable": {"session_id": "none"}
        }
    )

    print(response)
Check Your Understanding
Chains
True or False - Individual calls to an LLM are stateless. The LLM retains no information about the previous call.
- ✓ True
- ❏ False
Hint
Without memory, chat models can't remember the previous conversation.
Solution
The statement is True. Individual calls to an LLM are stateless. You need to give a chat model context about previous calls.
Lesson Summary
In this lesson, you learned how to use the ChatMessageHistory memory component to store the conversation history between you and the LLM.

In the next lesson, you will learn how to use LangChain to interact with Neo4j and store the conversation history in a graph database.