Now that you have a set of tools, you will need an agent to execute them.
The modules/agent/agent.ts file contains an async initAgent() function. It creates the agent that the route handler calls with an input and a sessionId, expecting a string to be returned.
export default async function initAgent(
llm: BaseChatModel,
embeddings: Embeddings,
graph: Neo4jGraph
) {
// TODO: Initiate tools
// const tools = ...
// TODO: Pull the prompt from the hub
// const prompt = ...
// TODO: Create an agent
// const agent = ...
// TODO: Create an agent executor
// const executor = ...
// TODO: Create a rephrase question chain
// const rephraseQuestionChain = ...
// TODO: Return a runnable passthrough
// return ...
}
The function should return a runnable sequence that:
- Uses the conversation history to rephrase the input into a standalone question
- Passes the rephrased question to an agent executor
- Returns the output as a string
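Once implemented, the returned runnable is invoked with an input object, with the sessionId supplied through the configurable options. The following sketch shows how a caller might use it, assuming the llm, embeddings, and graph objects have already been initialized; the question and session ID are placeholders:
// Build the agent once, then invoke it for each request
const chatAgent = await initAgent(llm, embeddings, graph);
// The sessionId travels in the configurable options so the chain
// can load the conversation history for that session
const response = await chatAgent.invoke(
  { input: "Recommend me a movie about ghosts" },
  { configurable: { sessionId: "example-session" } }
);
console.log(response); // a plain string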
Creating a new Agent
First, inside the initAgent() function, use the initTools() function from the previous lesson to create an array of tools for the agent to use.
const tools = await initTools(llm, embeddings, graph);
Next, the agent will need a set of instructions to follow when processing the request.
You can use the pull() function from langchain/hub to pull a pre-written agent prompt from the LangChain Hub.
In this case, you can use the hwchase17/openai-functions-agent prompt, a simple prompt designed to work with OpenAI Functions agents.
const prompt = await pull<ChatPromptTemplate>(
"hwchase17/openai-functions-agent"
);
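For reference, the hub prompt is roughly equivalent to defining a prompt locally along the following lines. This is only a sketch of its structure; the version pulled from the Hub is the source of truth and may differ in wording:
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
// Approximate structure of hwchase17/openai-functions-agent:
// a system message, the chat history (optional in the hub version),
// the user input, and a placeholder for the agent's intermediate tool calls
const localPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);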
The llm, tools, and prompt arguments can be passed to the createOpenAIFunctionsAgent() function to create a new Agent instance.
const agent = await createOpenAIFunctionsAgent({
llm,
tools,
prompt,
});
OpenAI Functions Agent
The GPT-3.5-turbo and GPT-4 models are fine-tuned to select the appropriate tool from a list based on each tool's metadata. As such, the OpenAI Functions Agent is an excellent choice for an agent with many tools or complex RAG requirements.
You can view a list of available agents in the LangChain documentation.
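To illustrate that metadata, a tool is essentially a function paired with a name and description that the model can reason about. The tool below is purely hypothetical; your actual tools come from initTools(), and the name, description, and query here are invented for illustration:
import { DynamicTool } from "@langchain/core/tools";
// Hypothetical tool: the name and description are the metadata the model
// uses to decide whether this tool fits the user's question.
// Assumes the Neo4jGraph instance passed into initAgent() is in scope.
const movieCountTool = new DynamicTool({
  name: "movie-count",
  description: "Useful for counting the total number of movies in the database.",
  func: async () => {
    const res = await graph.query("MATCH (m:Movie) RETURN count(m) AS count");
    return JSON.stringify(res);
  },
});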
Agents are invoked through an instance of an Agent Executor.
Use the agent and tools variables to create a new AgentExecutor instance.
const executor = new AgentExecutor({
agent,
tools,
verbose: true, // Verbose output logs the agent's _thinking_
});
Rephrasing the question
The input must be rephrased into a standalone question before it is passed to the agent executor. Luckily, you built this functionality in the Conversation History module.
Use the initRephraseChain() function to create a new instance of the Rephrase Question Chain.
const rephraseQuestionChain = await initRephraseChain(llm);
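If you need a reminder of how such a chain works, the sketch below shows the general shape: a prompt that takes the history and input, piped into the LLM and then a string output parser. This is illustrative only and is not necessarily the implementation you built in the Conversation History module; the function name and prompt wording are invented:
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { BaseChatModel } from "langchain/chat_models/base";
// Illustrative sketch: given the conversation history and the latest
// input, produce a standalone question the agent can act on.
export function initRephraseChainSketch(llm: BaseChatModel) {
  const rephrasePrompt = ChatPromptTemplate.fromTemplate(`
Given the conversation history and a follow-up input, rephrase the input
as a standalone question that can be understood without the history.

History: {history}
Question: {input}
`);
  return rephrasePrompt.pipe(llm).pipe(new StringOutputParser());
}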
Runnable Sequence
Now you have everything you need to build your sequence. It is time to bring it all together.
Get History
Use the RunnablePassthrough.assign() method to get any conversation history from the database.
return (
RunnablePassthrough.assign<{ input: string; sessionId: string }, any>({
// Get Message History
history: async (_input, options) => {
const history = await getHistory(
options?.config.configurable.sessionId
);
return history;
},
})
Configurable Options
The second parameter of the history function provides a configuration object that the chain can access and use throughout its execution. Here, the sessionId is extracted from the config.configurable object passed in that second argument.
Rephrase the question
The chain input now has both input and history keys, the expected inputs of the rephraseQuestionChain.
Call .assign() to assign the rephrased question to the rephrasedQuestion key.
.assign({
// Use History to rephrase the question
rephrasedQuestion: (input: RephraseQuestionInput, config: any) =>
rephraseQuestionChain.invoke(input, config),
})
Execute the agent
The agent now has all the information needed to decide which tool to use and generate an output.
Use the .pipe() method to pass the entire input and configuration to the executor.
// Pass to the executor
.pipe(executor)
Finally, the agent will return a structured output, including an output field.
Use the .pick() function to return the output value.
.pick("output")
);
Completed function
If you have followed the steps, your initAgent() implementation should resemble the following.
export default async function initAgent(
llm: BaseChatModel,
embeddings: Embeddings,
graph: Neo4jGraph
) {
const tools = await initTools(llm, embeddings, graph);
const prompt = await pull<ChatPromptTemplate>(
"hwchase17/openai-functions-agent"
);
const agent = await createOpenAIFunctionsAgent({
llm,
tools,
prompt,
});
const executor = new AgentExecutor({
agent,
tools,
verbose: true, // Verbose output logs the agent's _thinking_
});
const rephraseQuestionChain = await initRephraseChain(llm);
return (
RunnablePassthrough.assign<{ input: string; sessionId: string }, any>({
// Get Message History
history: async (_input, options) => {
const history = await getHistory(
options?.config.configurable.sessionId
);
return history;
},
})
.assign({
// Use History to rephrase the question
rephrasedQuestion: (input: RephraseQuestionInput, config: any) =>
rephraseQuestionChain.invoke(input, config),
})
// Pass to the executor
.pipe(executor)
.pick("output")
);
}
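For reference, the completed file also needs imports along the following lines. The LangChain import paths may vary slightly between versions, and the commented local imports are assumptions about this project's layout rather than exact paths:
import { BaseChatModel } from "langchain/chat_models/base";
import { Embeddings } from "langchain/embeddings/base";
import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";
import { pull } from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";
import { RunnablePassthrough } from "@langchain/core/runnables";
// ...plus the local helpers used above, for example (paths and export styles assumed):
// import initTools from "./tools";
// import initRephraseChain, { RephraseQuestionInput } from "./rephrase-question.chain";
// import { getHistory } from "./history";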
Testing your changes
If you have followed the instructions, you should be able to run the following unit test to verify the response using the npm run test command.
npm run test agent.test.ts
View Unit Test
import initAgent from "./agent";
import { config } from "dotenv";
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { Embeddings } from "langchain/embeddings/base";
import { BaseChatModel } from "langchain/chat_models/base";
import { Runnable } from "@langchain/core/runnables";
import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";
describe("Langchain Agent", () => {
let llm: BaseChatModel;
let embeddings: Embeddings;
let graph: Neo4jGraph;
let executor: Runnable;
beforeAll(async () => {
config({ path: ".env.local" });
graph = await Neo4jGraph.initialize({
url: process.env.NEO4J_URI as string,
username: process.env.NEO4J_USERNAME as string,
password: process.env.NEO4J_PASSWORD as string,
database: process.env.NEO4J_DATABASE as string | undefined,
});
llm = new ChatOpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
modelName: "gpt-3.5-turbo",
temperature: 0,
configuration: {
baseURL: process.env.OPENAI_API_BASE,
},
});
embeddings = new OpenAIEmbeddings({
openAIApiKey: process.env.OPENAI_API_KEY as string,
configuration: {
baseURL: process.env.OPENAI_API_BASE,
},
});
executor = await initAgent(llm, embeddings, graph);
});
afterAll(() => graph.close());
describe("Vector Retrieval", () => {
it("should perform RAG using the neo4j vector retriever", async () => {
const sessionId = "agent-rag-1";
const input = "Recommend me a movie about ghosts";
const output = await executor.invoke(
{
input,
},
{
configurable: {
sessionId,
},
}
);
// Check database
const sessionRes = await graph.query(
`
MATCH (s:Session {id: $sessionId })-[:LAST_RESPONSE]->(r)
RETURN r.input AS input, r.output AS output, r.source AS source,
count { (r)-[:CONTEXT]->() } AS context,
[ (r)-[:CONTEXT]->(m) | m.title ] AS movies
`,
{ sessionId }
);
expect(sessionRes).toBeDefined();
if (sessionRes) {
expect(sessionRes.length).toBe(1);
expect(sessionRes[0].input).toBe(input);
let found = false;
for (const movie of sessionRes[0].movies) {
if (output.toLowerCase().includes(movie.toLowerCase())) {
found = true;
}
}
expect(found).toBe(true);
}
});
});
});
Verifying the Test
If every test in the test suite has passed, a new (:Session) node with a .id property of agent-rag-1 will have been created in your database.
The session should have at least one (:Response) node, linked with a :CONTEXT relationship to a movie with the title Neo4j - Into the Graph.
Click the Check Database button below to verify the tests have succeeded.
Solution
You can compare your code with the solution in src/solutions/modules/agent/agent.ts and double-check that the conditions have been met in the test suite.
You can also run the following Cypher statement to double-check that the expected session, response, and context nodes have been created in your database.
MATCH (s:Session {id: 'agent-rag-1'})
RETURN s, [
(s)-[:HAS_RESPONSE]->(r) | [r,
[ (r) -[:CONTEXT]->(c) | c ]
]
]
Once you have verified your code and re-run the tests, click Try again… to complete the challenge.
Summary
In this challenge, you wrote the code to create a chain that rephrases a user input into a standalone question and passes it on to an agent instance that then acts on the question.
In the next lesson, you will integrate the agent into the front end.