In the Neo4j & LLM Fundamentals course, we used chains provided by LangChain to communicate with the LLM. This lesson will teach you how to create custom chains using LangChain Expression Language.
What is LCEL?
LangChain Expression Language, abbreviated to LCEL, is a declarative method for composing chains. LCEL provides an expressive syntax capable of handling everything from a simple prompt-to-LLM chain to complex combinations of steps.
LCEL provides the following benefits:

- Streaming support
- Asynchronous calls
- Optimized parallel execution

You can read more about LCEL in the LangChain documentation.
An example chain
In the Chains lesson in Neo4j & LLM Fundamentals, you learned about the LLMChain.

The LLMChain is an example of a simple chain that, when invoked, takes a user input, substitutes it into the prompt, passes the prompt to an LLM, and parses the result.
The LLM chain code can be significantly simplified. The chain should consist of:

- A PromptTemplate containing instructions and placeholders.
- An LLM to act on the prompt.
- An output parser to coerce the response into the correct format.
The Prompt
The prompt in the lesson instructs the LLM to act as a Cockney fruit and vegetable seller and provide information about fruit.
You can use the static fromTemplate() method to construct a new PromptTemplate.
import { PromptTemplate } from "@langchain/core/prompts";
const prompt = PromptTemplate.fromTemplate(`
You are a cockney fruit and vegetable seller.
Your role is to assist your customer with their fruit and vegetable needs.
Respond using cockney rhyming slang.
Tell me about the following fruit: {fruit}
`);
The LLM
The prompt will be passed to an LLM, in this case, the ChatOpenAI model.
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
openAIApiKey: "sk-...",
});
Creating the Chain
In LangChain.js, chains are instances of the RunnableSequence class.

To create a new chain, call the RunnableSequence.from() method, passing an array of steps.
import { RunnableSequence } from "@langchain/core/runnables";

const chain = RunnableSequence.from([prompt, llm, parser]);
Invoke the chain
The RunnableSequence instance has an invoke() method. The input that this function expects depends on the template variables contained in the prompt.

Because the prompt expects {fruit} as an input, you call the .invoke() method with an object containing a fruit key.
const response = await chain.invoke({ fruit: "pineapple" });
console.log(response);
Type Safety
You can ensure type safety in your chains by defining input and output types on the .from() method:

RunnableSequence.from<InputType, OutputType>
Check Your Understanding
Chains in LangChain.js
What method should you run to create a chain in LangChain.js?

- ❏ langchain.core.runnableSequence()
- ❏ run.Sequence()
- ✓ RunnableSequence.from()
- ❏ langchainCore.runnable.sequence()
Hint
Chains are runnable sequences constructed from arrays of individual steps.
Solution
The answer is RunnableSequence.from().
Invoking a sequence
Use the dropdown to complete the following code sample for invoking a runnable sequence.
const prompt = PromptTemplate.fromTemplate("What is the capital of {country}?")
const chain = RunnableSequence.from<{country: string}, string>([ prompt, llm, parser ])
const output = /*select:invoke({country: "Sweden"})*/
- ❏ langchain.invoke(chain, {country: 'Sweden'})
- ❏ chain({"country": "Sweden"})
- ✓ chain.invoke({"country": "Sweden"})
- ❏ chain.invoke("Sweden")
Hint
When invoking a simple chain like this, you must pass an object with key(s) that represent the placeholders in the prompt template.
Solution
The answer is chain.invoke({"country": "Sweden"}).
Lesson Summary
In this lesson, you learned how to combine an array of steps into a single RunnableSequence.
In the next lesson, you will use this knowledge to create a chain that will generate an answer based on a given context, a technique known as Retrieval Augmented Generation (RAG).