Authoritative Answers

In the Answer Generation Chain challenge, you created a chain that generated speculative answers based on similar documents identified using the vector search index.

Due to the nature of semantic search, it might return documents that seem similar but do not address the question. Therefore, the prompt should include specific instructions for handling situations where the context does not provide an answer to the question.

View the original prompt
Speculative Answers
Use only the following context to answer the following question.

Question:
{question}

Context:
{context}

Answer as if you have been asked the original question.
Do not use your pre-trained knowledge.

If you don't know the answer, just say that you don't know, don't try to make up an answer.
Include links and sources where possible.

For answers retrieved from the database, as long as the Cypher statement generated by the LLM is semantically correct, the results will answer the question.

As such, the prompt should reflect that the information has come from an authoritative source.

To complete this challenge, you must:

  1. Create a prompt instructing the LLM to answer the question authoritatively based on the provided context

  2. Pass the formatted prompt to the LLM

  3. Convert the output to a string
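Conceptually, these three steps form a simple pipeline: format the prompt, call the LLM, and parse the output into a string. The dependency-free sketch below illustrates the data flow only; `fakeLLM` is a hypothetical stand-in for the real model and is not part of the course code, and the real chain uses LangChain's RunnableSequence, PromptTemplate, and StringOutputParser instead.

```typescript
// Illustrative pipeline only - not the actual chain implementation.
type ChainInput = { question: string; context: string | undefined };

// Step 1: format the prompt from the input values
const formatPrompt = ({ question, context }: ChainInput): string =>
  `Use the following context to answer the following question.\n\n` +
  `Question:\n${question}\n\nContext:\n${context ?? "I don't know"}`;

// Step 2: call the LLM (fakeLLM is a hypothetical stand-in that
// mimics a chat model returning an object with a content field)
const fakeLLM = (prompt: string): { content: string } => ({
  content: `[model response to a ${prompt.length}-character prompt]`,
});

// Step 3: coerce the model output to a plain string
const parseOutput = (output: { content: string }): string => output.content;

const answer: string = parseOutput(
  fakeLLM(
    formatPrompt({ question: "Who played Woody in Toy Story?", context: "Tom Hanks" })
  )
);
```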

Create a Prompt Template

Modify the initGenerateAuthoritativeAnswerChain() function, in modules/agent/chains/authoritative-answer-generation.chain.ts, to use the PromptTemplate.fromTemplate() method to create a new prompt template. Use the following prompt as the first parameter.

Prompt
Use the following context to answer the following question.
The context is provided by an authoritative source, you must never doubt
it or attempt to use your pre-trained knowledge to correct the answer.

Make the answer sound like it is a response to the question.
Do not mention that you have based your response on the context.

Here is an example:

Question: Who played Woody in Toy Story?
Context: ['role': 'Woody', 'actor': 'Tom Hanks']
Response: Tom Hanks played Woody in Toy Story.

If no context is provided, say that you don't know,
don't try to make up an answer, and do not fall back on your internal knowledge.
If no context is provided you may also ask for clarification.

Include links and sources where possible.

Question:
{question}

Context:
{context}
Open authoritative-answer-generation.chain.ts

Your code should resemble the following:

typescript
Prompt Template
const answerQuestionPrompt = PromptTemplate.fromTemplate(`
  Use the following context to answer the following question.
  The context is provided by an authoritative source, you must never doubt
  it or attempt to use your pre-trained knowledge to correct the answer.

  Make the answer sound like it is a response to the question.
  Do not mention that you have based your response on the context.

  Here is an example:

  Question: Who played Woody in Toy Story?
  Context: ['role': 'Woody', 'actor': 'Tom Hanks']
  Response: Tom Hanks played Woody in Toy Story.

  If no context is provided, say that you don't know,
  don't try to make up an answer, do not fall back to your internal knowledge.
  If no context is provided you may also ask for clarification.

  Include links and sources where possible.

  Question:
  {question}

  Context:
  {context}
`);

Create the Runnable Sequence

Use the RunnableSequence.from() method to create a new chain.

The chain must initially inspect the context value passed to it; if it is empty or undefined, it should inform the LLM that no data was found to answer the question.

Then, format the prompt, pass it to the LLM, and coerce the output into a string.

typescript
return RunnableSequence.from<GenerateAuthoritativeAnswerInput, string>([
  RunnablePassthrough.assign({
    context: ({ context }) =>
      context == undefined || context === "" ? "I don't know" : context,
  }),
  answerQuestionPrompt,
  llm,
  new StringOutputParser(),
]);
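The fallback in the first step can be illustrated in isolation. This sketch is plain TypeScript with no LangChain dependency; it mirrors the check performed inside RunnablePassthrough.assign(), including the loose == comparison, which also catches null:

```typescript
type GenerateAuthoritativeAnswerInput = {
  question: string;
  context: string | undefined;
};

// Same check as in the chain: replace a missing or empty context
// with a sentinel value that the prompt instructs the LLM to treat
// as "no data found".
function normalizeContext(
  input: GenerateAuthoritativeAnswerInput
): GenerateAuthoritativeAnswerInput {
  const { question, context } = input;
  return {
    question,
    context: context == undefined || context === "" ? "I don't know" : context,
  };
}
```

With an empty context, the prompt the LLM receives contains "I don't know", which, combined with the prompt instructions, leads the model to decline rather than hallucinate an answer.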

Working Solution

Click here to reveal the fully-implemented authoritative-answer-generation.chain.ts
typescript
import { StringOutputParser } from "@langchain/core/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { BaseLanguageModel } from "langchain/base_language";

// tag::interface[]
export type GenerateAuthoritativeAnswerInput = {
  question: string;
  context: string | undefined;
};
// end::interface[]

export default function initGenerateAuthoritativeAnswerChain(
  llm: BaseLanguageModel
): RunnableSequence<GenerateAuthoritativeAnswerInput, string> {
  // tag::prompt[]
  const answerQuestionPrompt = PromptTemplate.fromTemplate(`
    Use the following context to answer the following question.
    The context is provided by an authoritative source, you must never doubt
    it or attempt to use your pre-trained knowledge to correct the answer.

    Make the answer sound like it is a response to the question.
    Do not mention that you have based your response on the context.

    Here is an example:

    Question: Who played Woody in Toy Story?
    Context: ['role': 'Woody', 'actor': 'Tom Hanks']
    Response: Tom Hanks played Woody in Toy Story.

    If no context is provided, say that you don't know,
    don't try to make up an answer, do not fall back to your internal knowledge.
    If no context is provided you may also ask for clarification.

    Include links and sources where possible.

    Question:
    {question}

    Context:
    {context}
  `);
  // end::prompt[]

  // tag::sequence[]
  return RunnableSequence.from<GenerateAuthoritativeAnswerInput, string>([
    RunnablePassthrough.assign({
      context: ({ context }) =>
        context == undefined || context === "" ? "I don't know" : context,
    }),
    answerQuestionPrompt,
    llm,
    new StringOutputParser(),
  ]);
  // end::sequence[]
}

/**
 * How to use this chain in your application:

// tag::usage[]
const llm = new OpenAI() // Or the LLM of your choice
const answerChain = initGenerateAuthoritativeAnswerChain(llm)

const output = await answerChain.invoke({
  question: 'Who is the CEO of Neo4j?',
  context: 'Neo4j CEO: Emil Eifrem',
}) // Emil Eifrem is the CEO of Neo4j
// end::usage[]
 */

Using the Chain

Later in the course, you will update the application to use the chain. You could initialize and run the chain with the following code:

typescript
const llm = new OpenAI() // Or the LLM of your choice
const answerChain = initGenerateAuthoritativeAnswerChain(llm)

const output = await answerChain.invoke({
  question: 'Who is the CEO of Neo4j?',
  context: 'Neo4j CEO: Emil Eifrem',
}) // Emil Eifrem is the CEO of Neo4j

Testing your changes

If you have followed the instructions, you should be able to verify your changes by running the following unit test with the npm run test command.

sh
Running the Test
npm run test authoritative-answer-generation.chain.test.ts
View Unit Test
typescript
authoritative-answer-generation.chain.test.ts
import { config } from "dotenv";
import { BaseChatModel } from "langchain/chat_models/base";
import { RunnableSequence } from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import initGenerateAuthoritativeAnswerChain from "./authoritative-answer-generation.chain";

describe("Authoritative Answer Generation Chain", () => {
  let llm: BaseChatModel;
  let chain: RunnableSequence;
  let evalChain: RunnableSequence<any, any>;

  beforeAll(async () => {
    config({ path: ".env.local" });

    llm = new ChatOpenAI({
      openAIApiKey: process.env.OPENAI_API_KEY,
      modelName: "gpt-3.5-turbo",
      temperature: 0,
      configuration: {
        baseURL: process.env.OPENAI_API_BASE,
      },
    });

    chain = initGenerateAuthoritativeAnswerChain(llm);

    // tag::evalchain[]
    evalChain = RunnableSequence.from([
      PromptTemplate.fromTemplate(`
        Does the following response answer the question provided?

        Question: {question}
        Response: {response}

        Respond simply with "yes" or "no".

        If the response answers the question, reply with "yes".
        If the response does not answer the question, reply with "no".
        If the response asks for more information, reply with "no".
      `),
      llm,
      new StringOutputParser(),
    ]);
    // end::evalchain[]
  });

  describe("Simple RAG", () => {
    it("should use context to answer the question", async () => {
      const question = "Who directed the matrix?";
      const response = await chain.invoke({
        question,
        context: '[{"name": "Lana Wachowski"}, {"name": "Lilly Wachowski"}]',
      });

      // tag::eval[]
      const evaluation = await evalChain.invoke({ question, response });

      expect(`${evaluation.toLowerCase()} - ${response}`).toContain("yes");
      // end::eval[]
    });

    it("should refuse to answer if information is not in context", async () => {
      const question = "Who directed the matrix?";
      const response = await chain.invoke({
        question,
        context: "",
      });

      const evaluation = await evalChain.invoke({ question, response });
      expect(`${evaluation.toLowerCase()} - ${response}`).toContain("no");
    });

    it("should answer from the provided context", async () => {
      const role = "The Chief";

      const question = "What was Emil Eifrem's role in Neo4j The Movie?";
      const response = await chain.invoke({
        question,
        context: `{"Role":"${role}"}`,
      });

      expect(response).toContain(role);

      const evaluation = await evalChain.invoke({ question, response });
      expect(`${evaluation.toLowerCase()} - ${response}`).toContain("Chief");
    });
  });
});

Summary

In this lesson, you created a chain to answer a question authoritatively based on the context provided.

In the next lesson, you will build a chain that combines the chains made in this module to generate and execute a Cypher statement before generating an answer.
