Prompt chaining is a powerful technique for building advanced AI workflows. By linking the output of one AI prompt to the input of another, you can perform complex, multi-step reasoning, automate content creation, and build robust data pipelines. This tutorial provides a practical, hands-on guide to prompt chaining using Python, the OpenAI API, and the LangChain library.
Prerequisites
- Python: Version 3.8 or newer
- OpenAI API Key: create one from your OpenAI account dashboard
- Basic Python knowledge
- Command line/terminal familiarity
- pip (Python package installer)
- LangChain library: Version 0.0.320 or newer
Step 1: Set Up Your Project Environment
- Create and activate a virtual environment (optional but recommended):

```bash
python -m venv ai-prompt-chaining
cd ai-prompt-chaining
source bin/activate  # On Windows use: .\Scripts\activate
```
- Install required packages:

```bash
pip install "openai<1.0" langchain python-dotenv
```

Note: this tutorial uses the pre-1.0 `openai` SDK interface (`openai.ChatCompletion`), which was removed in version 1.0, so pin the package below 1.0.
- Set your OpenAI API key as an environment variable:
  - Create a file named `.env` in your project directory:

```bash
echo "OPENAI_API_KEY=sk-..." > .env
```

  - Replace `sk-...` with your actual API key.
Step 2: Understand the Prompt Chaining Concept
In prompt chaining, you use the output of one AI prompt as the input for the next. This enables complex, multi-step tasks, such as:
- Summarizing text, then generating questions from the summary
- Extracting entities from a document, then researching each entity
- Translating text, then analyzing sentiment in the translated version
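At its core, a prompt chain is just function composition: each step renders a prompt from the previous step's output and sends it to the model. Here is a minimal sketch of that idea using a stand-in `fake_llm` function so it runs without an API key (a real implementation would call an LLM API instead):

```python
def fake_llm(prompt):
    # Stand-in for a real LLM call; returns a canned response per task.
    if prompt.startswith("Summarize"):
        return "A short summary."
    return "A follow-up result."

def run_chain(steps, initial_input):
    """Feed each step's output into the next step's prompt template."""
    text = initial_input
    for template in steps:
        text = fake_llm(template.format(text=text))
    return text

steps = [
    "Summarize this article:\n{text}",
    "Generate 3 quiz questions from this summary:\n{text}",
]
result = run_chain(steps, "Some long article text...")
print(result)
```

Swapping `fake_llm` for a real API call turns this loop into a working chain; the rest of this tutorial builds exactly that.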
We'll demonstrate prompt chaining by building a workflow that:
- Summarizes a news article
- Generates quiz questions from the summary
- Creates answers to those questions
Step 3: Basic Prompt Chaining with OpenAI API
- Load your API key and set up the OpenAI client:

```python
import os
from dotenv import load_dotenv
import openai

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")
```
- Define a helper function to call the OpenAI API:

```python
def ask_openai(prompt, model="gpt-3.5-turbo", temperature=0.7):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
        max_tokens=512,
    )
    return response.choices[0].message["content"].strip()
```
- Step 1: Summarize a news article

```python
news_article = """
NASA's Mars helicopter Ingenuity has completed its 50th flight on the Red Planet.
The helicopter, which arrived with the Perseverance rover, was originally designed
for just five flights. Ingenuity has far exceeded expectations, capturing aerial
images and assisting the rover in navigation. NASA engineers continue to push the
boundaries of what the small helicopter can achieve in the challenging Martian
environment.
"""

summary_prompt = f"Summarize this article in 3 sentences:\n{news_article}"
summary = ask_openai(summary_prompt)
print("Summary:\n", summary)
```

Expected output (example):

> NASA's Mars helicopter Ingenuity has completed its 50th flight, far surpassing its original five-flight mission. The helicopter has provided valuable aerial images and navigation support for the Perseverance rover. Engineers are continuing to explore the helicopter's capabilities in Mars' harsh environment.
- Step 2: Generate quiz questions from the summary

```python
questions_prompt = f"Based on this summary, generate 3 quiz questions:\n{summary}"
questions = ask_openai(questions_prompt)
print("Quiz Questions:\n", questions)
```

Example output:

> 1. How many flights has NASA's Mars helicopter Ingenuity completed?
> 2. What was the original mission goal for Ingenuity?
> 3. How has Ingenuity assisted the Perseverance rover?
- Step 3: Generate answers to the quiz questions

```python
answers_prompt = (
    f"Provide concise answers to these questions:\n{questions}\n"
    f"Based on the summary:\n{summary}"
)
answers = ask_openai(answers_prompt)
print("Answers:\n", answers)
```

Example output:

> 1. Ingenuity has completed 50 flights.
> 2. The original goal was just five flights.
> 3. Ingenuity has provided aerial images and navigation support.
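The three calls above all follow one pattern: format a prompt from earlier results, call the model, store the output under a name. That pattern can be factored into a small loop. The sketch below assumes a `call_model` function in place of `ask_openai` (stubbed here so the example runs without an API key):

```python
def call_model(prompt):
    # Stub standing in for ask_openai(); swap in a real API call in practice.
    return f"[model output for: {prompt.splitlines()[0]}]"

def run_pipeline(article):
    results = {"article": article}
    # Each entry: (output key, prompt template referencing earlier keys).
    pipeline = [
        ("summary", "Summarize this article in 3 sentences:\n{article}"),
        ("questions", "Based on this summary, generate 3 quiz questions:\n{summary}"),
        ("answers", "Provide concise answers to these questions:\n{questions}\n"
                    "Based on the summary:\n{summary}"),
    ]
    for key, template in pipeline:
        results[key] = call_model(template.format(**results))
    return results

out = run_pipeline("NASA's Ingenuity helicopter...")
print(out["answers"])
```

Keeping intermediate results in a named dict makes each step inspectable and makes it easy to insert, remove, or reorder steps, which is essentially what LangChain formalizes in the next section.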
Step 4: Advanced Prompt Chaining with LangChain
LangChain is a popular Python library for building composable AI workflows. It makes prompt chaining more robust and reusable.
- Import LangChain components:

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain
```
- Initialize the OpenAI LLM:

```python
llm = OpenAI(openai_api_key=os.getenv("OPENAI_API_KEY"), temperature=0.7)
```
- Define prompt templates for each step:

```python
summary_template = PromptTemplate(
    input_variables=["article"],
    template="Summarize this article in 3 sentences:\n{article}",
)
questions_template = PromptTemplate(
    input_variables=["summary"],
    template="Based on this summary, generate 3 quiz questions:\n{summary}",
)
answers_template = PromptTemplate(
    input_variables=["questions", "summary"],
    template="Provide concise answers to these questions:\n{questions}\nBased on the summary:\n{summary}",
)
```
- Create an LLMChain for each step:

```python
summary_chain = LLMChain(llm=llm, prompt=summary_template, output_key="summary")
questions_chain = LLMChain(llm=llm, prompt=questions_template, output_key="questions")
answers_chain = LLMChain(llm=llm, prompt=answers_template, output_key="answers")
```
- Combine the chains into a sequential workflow:

```python
overall_chain = SequentialChain(
    chains=[summary_chain, questions_chain, answers_chain],
    input_variables=["article"],
    output_variables=["summary", "questions", "answers"],
    verbose=True,
)
```
- Run the full prompt chaining workflow:

```python
inputs = {"article": news_article}
results = overall_chain(inputs)
print("Summary:\n", results["summary"])
print("\nQuestions:\n", results["questions"])
print("\nAnswers:\n", results["answers"])
```

The terminal output shows the summary, followed by three quiz questions, and then concise answers, all generated automatically via chained prompts.
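Conceptually, `SequentialChain` does little more than thread a growing dict of named outputs through each chain in order, matching each chain's input variables against keys produced so far. A simplified pure-Python model of that bookkeeping (not LangChain's actual implementation) looks like this:

```python
def run_sequential(chains, inputs, output_variables):
    """chains: list of (input_keys, output_key, fn) tuples.
    Each fn receives its named inputs and returns one output string."""
    state = dict(inputs)
    for input_keys, output_key, fn in chains:
        kwargs = {k: state[k] for k in input_keys}
        state[output_key] = fn(**kwargs)
    return {k: state[k] for k in output_variables}

# Toy "chains" standing in for LLMChain objects:
chains = [
    (["article"], "summary", lambda article: article[:20] + "..."),
    (["summary"], "questions", lambda summary: "Q1? Q2? Q3?"),
]
result = run_sequential(
    chains,
    {"article": "A long news article about Mars."},
    ["summary", "questions"],
)
print(result["questions"])
```

Seeing the chain as a dict being passed through a list of steps explains why `output_key` values must be unique and why later chains can reference any earlier key.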
Step 5: Customizing and Expanding Prompt Chains
Prompt chaining is highly flexible. Here are ways to extend your workflow:
- Add translation: Insert a translation chain between summary and questions.
- Entity extraction: Extract key entities from the summary, then research or fact-check each one.
- Content rewriting: Rephrase or simplify the summary before generating questions.
Example: Adding a translation step (English to Spanish):
```python
from langchain.prompts import PromptTemplate

translate_template = PromptTemplate(
    input_variables=["summary"],
    template="Translate this summary to Spanish:\n{summary}",
)
translate_chain = LLMChain(llm=llm, prompt=translate_template, output_key="spanish_summary")

overall_chain = SequentialChain(
    chains=[summary_chain, translate_chain, questions_chain, answers_chain],
    input_variables=["article"],
    output_variables=["summary", "spanish_summary", "questions", "answers"],
    verbose=True,
)

results = overall_chain({"article": news_article})
print("Spanish Summary:\n", results["spanish_summary"])
```
Common Issues & Troubleshooting
- Issue: API key not found or authentication error
  Solution: Ensure your `.env` file contains `OPENAI_API_KEY` and that you're loading it with `load_dotenv()`.
- Issue: Rate limits or quota exceeded
  Solution: Check your OpenAI account usage. Consider lowering the request frequency or using smaller models (gpt-3.5-turbo).
- Issue: Output is empty or incomplete
  Solution: Increase `max_tokens` in your API call. Ensure prompts are clear and specific.
- Issue: LangChain version incompatibility
  Solution: Run `pip install --upgrade langchain` to get the latest version.
- Issue: Unexpected output or hallucinations
  Solution: Adjust prompt wording, use a lower `temperature`, or add explicit instructions.
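For rate limits and transient network errors, a simple retry wrapper around the model call often suffices. This sketch catches a generic `Exception` and uses a hypothetical `flaky_call` for illustration; in practice you would catch the specific error types your SDK raises:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; propagate the error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky_call():
    # Hypothetical function that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limit")
    return "ok"

print(with_retries(flaky_call, base_delay=0.01))  # prints "ok"
```

Retries matter more in chains than in single calls: one failed step aborts every step after it, so wrapping each call (e.g. `with_retries(lambda: ask_openai(prompt))`) makes the whole pipeline far more resilient.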
Next Steps
- Experiment with longer, more complex chains (e.g., multi-document summarization, fact-checking, or report generation).
- Integrate prompt chains into web apps or chatbots using frameworks like FastAPI or Streamlit.
- Explore prompt chaining with other LLM providers (Anthropic, Google Vertex AI, etc.).
- Read the LangChain documentation for advanced chaining patterns and integrations.
Prompt chaining empowers you to build sophisticated AI workflows with simple code. With practice, you'll be able to automate research, content creation, and data analysis tasks that once demanded hours of manual work. Happy chaining!