Prompt chaining is revolutionizing how developers and businesses automate complex, multi-step processes with AI. By linking the outputs of one AI prompt to the inputs of the next, you can build robust, dynamic workflows that handle everything from data extraction to content generation. In this deep dive, you’ll learn how to design, implement, and troubleshoot a prompt chaining workflow using Python and OpenAI’s GPT models, with clear, reproducible steps and actionable code.
For a broader context on prompt engineering and agentic workflows, see our parent pillar article on prompt engineering for agentic AI workflows.
Prerequisites
- Python 3.9+ (tested with 3.10)
- OpenAI Python SDK (v1.2.3 or later)
- Basic knowledge of Python scripting
- OpenAI API key (get one at platform.openai.com)
- Familiarity with JSON data structures
- Optional: VS Code or another code editor
1. Install Required Tools and Set Up Your Environment
- Set up a Python virtual environment:

  ```bash
  python3 -m venv prompt-chaining-env
  source prompt-chaining-env/bin/activate
  ```

- Install the OpenAI SDK:

  ```bash
  pip install openai==1.2.3
  ```

- Verify the installation:

  ```bash
  pip list | grep openai
  ```

  Expected output:

  ```
  openai 1.2.3
  ```

- Set your OpenAI API key as an environment variable:

  ```bash
  export OPENAI_API_KEY="sk-..."
  ```

  Tip: Store this in `.env` and use `python-dotenv` for security.
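If you'd rather not export the key by hand, `python-dotenv`'s `load_dotenv()` reads a `.env` file for you. As a rough illustration of what that call does, here is a minimal stdlib-only sketch (the helper name `load_env_file` is ours, not part of any library):

```python
import os

def load_env_file(path=".env"):
    # Minimal stand-in for python-dotenv's load_dotenv():
    # read KEY=VALUE lines and export them, without overriding
    # variables already set in the environment.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

In practice, prefer the real library: `from dotenv import load_dotenv; load_dotenv()` handles quoting, comments, and export syntax more robustly.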
2. Understand the Prompt Chaining Workflow
In prompt chaining, each AI prompt solves a sub-task, and its output feeds into the next step. For this guide, we’ll automate a workflow that:
- Extracts key points from a customer support email
- Classifies the sentiment
- Generates a summary and suggested response
The workflow visually:
Diagram: Each box is a prompt; arrows show data flow.
For more on chaining patterns, see Prompt Chaining Patterns: How to Design Robust Multi-Step AI Workflows.
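Before writing any API code, it helps to see the chain as plain function composition: each step is just a function whose output becomes the next function's input. A minimal sketch, with stub lambdas standing in for the real model calls (all names here are illustrative):

```python
def run_chain(initial_input, steps):
    # Feed each step the previous step's output.
    data = initial_input
    for step in steps:
        data = step(data)
    return data

# Stubs standing in for the three prompts in this guide:
extract = lambda email: f"key points of: {email}"
classify = lambda points: (points, "Negative")
reply = lambda pair: f"summary and reply for {pair[0]} ({pair[1]})"

print(run_chain("customer email", [extract, classify, reply]))
```

The real workflow in step 6 follows exactly this shape, with each stub replaced by a function that calls the model.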
3. Implement Step 1: Extract Key Points
- Create `extract_key_points.py`:

  ```python
  import os
  import openai

  openai.api_key = os.getenv("OPENAI_API_KEY")

  def extract_key_points(email_text):
      prompt = (
          "Extract the three most important key points from the following customer support email:\n"
          f"Email: \"{email_text}\"\n"
          "Key Points:"
      )
      response = openai.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[{"role": "user", "content": prompt}],
          max_tokens=150,
          temperature=0.3,
      )
      return response.choices[0].message.content.strip()

  if __name__ == "__main__":
      email = '''Hi, I ordered a laptop last week but haven't received any shipping info. Also, the payment was deducted twice from my card. Please help!'''
      print(extract_key_points(email))
  ```

- Run the script:

  ```bash
  python extract_key_points.py
  ```
Expected output:

```
1. Customer has not received shipping information for a recent laptop order.
2. Payment was deducted twice from their card.
3. Customer is requesting assistance.
```
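Because the next step in the chain consumes this output, it is worth validating it programmatically rather than trusting the model's formatting. A small helper (ours, not part of the tutorial's scripts) that splits the numbered list into individual points:

```python
import re

def parse_key_points(text):
    # Extract the text after each "N." item in a numbered list.
    return [m.group(1).strip()
            for m in re.finditer(r"^\s*\d+\.\s*(.+)$", text, re.MULTILINE)]
```

If `parse_key_points` returns fewer than three items, you know the model drifted from the requested format and can retry or tighten the prompt.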
4. Implement Step 2: Sentiment Classification
- Create `classify_sentiment.py`:

  ```python
  import os
  import openai

  openai.api_key = os.getenv("OPENAI_API_KEY")

  def classify_sentiment(key_points):
      prompt = (
          "Given these customer concerns, classify the overall sentiment as 'Positive', 'Neutral', or 'Negative':\n"
          f"{key_points}\n"
          "Sentiment:"
      )
      response = openai.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[{"role": "user", "content": prompt}],
          max_tokens=10,
          temperature=0,
      )
      return response.choices[0].message.content.strip()

  if __name__ == "__main__":
      key_points = '''
  1. Customer has not received shipping information for a recent laptop order.
  2. Payment was deducted twice from their card.
  3. Customer is requesting assistance.
  '''
      print(classify_sentiment(key_points))
  ```

- Run the script:

  ```bash
  python classify_sentiment.py
  ```
Expected output:

```
Negative
```
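With temperature 0 the label is usually clean, but models occasionally add punctuation, quotes, or different casing. A defensive normalizer (a hypothetical helper of ours) keeps the chain from silently passing along a malformed label:

```python
VALID_LABELS = {"Positive", "Neutral", "Negative"}

def normalize_sentiment(raw):
    # Strip surrounding whitespace, quotes, and trailing periods,
    # then normalize casing before validating against the allowed set.
    cleaned = raw.strip().strip('."\'').capitalize()
    if cleaned not in VALID_LABELS:
        raise ValueError(f"Unexpected sentiment label: {raw!r}")
    return cleaned
```

Raising on an unexpected label is deliberate: failing loudly at the step that broke is far easier to debug than a bad value propagating through the rest of the chain.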
5. Implement Step 3: Generate Summary and Suggested Response
- Create `generate_reply.py`:

  ```python
  import os
  import openai

  openai.api_key = os.getenv("OPENAI_API_KEY")

  def generate_summary_and_reply(key_points, sentiment):
      prompt = (
          "Based on these key points and the overall sentiment, do two things:\n"
          "1. Write a one-sentence summary of the customer's issue.\n"
          "2. Suggest a polite, helpful reply.\n"
          f"Key Points:\n{key_points}\n"
          f"Sentiment: {sentiment}\n"
          "Output format:\nSummary: ...\nSuggested Reply: ..."
      )
      response = openai.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[{"role": "user", "content": prompt}],
          max_tokens=200,
          temperature=0.3,
      )
      return response.choices[0].message.content.strip()

  if __name__ == "__main__":
      key_points = '''
  1. Customer has not received shipping information for a recent laptop order.
  2. Payment was deducted twice from their card.
  3. Customer is requesting assistance.
  '''
      sentiment = "Negative"
      print(generate_summary_and_reply(key_points, sentiment))
  ```

- Run the script:

  ```bash
  python generate_reply.py
  ```
Expected output:

```
Summary: The customer is concerned about not receiving shipping information and being double-charged for a recent laptop order.
Suggested Reply: We're very sorry for the inconvenience. We'll investigate your order and the payment issue immediately and update you with shipping details as soon as possible. Thank you for your patience.
```
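Downstream systems (a ticketing tool, say) usually need the summary and the reply as separate fields. A small parser for the format the prompt requests, assuming the reply fits on one line as in the output above (the helper is ours):

```python
def parse_summary_and_reply(text):
    # Split the "Summary: ... / Suggested Reply: ..." format
    # into a dict with one key per field.
    result = {"summary": "", "reply": ""}
    for line in text.splitlines():
        if line.startswith("Summary:"):
            result["summary"] = line[len("Summary:"):].strip()
        elif line.startswith("Suggested Reply:"):
            result["reply"] = line[len("Suggested Reply:"):].strip()
    return result
```

If the reply may span multiple lines, you would accumulate lines after the `Suggested Reply:` marker instead; this sketch covers the single-line case only.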
6. Chain the Prompts into an End-to-End Workflow
- Create `chained_workflow.py`:

  ```python
  import os
  import openai

  openai.api_key = os.getenv("OPENAI_API_KEY")

  def extract_key_points(email_text):
      prompt = (
          "Extract the three most important key points from the following customer support email:\n"
          f"Email: \"{email_text}\"\n"
          "Key Points:"
      )
      response = openai.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[{"role": "user", "content": prompt}],
          max_tokens=150,
          temperature=0.3,
      )
      return response.choices[0].message.content.strip()

  def classify_sentiment(key_points):
      prompt = (
          "Given these customer concerns, classify the overall sentiment as 'Positive', 'Neutral', or 'Negative':\n"
          f"{key_points}\n"
          "Sentiment:"
      )
      response = openai.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[{"role": "user", "content": prompt}],
          max_tokens=10,
          temperature=0,
      )
      return response.choices[0].message.content.strip()

  def generate_summary_and_reply(key_points, sentiment):
      prompt = (
          "Based on these key points and the overall sentiment, do two things:\n"
          "1. Write a one-sentence summary of the customer's issue.\n"
          "2. Suggest a polite, helpful reply.\n"
          f"Key Points:\n{key_points}\n"
          f"Sentiment: {sentiment}\n"
          "Output format:\nSummary: ...\nSuggested Reply: ..."
      )
      response = openai.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[{"role": "user", "content": prompt}],
          max_tokens=200,
          temperature=0.3,
      )
      return response.choices[0].message.content.strip()

  if __name__ == "__main__":
      email = '''Hi, I ordered a laptop last week but haven't received any shipping info. Also, the payment was deducted twice from my card. Please help!'''
      key_points = extract_key_points(email)
      print("Key Points:\n", key_points)
      sentiment = classify_sentiment(key_points)
      print("Sentiment:", sentiment)
      summary_and_reply = generate_summary_and_reply(key_points, sentiment)
      print("Summary & Suggested Reply:\n", summary_and_reply)
  ```

- Run the chained workflow:

  ```bash
  python chained_workflow.py
  ```
Expected output:

```
Key Points:
1. Customer has not received shipping information for a recent laptop order.
2. Payment was deducted twice from their card.
3. Customer is requesting assistance.
Sentiment: Negative
Summary & Suggested Reply:
Summary: The customer is concerned about not receiving shipping information and being double-charged for a recent laptop order.
Suggested Reply: We're very sorry for the inconvenience. We'll investigate your order and the payment issue immediately and update you with shipping details as soon as possible. Thank you for your patience.
```

Screenshot Description: Terminal showing each step's output, confirming the chain is working as intended.
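When a chain misbehaves, the fastest path to the problem is seeing every intermediate result. One way to get that for free is to wrap each step in a logging-and-validation decorator; the sketch below is our own addition, not part of the scripts above:

```python
def checked_step(name, fn, validate=lambda out: bool(out.strip())):
    # Wrap a chain step: run it, print the intermediate result,
    # and fail fast if validation rejects the output.
    def wrapper(data):
        out = fn(data)
        print(f"[{name}] -> {out!r}")
        if not validate(out):
            raise ValueError(f"Step {name!r} produced invalid output: {out!r}")
        return out
    return wrapper
```

For example, `key_points = checked_step("extract", extract_key_points)(email)` logs the extraction output and raises immediately if the model returned an empty string, instead of letting the blank value confuse the sentiment step.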
7. Visualize the Workflow (Optional)
- Install `graphviz` for visualization:

  ```bash
  pip install graphviz
  ```

  Note: the Python package drives the Graphviz `dot` binary, which must be installed separately (e.g. via your OS package manager).

- Create `visualize_workflow.py`:

  ```python
  from graphviz import Digraph

  dot = Digraph(comment='Prompt Chaining Workflow')
  dot.node('A', 'Customer Email')
  dot.node('B', 'Extract Key Points')
  dot.node('C', 'Classify Sentiment')
  dot.node('D', 'Generate Summary & Reply')
  # Key points feed both the sentiment step and the final reply step.
  dot.edges(['AB', 'BC', 'BD', 'CD'])
  dot.render('prompt_chaining_workflow', view=True)
  ```

- Run to generate a workflow diagram:

  ```bash
  python visualize_workflow.py
  ```
Expected: Opens a PDF showing the prompt chaining flow.

Screenshot Description: A diagram with boxes labeled "Customer Email", "Extract Key Points", "Classify Sentiment", and "Generate Summary & Reply", connected in sequence.
Common Issues & Troubleshooting
- OpenAI API errors: If you see `openai.AuthenticationError` (the v1 SDK raises exceptions from the top-level `openai` module, not `openai.error`), double-check your `OPENAI_API_KEY` environment variable.
- Output format inconsistencies: If the model returns unexpected formats, add explicit output instructions in your prompts (e.g., "Output format: Summary: ... Suggested Reply: ...").
- Rate limits: If you hit `429 Too Many Requests`, slow down your script or upgrade your OpenAI plan.
- Chaining errors: If one step's output is missing or malformed, print intermediate results and add validation before passing to the next prompt.
- Dependency issues: Ensure all required packages are installed in your virtual environment.
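For the rate-limit case specifically, retrying with exponential backoff is the standard remedy. A minimal sketch (the helper and its defaults are ours; in production you would catch the SDK's specific rate-limit exception rather than bare `Exception`):

```python
import random
import time

def with_retries(fn, max_attempts=5, base_delay=1.0):
    # Retry fn() with exponential backoff plus a little jitter,
    # re-raising the last error once attempts are exhausted.
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.random() * 0.1)
```

Wrapping a chain step looks like `with_retries(lambda: classify_sentiment(key_points))`, which smooths over transient 429s without changing the step's logic.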
Next Steps
- Experiment with more complex chains (e.g., add entity extraction or escalation routing).
- Try different models (e.g., GPT-4 for higher accuracy).
- Integrate with webhooks or workflow automation tools (like Zapier or Airflow).
- For advanced multi-step design patterns, see Prompt Chaining Patterns: How to Design Robust Multi-Step AI Workflows.
- For broader strategies on agentic AI and role-based prompt engineering, refer to Prompt Engineering for Agentic AI Workflows: Role Assignments, Tools, and Typical Mistakes.
By mastering prompt chaining, you’re on your way to building powerful, automated AI workflows that can transform your business or project. Keep experimenting, and check back for more AI Playbooks from Tech Daily Shot!