A practical collection of hands-on exercises focused on mastering LangChain fundamentals. Learn to build AI applications using completion models, chat models, prompt templates, and chat prompt templates with popular LLM providers.
This repository provides structured learning exercises for developers getting started with LangChain and Generative AI. Each exercise builds upon previous concepts, guiding you from basic LLM interactions to sophisticated prompt engineering and conversational AI systems.
- Python 3.8 or higher
- Basic Python programming knowledge
- API keys from supported providers (Google AI, Groq)
- Clone this repository:

```bash
git clone https://github.com/yourusername/exercises-for-genai-devs.git
cd exercises-for-genai-devs
```

- Install required packages:

```bash
pip install -r requirements.txt
```

- Set up environment variables:

```bash
cp .env.example .env
# Edit .env file and add your API keys
```

- Create your configuration file: create a `config.py` file in the root directory following the pattern shown in the exercises.
Each exercise includes:
- Objective: Clear learning goals
- Concepts: Key concepts covered
- Instructions: Step-by-step guidance
- Starter Code: Boilerplate to begin with
- Expected Output: What you should see when complete
- Challenge: Extension activities for deeper learning
Objective: Create a reusable configuration module for LangChain models
Concepts: Environment management, Model initialization, Code organization
- Create a `config.py` file that loads different LLM providers
- Implement functions to load Google AI and Groq models
- Use environment variables for API key management
- Test your configuration with a simple model call
```python
# config.py
import os
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAI
from langchain_groq import ChatGroq

load_dotenv()

def load_google_llm():
    # TODO: Implement Google completion model loader
    pass

def load_google_chat_model():
    # TODO: Implement Google chat model loader
    pass

def load_groq_chat_model():
    # TODO: Implement Groq chat model loader
    pass
```

- How to organize LangChain configurations
- Environment variable management
- Different model types (completion vs chat)
- API key security practices
Add error handling for missing API keys and create a model selector function.
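The challenge above can be sketched in plain Python. This is a minimal, hypothetical approach: `require_api_key` and `load_model` are illustrative names, and the registry entries stand in for the real `ChatGoogleGenerativeAI` / `ChatGroq` constructors so the sketch stays self-contained.

```python
import os

def require_api_key(var_name: str) -> str:
    """Return the key stored in var_name, or fail fast with a clear message."""
    key = os.environ.get(var_name)
    if not key:
        raise EnvironmentError(
            f"Missing {var_name}. Add it to your .env file before loading models."
        )
    return key

# Registry of loader callables; in a real config.py each lambda would
# construct and return a LangChain model instead of the raw key.
MODEL_LOADERS = {
    "google": lambda: require_api_key("GOOGLE_API_KEY"),
    "groq": lambda: require_api_key("GROQ_API_KEY"),
}

def load_model(provider: str):
    """Model selector: dispatch to the right loader by provider name."""
    try:
        return MODEL_LOADERS[provider]()
    except KeyError:
        raise ValueError(
            f"Unknown provider {provider!r}; choose from {sorted(MODEL_LOADERS)}"
        )
```

Failing fast on a missing key gives a clearer error than the opaque authentication failure you would otherwise see on the first model call.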
Objective: Use a completion model to generate text responses
Concepts: LLM completion, Basic prompting, Streaming responses
- Load a Google completion model using your config
- Create a simple prompt about a topic you're interested in
- Get a response using the `invoke()` method
- Implement streaming to see the response generated in real-time
```python
from config import load_google_llm

# TODO: Load the model
# TODO: Create a prompt
# TODO: Get response using invoke()
# TODO: Implement streaming with stream()
```

- See immediate response with invoke()
- Watch text appear word-by-word with streaming
- Handle the response content properly
Create a simple loop that accepts user prompts and responds using the completion model.
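One way to keep the streaming display logic testable is to separate it from the model. The sketch below assumes a hypothetical `stream_print` helper that accepts any iterable of chunks, so `model.stream(prompt)` can be passed directly, or a plain list of strings when working offline.

```python
def stream_print(chunks, end="\n"):
    """Print streamed chunks as they arrive and return the full text.

    LangChain chunks expose a .content attribute; plain strings are
    used as-is, so the function also works without a live model.
    """
    collected = []
    for part in chunks:
        text = getattr(part, "content", part)
        print(text, end="", flush=True)
        collected.append(text)
    print(end=end)
    return "".join(collected)
```

With a live model this becomes `full_text = stream_print(llm.stream("Explain closures"))`.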
Objective: Build conversational interactions using chat models
Concepts: Chat models, Message formatting, System prompts, Conversation flow
- Load a Google chat model from your config
- Create a message structure with system and user roles
- Use the chat model to respond to questions
- Experiment with different system prompts to change AI behavior
```python
from config import load_google_chat_model

# TODO: Load chat model
# TODO: Create messages array with system and user messages
# TODO: Get response using invoke()
# TODO: Test different system prompts
```

```python
messages = [
    ("system", "You are a helpful assistant specialized in..."),
    ("user", "Your question here"),
]
```

Create different personas by changing the system message and test how responses change.
Objective: Implement real-time streaming for chat interactions
Concepts: Streaming responses, Real-time output, User experience optimization
- Use your chat model from the previous exercise
- Implement streaming using the `stream()` method
- Handle the streaming output to display smoothly
- Add proper formatting and user experience elements
```python
# Streaming pattern
for part in chat_model.stream("Your question"):
    print(part.content, end="", flush=True)
```

- Smooth text appearance
- Proper output formatting
- Handling empty content parts
- User experience considerations
Add a typing indicator or progress visualization while streaming.
Objective: Build a continuous conversation interface
Concepts: While loops, User input handling, Exit conditions, Chat flow
- Create a welcome message and interface
- Implement a while loop for continuous conversation
- Handle user input and model responses
- Add proper exit conditions and cleanup
```python
from config import load_google_chat_model

chat_model = load_google_chat_model()

# TODO: Create welcome interface
# TODO: Initialize messages array with system prompt
# TODO: Implement chat loop
# TODO: Handle exit conditions
```

- Welcome message with clear instructions
- Professional interface design
- Multiple exit commands (exit, quit, bye)
- Proper goodbye message
Add conversation history management to maintain context across multiple exchanges.
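The history-management challenge can be sketched as a small helper that keeps the system prompt while trimming old exchanges. `add_exchange` and `max_turns` are hypothetical names for illustration; the message tuples match the `(role, content)` format used in the earlier exercises.

```python
def add_exchange(history, user_text, ai_text, max_turns=10):
    """Append one user/assistant exchange and return a trimmed history.

    The system prompt is always kept; only the most recent max_turns
    exchanges survive, which bounds the context sent to the model.
    """
    history.append(("user", user_text))
    history.append(("ai", ai_text))
    system = [m for m in history if m[0] == "system"]
    turns = [m for m in history if m[0] != "system"]
    return system + turns[-2 * max_turns:]
```

Inside the chat loop you would reassign `messages = add_exchange(messages, user_input, response.content)` after each turn.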
Objective: Create reusable, dynamic prompts using PromptTemplate
Concepts: Template creation, Variable substitution, Prompt engineering, Code reusability
- Import and use LangChain's PromptTemplate
- Create a template with multiple variables
- Accept user input for template variables
- Format and execute the template with a completion model
```python
from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template(
    "Your template string with {variable1} and {variable2}"
)
```

- Book summarization with title and author
- Recipe generation with ingredients and cuisine
- Learning explanations with topic and difficulty level
Create a template validator that ensures all required variables are provided.
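The validator challenge can be done with the standard library alone: `string.Formatter` parses out the `{placeholder}` names a template expects. The helper names below are hypothetical, but the parsing call is standard Python.

```python
from string import Formatter

def template_variables(template: str) -> set:
    """Extract the {placeholder} names a format-style template expects."""
    return {name for _, name, _, _ in Formatter().parse(template) if name}

def validate_inputs(template: str, inputs: dict) -> None:
    """Raise if any required template variable is missing from inputs."""
    missing = template_variables(template) - set(inputs)
    if missing:
        raise ValueError(f"Missing template variables: {sorted(missing)}")
```

Running `validate_inputs` before `template.format(...)` turns a cryptic `KeyError` at format time into a clear, actionable message.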
Objective: Build sophisticated prompt templates with multiple parameters
Concepts: Complex templating, Input validation, User experience design
- Design a prompt template with 4+ variables
- Create a user-friendly input collection process
- Add loading indicators and professional formatting
- Use streaming for better user experience
- Tutorial generator (topic, audience, length, format)
- Product description creator (product, features, target audience, tone)
- Story writer (genre, characters, setting, length)
- Clear input prompts
- Loading indicators
- Formatted output presentation
- Error handling for invalid inputs
Add input validation and smart defaults for optional parameters.
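Smart defaults for optional parameters can be sketched as a merge step: a spec maps each variable to its default (`None` marking it required), and blank user answers fall back to the default. `collect_inputs` and the spec format are illustrative assumptions, not a LangChain API.

```python
def collect_inputs(spec: dict, answers: dict) -> dict:
    """Merge user answers with defaults from spec.

    spec maps variable name -> default value; None means required.
    Empty or missing answers fall back to the default; missing
    required values raise with a list of what is still needed.
    """
    result = {}
    missing = []
    for name, default in spec.items():
        value = answers.get(name) or default
        if value is None:
            missing.append(name)
        else:
            result[name] = value
    if missing:
        raise ValueError(f"Required inputs missing: {missing}")
    return result
```

The returned dict can be passed straight to `template.format(**result)`.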
Objective: Master conversational prompt templates with multiple message types
Concepts: ChatPromptTemplate, Multi-role conversations, Context building
- Use ChatPromptTemplate.from_messages()
- Create templates with system, user, and AI message roles
- Include multiple variables across different message types
- Build a complex conversational context
```python
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an {role} specialized in {domain}..."),
    ("user", "Previous context or example..."),
    ("ai", "Example AI response with {variable}..."),
    ("user", "{main_user_input}"),
])
```

- Role-based conversation design
- Context building through multiple messages
- Variable distribution across message types
- Professional conversation flow
Create a template that maintains conversation context while allowing for dynamic topic changes.
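To see what `ChatPromptTemplate.from_messages(...)` does under the hood, the substitution step can be mimicked in plain Python: each `(role, template)` pair is formatted with the same variable dict. `format_chat_prompt` is a hypothetical helper for illustration, not part of LangChain.

```python
def format_chat_prompt(message_templates, **variables):
    """Substitute variables into each (role, template) pair.

    This mirrors the core idea behind
    ChatPromptTemplate.from_messages(...).format_messages(...):
    one set of variables is distributed across every message role.
    """
    return [(role, tmpl.format(**variables)) for role, tmpl in message_templates]
```

Seeing the expansion this way makes it clear why a variable can appear in the system, AI, and user messages simultaneously.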
Objective: Build a sophisticated expert system using advanced chat templates
Concepts: Expert system design, Domain specialization, Professional AI personas
- Create a comprehensive expert system template
- Include multiple conversation turns with context
- Use variables to customize expertise and domain
- Implement professional conversation management
- Expert role definition
- Domain specialization
- Conversation history
- Professional response patterns
- Dynamic expert selection
- Domain-specific knowledge activation
- Context-aware responses
- Professional interaction patterns
Add memory management to maintain conversation context across multiple expert consultations.
Objective: Combine all concepts into a complete interactive application
Concepts: Application architecture, User experience, Feature integration
- Create a menu-driven application interface
- Integrate completion models, chat models, and templates
- Provide multiple interaction modes
- Include proper error handling and user guidance
- Multiple AI interaction modes
- Template-based conversations
- Streaming and non-streaming options
- Professional user interface
- Comprehensive error handling
- Configuration management
- Model selection system
- Template library
- User interface layer
- Error handling system
Add conversation export functionality and session management.
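The export challenge can be sketched as a small JSON writer over the `(role, content)` history used throughout the exercises. The function name and file layout are illustrative assumptions.

```python
import json
from datetime import datetime, timezone

def export_conversation(messages, path):
    """Write chat history to a JSON file with a UTC timestamp.

    messages is a list of (role, content) tuples; the file can later be
    reloaded to resume or review a session.
    """
    session = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "messages": [{"role": r, "content": c} for r, c in messages],
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(session, f, indent=2)
    return session
```

JSON keeps the export human-readable and trivially reloadable with `json.load`.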
```
exercises-for-genai-devs/
├── README.md
├── requirements.txt
├── .env.example
├── .gitignore
├── config.py
├── exercises/
│   ├── exercise_01_setup/
│   │   ├── README.md
│   │   ├── starter.py
│   │   └── solution.py
│   ├── exercise_02_completion/
│   │   ├── README.md
│   │   ├── starter.py
│   │   └── solution.py
│   └── [other exercises...]
├── solutions/
│   ├── config_solution.py
│   ├── exercise_01_solution.py
│   └── [other solutions...]
└── resources/
    ├── templates/
    └── examples/
```
Master basic LangChain setup, model loading, and simple interactions.
Learn streaming, conversation loops, and user experience design.
Advanced prompt engineering and template-based AI interactions.
Build complete applications combining all learned concepts with professional Streamlit interfaces.
Each Streamlit application should include:
```python
# Standard imports and setup
import streamlit as st
import sys
import os

sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from config import load_google_chat_model, load_google_llm

# Page configuration
st.set_page_config(
    page_title="Exercise Name",
    page_icon="🤖",
    layout="wide",
    initial_sidebar_state="expanded"
)

# Session state initialization
if "messages" not in st.session_state:
    st.session_state.messages = []
```

Sidebar Configuration
```python
with st.sidebar:
    st.title("Configuration")
    model_choice = st.selectbox("Choose Model:", ["Google", "Groq"])
    temperature = st.slider("Temperature:", 0.0, 1.0, 0.7)
    max_tokens = st.number_input("Max Tokens:", 100, 4000, 2000)
```

Chat Interface
```python
# Display chat history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.write(message["content"])

# Chat input
if prompt := st.chat_input("Type your message..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    # Process and respond
```

Streaming Implementation
```python
# Streaming response in Streamlit
with st.chat_message("assistant"):
    message_placeholder = st.empty()
    full_response = ""
    for chunk in model.stream(prompt):
        full_response += chunk.content
        message_placeholder.markdown(full_response + "▌")
    message_placeholder.markdown(full_response)
```

- File Upload: `st.file_uploader()` for document processing
- Data Display: `st.dataframe()`, `st.json()` for structured data
- Metrics: `st.metric()` for performance indicators
- Progress: `st.progress()`, `st.spinner()` for loading states
- Layout: `st.columns()`, `st.expander()`, `st.tabs()` for organization
```python
# Always use environment variables for API keys
# Create reusable model loading functions
# Handle missing credentials gracefully
```

```python
# Check for API key availability
# Handle network timeouts
# Manage rate limiting
# Validate user inputs
```

```python
# Provide clear instructions
# Use loading indicators
# Format output professionally
# Handle edge cases gracefully
```

- Missing API Keys: Ensure .env file is properly configured
- Import Errors: Verify all required packages are installed
- Rate Limiting: Implement proper delays between requests
- Token Limits: Monitor and handle context window limitations
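For the rate-limiting item above, a simple way to "implement proper delays between requests" is retry with exponential backoff. `with_retries` is a hypothetical helper; in practice you would narrow `retry_on` to the provider's rate-limit exception class.

```python
import time

def with_retries(call, max_attempts=3, base_delay=1.0, retry_on=(Exception,)):
    """Retry a model call with exponential backoff.

    Sleeps base_delay, 2*base_delay, 4*base_delay, ... between attempts,
    which softens provider rate limits; re-raises on the final failure.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except retry_on:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Usage would look like `with_retries(lambda: chat_model.invoke(messages))`.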
- Check exercise README files for specific guidance
- Review solution files for implementation examples
- Test with simple examples before building complex features
We welcome contributions that align with our focused approach:
- Exercise Contributions: Add exercises using the established patterns
- Solution Improvements: Enhance existing solutions with better practices
- Documentation: Improve clarity and add helpful examples
- Bug Fixes: Report and fix issues in existing code
- Follow the established exercise format
- Include comprehensive documentation
- Test all code before submitting
- Focus on practical, hands-on learning
After completing these exercises, you'll have solid foundations in:
- LangChain configuration and setup
- Completion and chat model usage
- Prompt template design and implementation
- Professional AI application development
Consider exploring:
- Advanced LangChain features (agents, chains)
- Vector databases and RAG systems
- Production deployment patterns
- Integration with web frameworks
Start your journey with Exercise 1 and build your GenAI development skills step by step.
