Changes from all commits (23 commits)
- 04c6ead: Update .gitignore to include __pycache__ and Python bytecode files (abimmost, Sep 5, 2025)
- 0981f32: Update .gitignore to include __pycache__ and Python bytecode files (abimmost, Sep 5, 2025)
- 092bd32: Refactor environment_settings function to improve error handling and … (abimmost, Sep 5, 2025)
- 98de4cf: Update .gitignore to include additional Python bytecode files (abimmost, Sep 5, 2025)
- 14cbaee: Add README.md files for exercises 1 to 6 (abimmost, Sep 5, 2025)
- 3895818: Add empty solution.py files for exercises 1 to 6 (abimmost, Sep 5, 2025)
- 756e66e: Remove empty solution.py file for exercise 1 (abimmost, Sep 5, 2025)
- 5baec74: Refactor imports in settings.py for better organization and remove re… (abimmost, Sep 5, 2025)
- a83362a: Add solution implementation for exercise 2 with LLM invocation and st… (abimmost, Sep 5, 2025)
- 321c559: Add detailed notes to README.md for exercise 1 (abimmost, Sep 5, 2025)
- cf16f81: Implement streaming response functionality in solution.py for exercise 2 (abimmost, Sep 5, 2025)
- 6fd3b2b: Add detailed instructions and starter code for Exercise 2 in README.md (abimmost, Sep 5, 2025)
- c95414b: Add comprehensive instructions and starter code for Exercise 3 in REA… (abimmost, Sep 5, 2025)
- 3efe3e1: Implement solution for Exercise 3 with LLM chat invocation and respon… (abimmost, Sep 5, 2025)
- 811753a: Refactor app.py to improve structure and readability by adding enviro… (abimmost, Sep 5, 2025)
- 3e09350: Add pprint to requirements for improved data formatting (abimmost, Sep 13, 2025)
- c3f70dc: Implement chat functionality in solution.py for Exercise 5 (abimmost, Sep 13, 2025)
- 2ccc53b: Add streamlit to requirements for enhanced UI capabilities (abimmost, Sep 13, 2025)
- 95b9f2f: Add configs.py for model loading and selection functionality (abimmost, Sep 13, 2025)
- 4c40401: Add README.md for Exercise 4: Streaming Responses with detailed steps… (abimmost, Sep 13, 2025)
- 2a83631: Implement chat model loading and streaming responses in solution.py (abimmost, Sep 13, 2025)
- b641ae7: Refactor LLM connection to use load_google_llm function in solution.py (abimmost, Sep 13, 2025)
- b9cb59f: Refactor LLM connection to use load_google_chat_model function in sol… (abimmost, Sep 13, 2025)
4 changes: 3 additions & 1 deletion .gitignore
@@ -1,2 +1,4 @@
 .env
-langchain_env/
+langchain_env/
+*__pycache__/
+config.py
81 changes: 42 additions & 39 deletions app.py
@@ -1,52 +1,55 @@
 
 from config import environment_settings, connect_to_llm
 
-environment_settings()
-llm = connect_to_llm()
-results=llm.invoke("Tell me, who is the GOAT: Ilya Sutskever or Andrej Karpathy?")
-print(llm)
-
-# print output
-print(results)
+if not environment_settings():
+    print("Exiting...")
+
+else:
+    llm = connect_to_llm()
+    results=llm.invoke("Tell me, who is the GOAT: Ilya Sutskever or Andrej Karpathy?")
+    print(llm)
+
+    # print output
+    print(results)
 
-# A PROMPT MUST ALWAYS HAVE THIS FORMAT: ICE FC=>
-# 1. Intent
-# 2. Context
-# 3. Examples
-# 4. Format
-# 5. Constraints
-# 6. Polish
-# 7. Iterate
-prompt="""
-INTENT:
-You are to behave as a world-class football analyst, widely regarded as the best in the world. Always provide factual, concise, and professional responses to the user’s queries.
+    # A PROMPT MUST ALWAYS HAVE THIS FORMAT: ICE FC=>
+    # 1. Intent
+    # 2. Context
+    # 3. Examples
+    # 4. Format
+    # 5. Constraints
+    # 6. Polish
+    # 7. Iterate
+    prompt="""
+    INTENT:
+    You are to behave as a world-class football analyst, widely regarded as the best in the world. Always provide factual, concise, and professional responses to the user’s queries.
 
-CONTEXT:
-You have 30 years of experience playing professional football and 12 years coaching at elite levels. You now work as a football analyst with a PhD in Applied Mathematics. You are meticulous, detail-oriented, and always rely on deep research from online sources, books, databases, APIs, and documents to support your answers.
+    CONTEXT:
+    You have 30 years of experience playing professional football and 12 years coaching at elite levels. You now work as a football analyst with a PhD in Applied Mathematics. You are meticulous, detail-oriented, and always rely on deep research from online sources, books, databases, APIs, and documents to support your answers.
 
-FORMAT:
-- Always start by introducing yourself in this style:
+    FORMAT:
+    - Always start by introducing yourself in this style:
 
-"Mr. Gita, I was born with football, Sir/Madam.
-I have played for clubs such as Arsenal, Chelsea, Barcelona, Real Madrid, PSG, and Manchester United.
-I have coached teams like Liverpool, Manchester City, and Tottenham.
-I have won the Ballon d’Or 5 times and the FIFA World Player of the Year 4 times.
-I have also won the UEFA Champions League 3 times and the English Premier League 4 times.
-I am considered one of the greatest football players of all time. How can I help you today?"
+    "Mr. Gita, I was born with football, Sir/Madam.
+    I have played for clubs such as Arsenal, Chelsea, Barcelona, Real Madrid, PSG, and Manchester United.
+    I have coached teams like Liverpool, Manchester City, and Tottenham.
+    I have won the Ballon d’Or 5 times and the FIFA World Player of the Year 4 times.
+    I have also won the UEFA Champions League 3 times and the English Premier League 4 times.
+    I am considered one of the greatest football players of all time. How can I help you today?"
 
-- After this introduction, answer the user’s query in **less than 100 words**. Keep it short, precise, and professional.
+    - After this introduction, answer the user’s query in **less than 100 words**. Keep it short, precise, and professional.
 
-CONSTRAINTS:
-- Always maintain a professional tone.
-- Always keep answers under 100 words.
-- Always provide factual responses.
+    CONSTRAINTS:
+    - Always maintain a professional tone.
+    - Always keep answers under 100 words.
+    - Always provide factual responses.
 
-EXAMPLES:
-Q: Who is the GOAT of football?
-A: "The GOAT of football is Lionel Messi."
+    EXAMPLES:
+    Q: Who is the GOAT of football?
+    A: "The GOAT of football is Lionel Messi."
 
 
-"""
+    """
 
-for part in llm.stream(prompt):
-    print(part, end="", flush=True)
+    for part in llm.stream(prompt):
+        print(part, end="", flush=True)
76 changes: 62 additions & 14 deletions config/settings.py
@@ -1,30 +1,78 @@
+import os
+from dotenv import load_dotenv, find_dotenv
+from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAI
+from langchain_groq import ChatGroq
+
 def environment_settings():
-    import os
-    from dotenv import load_dotenv, find_dotenv
-    load_dotenv(find_dotenv())
-    # print the keys
-    GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
-    GROQ_API_KEY = os.getenv("GROQ_API_KEY")
-    print(f"Google API Key: {GOOGLE_API_KEY}")
-    print(f"Groq API Key: {GROQ_API_KEY}")
-
+    try:
+        # Load environment variables
+        if not load_dotenv(find_dotenv()):
+            raise EnvironmentError("No .env file found")
+
+        # Get API keys
+        GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
+        GROQ_API_KEY = os.getenv("GROQ_API_KEY")
+
+        # Validate API keys
+        missing_keys = []
+        if not GOOGLE_API_KEY:
+            missing_keys.append("GOOGLE_API_KEY")
+        if not GROQ_API_KEY:
+            missing_keys.append("GROQ_API_KEY")
+
+        if missing_keys:
+            raise ValueError(f"Missing required environment variables: {', '.join(missing_keys)}")
+
+        return True
+
+    except ValueError as ve:
+        print(f"Environment Error: {ve}")
+        print("Please check your .env file and ensure all required API keys are set.")
+        return False
+    except EnvironmentError as ee:
+        print(f"Environment Error: {ee}")
+        print("Please make sure the .env file exists in the project root directory.")
+        return False
+    except Exception as e:
+        print(f"Unexpected error: {e}")
+        return False
 
 def connect_to_llm():
-    from langchain_google_genai import GoogleGenerativeAI
     environment_settings()
     llm=GoogleGenerativeAI(model="gemini-2.5-flash", temperature=0.7)
     return llm
 def connect_to_llm_chat():
-    from langchain_google_genai import ChatGoogleGenerativeAI
     environment_settings()
     llm=ChatGoogleGenerativeAI(model="gemini-2.5-flash", temperature=0.7)
     return llm
 def connect_to_groq():
-    from langchain_groq import ChatGroq
     environment_settings()
     llm=ChatGroq(model="deepseek-r1-distill-llama-70b", temperature=0.7)
     return llm
 
 def connect_to_db_url():
-    import os
-    from dotenv import load_dotenv, find_dotenv
     load_dotenv(find_dotenv())
     DATABASE_URL = os.getenv("DATABASE_URL")
     print(f"Database URL: {DATABASE_URL}")
-    return DATABASE_URL
+    return DATABASE_URL
+
+def select_model():
+    print("\nAvailable Models:")
+    print("1. Google LLM")
+    print("2. Google Chat Model")
+    print("3. Groq Chat Model")
+
+    while True:
+        try:
+            choice = int(input("\nSelect a model (1-3): "))
+            if choice == 1:
+                return connect_to_llm()
+            elif choice == 2:
+                return connect_to_llm_chat()
+            elif choice == 3:
+                return connect_to_groq()
+            else:
+                print("Invalid choice. Please select 1, 2, or 3.")
+        except ValueError:
+            print("Please enter a valid number.")
36 changes: 36 additions & 0 deletions configs.py
@@ -0,0 +1,36 @@
# configs.py
import os
from config import connect_to_llm, connect_to_llm_chat, connect_to_groq

def load_google_llm():
    llm = connect_to_llm()
    return llm

def load_google_chat_model():
    chat_model = connect_to_llm_chat()
    return chat_model

def load_groq_chat_model():
    groq_model = connect_to_groq()
    return groq_model

def select_model():
    print("\nAvailable Models:")
    print("1. Google LLM")
    print("2. Google Chat Model")
    print("3. Groq Chat Model")

    while True:
        try:
            choice = int(input("\nSelect a model (1-3): "))
            if choice == 1:
                return load_google_llm()
            elif choice == 2:
                return load_google_chat_model()
            elif choice == 3:
                return load_groq_chat_model()
            else:
                print("Invalid choice. Please select 1, 2, or 3.")
        except ValueError:
            print("Please enter a valid number.")
6 changes: 6 additions & 0 deletions exercises/exercise_1/README.md
@@ -0,0 +1,6 @@
# Exercise 1

## Notes

- The `config` folder and its contents were used to complete this task, not the `config.py` file.
- The model selection function was implemented in the `settings.py` file.
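As a quick orientation, here is a hypothetical wiring sketch based on these notes. It assumes `environment_settings` is re-exported by the `config` package (as `app.py` imports it) and that `select_model` is importable from `configs.py`; both are assumptions, not code from this PR.

```python
# Hypothetical usage sketch; module layout is assumed from the notes above.
from config import environment_settings   # validates the .env file and API keys
from configs import select_model          # interactive model picker (choices 1-3)

if environment_settings():
    llm = select_model()  # returns whichever model the user picked
    print(llm.invoke("Say hello in one sentence."))
```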
31 changes: 31 additions & 0 deletions exercises/exercise_2/README.md
@@ -0,0 +1,31 @@
# Exercise 2: Your First LLM Completion

**Objective**: Use a completion model to generate text responses

**Concepts**: LLM completion, Basic prompting, Streaming responses

### Instructions

1. Load a Google completion model using your config
2. Create a simple prompt about a topic you're interested in
3. Get a response using the `invoke()` method
4. Implement streaming to see the response generated in real-time

### Starter Code

```python
from configs import load_google_llm

# TODO: Load the model
# TODO: Create a prompt
# TODO: Get response using invoke()
# TODO: Implement streaming with stream()
```

### Expected Behavior
- See immediate response with invoke()
- Watch text appear word-by-word with streaming
- Handle the response content properly

### Challenge
Create a simple loop that accepts user prompts and responds using the completion model.
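One possible sketch of this challenge, assuming the `load_google_llm` helper defined later in `configs.py`; for a completion model, `invoke()` returns a plain string and `stream()` yields string chunks:

```python
from configs import load_google_llm

llm = load_google_llm()
while True:
    prompt = input("\nEnter a prompt (or 'quit' to stop): ")
    if prompt.lower() == "quit":
        break
    # invoke(): the whole completion arrives at once
    print("invoke():", llm.invoke(prompt))
    # stream(): chunks are printed as they are generated
    print("stream(): ", end="")
    for chunk in llm.stream(prompt):
        print(chunk, end="", flush=True)
    print()
```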
15 changes: 15 additions & 0 deletions exercises/exercise_2/solution.py
@@ -0,0 +1,15 @@
from configs import load_google_llm
import time

llm = load_google_llm()
while True:
    prompt = input("\nEnter your prompt: ")

    # invoke() returns the whole completion at once:
    # response = llm.invoke(prompt)
    # print("Response from invoke:\n", response)

    # time.sleep(4)

    print("\nResponse from stream:")
    for part in llm.stream(prompt):
        print(part, end="", flush=True)
34 changes: 34 additions & 0 deletions exercises/exercise_3/README.md
@@ -0,0 +1,34 @@
# Exercise 3: Basic Chat Model Interaction

**Objective**: Build conversational interactions using chat models

**Concepts**: Chat models, Message formatting, System prompts, Conversation flow

### Instructions

1. Load a Google chat model from your config
2. Create a message structure with system and user roles
3. Use the chat model to respond to questions
4. Experiment with different system prompts to change AI behavior

### Starter Code

```python
from configs import load_google_chat_model

# TODO: Load chat model
# TODO: Create messages array with system and user messages
# TODO: Get response using invoke()
# TODO: Test different system prompts
```

### Expected Message Format
```python
messages = [
    ("system", "You are a helpful assistant specialized in..."),
    ("user", "Your question here")
]
```

### Challenge
Create different personas by changing the system message and test how responses change.
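A minimal sketch of this challenge, assuming the same `load_google_chat_model` helper the solution below uses; a chat model returns a message object, so the reply text lives on `.content`:

```python
from configs import load_google_chat_model

chat = load_google_chat_model()
question = "How should I warm up before a run?"

# Two hypothetical personas; only the system message changes.
personas = {
    "physiotherapist": "You are a licensed physiotherapist. Be cautious and practical.",
    "drill sergeant": "You are a drill sergeant. Be terse and demanding.",
}

for name, system_prompt in personas.items():
    response = chat.invoke([("system", system_prompt), ("user", question)])
    print(f"\n--- {name} ---\n{response.content}")
```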
9 changes: 9 additions & 0 deletions exercises/exercise_3/solution.py
@@ -0,0 +1,9 @@
from configs import load_google_chat_model

llm_chat = load_google_chat_model()
messages = [
    ("system", "You are a helpful assistant specialized in human anatomy"),
    ("user", "Can digestive acid in my stomach be secreted even without food in it?")
]
response = llm_chat.invoke(messages)
print(f"Assistant: {response.content}")  # .content holds the text of the AIMessage
29 changes: 29 additions & 0 deletions exercises/exercise_4/README.md
@@ -0,0 +1,29 @@
# Exercise 4: Streaming Responses

## Goal

The goal of this exercise is to demonstrate how to stream responses from a language model. This is useful when you want to display the response to the user as it is being generated, rather than waiting for the entire response to be available.

## Steps

1. **Import necessary libraries**: You will need to import the `connect_to_llm_chat` function from the `config` module.
2. **Connect to the language model**: Use the `connect_to_llm_chat` function to get a chat model object.
3. **Create a loop**: Create a `while True` loop to continuously get user input.
4. **Use the `stream` method**: Inside the loop, call the `stream` method on the chat model object, passing the user's prompt as an argument.
5. **Print the response**: The `stream` method returns a generator. Iterate over the generator and print the `content` of each part of the response. Use `end=""` and `flush=True` in the `print` function to ensure the response is printed as it is received.

## Solution

```python
from config import connect_to_llm_chat

chat_model = connect_to_llm_chat()

while True:
    prompt = input("User prompt: ")
    # Streaming pattern
    print("\nBot:")
    for part in chat_model.stream(prompt):
        print(part.content, end="", flush=True)
```
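A possible extension, not part of the original exercise: accumulate the chunks while streaming so the complete reply is still available afterwards, e.g. for logging or conversation history. A sketch under the same `connect_to_llm_chat` assumption:

```python
from config import connect_to_llm_chat

chat_model = connect_to_llm_chat()
prompt = input("User prompt: ")

full_reply = ""
print("\nBot:")
for part in chat_model.stream(prompt):
    print(part.content, end="", flush=True)
    full_reply += part.content  # keep the full text for later use
print(f"\n\n({len(full_reply)} characters streamed)")
```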

10 changes: 10 additions & 0 deletions exercises/exercise_4/solution.py
@@ -0,0 +1,10 @@
from configs import load_google_chat_model

chat_model = load_google_chat_model()

while True:
    prompt = input("User prompt: ")
    # Streaming pattern
    print("\nBot:")
    for part in chat_model.stream(prompt):
        print(part.content, end="", flush=True)
1 change: 1 addition & 0 deletions exercises/exercise_5/README.md
@@ -0,0 +1 @@

14 changes: 14 additions & 0 deletions exercises/exercise_5/solution.py
@@ -0,0 +1,14 @@
from configs import load_google_chat_model

chat = load_google_chat_model()  # call the loader; without the (), chat would be the function itself

print("THE AI ASSISTANT WELCOMES YOU")
print("-"*30,"\n Begin by entering a prompt")

while True:
    prompt = input("USER: ")
    if prompt.lower() in ("quit", "exit"):  # `== "quit" or "exit"` is always truthy, so the old check exited immediately
        print("Goodbye...")
        break
    for part in chat.stream(prompt):
        print(part.content, end="", flush=True)
    print()
1 change: 1 addition & 0 deletions exercises/exercise_6/README.md
@@ -0,0 +1 @@
