diff --git a/.gitignore b/.gitignore
index 9a72755..30300db 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,2 +1,4 @@
 .env
-langchain_env/
\ No newline at end of file
+langchain_env/
+*__pycache__/
+config.py
\ No newline at end of file
diff --git a/app.py b/app.py
index 6101cff..2b984a6 100644
--- a/app.py
+++ b/app.py
@@ -1,52 +1,55 @@
-
 from config import environment_settings, connect_to_llm
-environment_settings()
-llm = connect_to_llm()
-results=llm.invoke("Tell, me who is the GOAT, iLya suskever or Andrej Karpathy?")
-print(llm)
-# print output
-print(results)
+if not environment_settings():
+    print("Exiting...")
+
+else:
+    llm = connect_to_llm()
+    results = llm.invoke("Tell me, who is the GOAT: Ilya Sutskever or Andrej Karpathy?")
+    print(llm)
+
+    # print output
+    print(results)
 
-# A PROMPT MUST ALWAYS HAVE THIS FORMAT: ICE FC=>
-# 1. intent
-# 2. Context
-# 3. Examples
-# 4. Format
-# 5. Constraints
-# 6. Polish
-# 7. Iaterate
-prompt="""
-INTENT:
-You are to behave as a world-class football analyst, widely regarded as the best in the world. Always provide factual, concise, and professional responses to the user’s queries.
+    # A PROMPT MUST ALWAYS HAVE THIS FORMAT: ICEFC =>
+    # 1. Intent
+    # 2. Context
+    # 3. Examples
+    # 4. Format
+    # 5. Constraints
+    # 6. Polish
+    # 7. Iterate
+    prompt = """
+    INTENT:
+    You are to behave as a world-class football analyst, widely regarded as the best in the world. Always provide factual, concise, and professional responses to the user’s queries.
 
-CONTEXT:
-You have 30 years of experience playing professional football and 12 years coaching at elite levels. You now work as a football analyst with a PhD in Applied Mathematics. You are meticulous, detail-oriented, and always rely on deep research from online sources, books, databases, APIs, and documents to support your answers.
+    CONTEXT:
+    You have 30 years of experience playing professional football and 12 years coaching at elite levels. You now work as a football analyst with a PhD in Applied Mathematics. You are meticulous, detail-oriented, and always rely on deep research from online sources, books, databases, APIs, and documents to support your answers.
 
-FORMAT:
-- Always start by introducing yourself in this style:
+    FORMAT:
+    - Always start by introducing yourself in this style:
 
-"Mr. Gita, I was born with football, Sir/Madam.
-I have played for clubs such as Arsenal, Chelsea, Barcelona, Real Madrid, PSG, and Manchester United.
-I have coached teams like Liverpool, Manchester City, and Tottenham.
-I have won the Ballon d’Or 5 times and the FIFA World Player of the Year 4 times.
-I have also won the UEFA Champions League 3 times and the English Premier League 4 times.
-I am considered one of the greatest football players of all time. How can I help you today?"
+    "Mr. Gita, I was born with football, Sir/Madam.
+    I have played for clubs such as Arsenal, Chelsea, Barcelona, Real Madrid, PSG, and Manchester United.
+    I have coached teams like Liverpool, Manchester City, and Tottenham.
+    I have won the Ballon d’Or 5 times and the FIFA World Player of the Year 4 times.
+    I have also won the UEFA Champions League 3 times and the English Premier League 4 times.
+    I am considered one of the greatest football players of all time. How can I help you today?"
 
-- After this introduction, answer the user’s query in **less than 100 words**. Keep it short, precise, and professional.
+    - After this introduction, answer the user’s query in **less than 100 words**. Keep it short, precise, and professional.
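A note on the prompt in `app.py`: the ICEFC checklist (Intent, Context, Examples, Format, Constraints, then Polish and Iterate) can be made reusable by assembling the sections programmatically. A minimal sketch (the `build_prompt` helper is illustrative, not something this diff adds):

```python
# Illustrative helper (not part of this repo): assemble an ICEFC-style prompt
# from named sections so no checklist item is skipped.
def build_prompt(intent: str, context: str, examples: str, fmt: str, constraints: str) -> str:
    sections = [
        ("INTENT", intent),
        ("CONTEXT", context),
        ("EXAMPLES", examples),
        ("FORMAT", fmt),
        ("CONSTRAINTS", constraints),
    ]
    return "\n\n".join(f"{name}:\n{body.strip()}" for name, body in sections)
```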
-CONSTRAINTS:
-- Always maintain a professional tone.
-- Always keep answers under 100 words.
-- Always provide factual responses.
+    CONSTRAINTS:
+    - Always maintain a professional tone.
+    - Always keep answers under 100 words.
+    - Always provide factual responses.
 
-EXAMPLES:
-Q: Who is the GOAT of football?
-A: "The GOAT of football is Lionel Messi."
+    EXAMPLES:
+    Q: Who is the GOAT of football?
+    A: "The GOAT of football is Lionel Messi."
 
-"""
+    """
 
-for part in llm.stream(prompt):
-    print(part, end="", flush=True)
+    for part in llm.stream(prompt):
+        print(part, end="", flush=True)
\ No newline at end of file
diff --git a/config/settings.py b/config/settings.py
index 9cd74a1..68f6037 100644
--- a/config/settings.py
+++ b/config/settings.py
@@ -1,30 +1,78 @@
+import os
+from dotenv import load_dotenv, find_dotenv
+from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAI
+from langchain_groq import ChatGroq
+
 def environment_settings():
-    import os
-    from dotenv import load_dotenv, find_dotenv
-    load_dotenv(find_dotenv())
-    # print the keys
-    GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
-    GROQ_API_KEY = os.getenv("GROQ_API_KEY")
-    print(f"Google API Key: {GOOGLE_API_KEY}")
-    print(f"Groq API Key: {GROQ_API_KEY}")
+
+    try:
+        # Load environment variables
+        if not load_dotenv(find_dotenv()):
+            raise EnvironmentError("No .env file found")
+
+        # Get API keys
+        GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
+        GROQ_API_KEY = os.getenv("GROQ_API_KEY")
+
+        # Validate API keys
+        missing_keys = []
+        if not GOOGLE_API_KEY:
+            missing_keys.append("GOOGLE_API_KEY")
+        if not GROQ_API_KEY:
+            missing_keys.append("GROQ_API_KEY")
+
+        if missing_keys:
+            raise ValueError(f"Missing required environment variables: {', '.join(missing_keys)}")
+
+        return True
+
+    except ValueError as ve:
+        print(f"Environment Error: {ve}")
+        print("Please check your .env file and ensure all required API keys are set.")
+        return False
+    except EnvironmentError as ee:
+        print(f"Environment Error: {ee}")
+        print("Please make sure the .env file exists in the project root directory.")
+        return False
+    except Exception as e:
+        print(f"Unexpected error: {e}")
+        return False
 
 def connect_to_llm():
-    from langchain_google_genai import GoogleGenerativeAI
+    environment_settings()
     llm=GoogleGenerativeAI(model="gemini-2.5-flash", temperature=0.7)
     return llm
 
 def connect_to_llm_chat():
-    from langchain_google_genai import ChatGoogleGenerativeAI
+    environment_settings()
     llm=ChatGoogleGenerativeAI(model="gemini-2.5-flash", temperature=0.7)
     return llm
 
 def connect_to_groq():
-    from langchain_groq import ChatGroq
+    environment_settings()
     llm=ChatGroq(model="deepseek-r1-distill-llama-70b", temperature=0.7)
     return llm
 
 def connect_to_db_url():
-    import os
-    from dotenv import load_dotenv, find_dotenv
     load_dotenv(find_dotenv())
     DATABASE_URL = os.getenv("DATABASE_URL")
     print(f"Database URL: {DATABASE_URL}")
-    return DATABASE_URL
\ No newline at end of file
+    return DATABASE_URL
+
+def select_model():
+    print("\nAvailable Models:")
+    print("1. Google LLM")
+    print("2. Google Chat Model")
+    print("3. Groq Chat Model")
+
+    while True:
+        try:
+            choice = int(input("\nSelect a model (1-3): "))
+            if choice == 1:
+                return connect_to_llm()
+            elif choice == 2:
+                return connect_to_llm_chat()
+            elif choice == 3:
+                return connect_to_groq()
+            else:
+                print("Invalid choice. Please select 1, 2, or 3.")
+        except ValueError:
+            print("Please enter a valid number.")
\ No newline at end of file
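`select_model()` walks an `if`/`elif` chain; a dictionary keyed by menu number is a common alternative that keeps the labels and the loaders in one place. A sketch, assuming the three connectors are importable from the `config` package as they are in `app.py` and `configs.py`:

```python
# Sketch (not in this diff): table-driven dispatch for the model menu.
from config import connect_to_llm, connect_to_llm_chat, connect_to_groq

MODEL_LOADERS = {
    1: ("Google LLM", connect_to_llm),
    2: ("Google Chat Model", connect_to_llm_chat),
    3: ("Groq Chat Model", connect_to_groq),
}

def select_model():
    for number, (label, _) in MODEL_LOADERS.items():
        print(f"{number}. {label}")
    while True:
        try:
            choice = int(input("\nSelect a model (1-3): "))
            return MODEL_LOADERS[choice][1]()  # look up and call the chosen loader
        except (ValueError, KeyError):
            print("Please enter 1, 2, or 3.")
```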
diff --git a/configs.py b/configs.py
new file mode 100644
index 0000000..fe24913
--- /dev/null
+++ b/configs.py
@@ -0,0 +1,36 @@
+# configs.py
+import os
+from config import connect_to_llm, connect_to_llm_chat, connect_to_groq
+
+def load_google_llm():
+    llm = connect_to_llm()
+    return llm
+
+def load_google_chat_model():
+    chat_model = connect_to_llm_chat()
+    return chat_model
+
+def load_groq_chat_model():
+    groq_model = connect_to_groq()
+    return groq_model
+
+def select_model():
+    print("\nAvailable Models:")
+    print("1. Google LLM")
+    print("2. Google Chat Model")
+    print("3. Groq Chat Model")
+
+    while True:
+        try:
+            choice = int(input("\nSelect a model (1-3): "))
+            if choice == 1:
+                return load_google_llm()
+            elif choice == 2:
+                return load_google_chat_model()
+            elif choice == 3:
+                return load_groq_chat_model()
+            else:
+                print("Invalid choice. Please select 1, 2, or 3.")
+        except ValueError:
+            print("Please enter a valid number.")
+
diff --git a/exercises/exercise_1/README.md b/exercises/exercise_1/README.md
new file mode 100644
index 0000000..fb332a1
--- /dev/null
+++ b/exercises/exercise_1/README.md
@@ -0,0 +1,6 @@
+# Exercise 1
+
+## Notes
+
+- The `config` folder and its contents were used to complete this task, not the `config.py` file.
+- The model selection function was implemented in the `settings.py` file.
diff --git a/exercises/exercise_2/README.md b/exercises/exercise_2/README.md
new file mode 100644
index 0000000..115b48a
--- /dev/null
+++ b/exercises/exercise_2/README.md
@@ -0,0 +1,31 @@
+# Exercise 2: Your First LLM Completion
+
+**Objective**: Use a completion model to generate text responses
+
+**Concepts**: LLM completion, Basic prompting, Streaming responses
+
+### Instructions
+
+1. Load a Google completion model using your config
+2. Create a simple prompt about a topic you're interested in
+3. Get a response using the `invoke()` method
+4. Implement streaming to see the response generated in real-time
+
+### Starter Code
+
+```python
+from configs import load_google_llm
+
+# TODO: Load the model
+# TODO: Create a prompt
+# TODO: Get response using invoke()
+# TODO: Implement streaming with stream()
+```
+
+### Expected Behavior
+- See an immediate response with invoke()
+- Watch text appear word-by-word with streaming
+- Handle the response content properly
+
+### Challenge
+Create a simple loop that accepts user prompts and responds using the completion model.
diff --git a/exercises/exercise_2/solution.py b/exercises/exercise_2/solution.py
new file mode 100644
index 0000000..335a809
--- /dev/null
+++ b/exercises/exercise_2/solution.py
@@ -0,0 +1,15 @@
+from configs import load_google_llm
+import time
+
+llm = load_google_llm()
+while True:
+    prompt = input("\nEnter your prompt: ")
+
+    # response = llm.invoke(prompt)
+    # print("Response from invoke:\n", response)
+
+    # time.sleep(4)
+
+    print("\nResponse from stream:")
+    for part in llm.stream(prompt):
+        print(part, end="", flush=True)
\ No newline at end of file
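The exercise 2 solution loops forever; for the README's challenge (a simple loop that accepts user prompts), an exit keyword is the natural addition. A sketch using the same `load_google_llm` loader:

```python
# Sketch of the exercise 2 challenge with a clean exit (assumes configs.py above).
from configs import load_google_llm

llm = load_google_llm()
while True:
    prompt = input("\nEnter your prompt (or 'quit' to stop): ")
    if prompt.strip().lower() in ("quit", "exit"):
        break
    for part in llm.stream(prompt):
        print(part, end="", flush=True)  # completion models stream plain strings
    print()  # newline after the streamed response
```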
diff --git a/exercises/exercise_3/README.md b/exercises/exercise_3/README.md
new file mode 100644
index 0000000..70ecbc0
--- /dev/null
+++ b/exercises/exercise_3/README.md
@@ -0,0 +1,34 @@
+# Exercise 3: Basic Chat Model Interaction
+
+**Objective**: Build conversational interactions using chat models
+
+**Concepts**: Chat models, Message formatting, System prompts, Conversation flow
+
+### Instructions
+
+1. Load a Google chat model from your config
+2. Create a message structure with system and user roles
+3. Use the chat model to respond to questions
+4. Experiment with different system prompts to change the AI's behavior
+
+### Starter Code
+
+```python
+from configs import load_google_chat_model
+
+# TODO: Load chat model
+# TODO: Create messages array with system and user messages
+# TODO: Get response using invoke()
+# TODO: Test different system prompts
+```
+
+### Expected Message Format
+```python
+messages = [
+    ("system", "You are a helpful assistant specialized in..."),
+    ("user", "Your question here")
+]
+```
+
+### Challenge
+Create different personas by changing the system message and test how the responses change.
diff --git a/exercises/exercise_3/solution.py b/exercises/exercise_3/solution.py
new file mode 100644
index 0000000..98edad0
--- /dev/null
+++ b/exercises/exercise_3/solution.py
@@ -0,0 +1,9 @@
+from configs import load_google_chat_model
+
+llm_chat = load_google_chat_model()
+messages = [
+    ("system", "You are a helpful assistant specialized in human anatomy"),
+    ("user", "Can digestive acid in my stomach be secreted even without food in it?")
+]
+response = llm_chat.invoke(messages)
+print(f"Assistant: {response.content}")
\ No newline at end of file
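For the exercise 3 challenge (different personas via the system message), the same question can be sent under several system prompts and the answers compared side by side. A sketch; the persona strings are only examples:

```python
# Sketch: swap the system message to see how the persona changes the answer.
from configs import load_google_chat_model

chat = load_google_chat_model()
question = "How does the stomach protect itself from its own acid?"
for persona in ("a strict anatomy professor", "a children's science presenter"):
    messages = [
        ("system", f"You are {persona}. Answer accurately and stay in character."),
        ("user", question),
    ]
    response = chat.invoke(messages)
    print(f"--- {persona} ---\n{response.content}\n")
```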
diff --git a/exercises/exercise_4/README.md b/exercises/exercise_4/README.md
new file mode 100644
index 0000000..1b82d27
--- /dev/null
+++ b/exercises/exercise_4/README.md
@@ -0,0 +1,29 @@
+# Exercise 4: Streaming Responses
+
+## Goal
+
+The goal of this exercise is to demonstrate how to stream responses from a language model. This is useful when you want to display the response to the user as it is being generated, rather than waiting for the entire response to be available.
+
+## Steps
+
+1. **Import necessary libraries**: Import the `connect_to_llm_chat` function from the `config` module.
+2. **Connect to the language model**: Use the `connect_to_llm_chat` function to get a chat model object.
+3. **Create a loop**: Create a `while True` loop to continuously get user input.
+4. **Use the `stream` method**: Inside the loop, call the `stream` method on the chat model object, passing the user's prompt as an argument.
+5. **Print the response**: The `stream` method returns a generator. Iterate over the generator and print the `content` of each part of the response. Use `end=""` and `flush=True` in the `print` function so the response is printed as it is received.
+
+## Solution
+
+```python
+from config import connect_to_llm_chat
+
+chat_model = connect_to_llm_chat()
+
+while True:
+    prompt = input("User prompt: ")
+    # Streaming pattern
+    print("\nBot:")
+    for part in chat_model.stream(prompt):
+        print(part.content, end="", flush=True)
+```
+
diff --git a/exercises/exercise_4/solution.py b/exercises/exercise_4/solution.py
new file mode 100644
index 0000000..2484d4f
--- /dev/null
+++ b/exercises/exercise_4/solution.py
@@ -0,0 +1,10 @@
+from configs import load_google_chat_model
+
+chat_model = load_google_chat_model()
+
+while True:
+    prompt = input("User prompt: ")
+    # Streaming pattern
+    print("\nBot:")
+    for part in chat_model.stream(prompt):
+        print(part.content, end="", flush=True)
\ No newline at end of file
diff --git a/exercises/exercise_5/README.md b/exercises/exercise_5/README.md
new file mode 100644
index 0000000..8d1c8b6
--- /dev/null
+++ b/exercises/exercise_5/README.md
@@ -0,0 +1 @@
+
diff --git a/exercises/exercise_5/solution.py b/exercises/exercise_5/solution.py
new file mode 100644
index 0000000..ef7752b
--- /dev/null
+++ b/exercises/exercise_5/solution.py
@@ -0,0 +1,14 @@
+from configs import load_google_chat_model
+
+chat = load_google_chat_model()
+
+print("THE AI ASSISTANT WELCOMES YOU")
+print("-" * 30, "\nBegin by entering a prompt")
+
+while True:
+    prompt = input("USER: ")
+    if prompt.lower() in ("quit", "exit"):
+        print("Goodbye...")
+        break
+    for part in chat.stream(prompt):
+        print(part.content, end="", flush=True)
\ No newline at end of file
diff --git a/exercises/exercise_6/README.md b/exercises/exercise_6/README.md
new file mode 100644
index 0000000..8d1c8b6
--- /dev/null
+++ b/exercises/exercise_6/README.md
@@ -0,0 +1 @@
+
diff --git a/exercises/exercise_6/solution.py b/exercises/exercise_6/solution.py
new file mode 100644
index 0000000..8d1c8b6
--- /dev/null
+++ b/exercises/exercise_6/solution.py
@@ -0,0 +1 @@
+
diff --git a/requirement.txt b/requirement.txt
index 3fc95c7..28a3b37 100644
--- a/requirement.txt
+++ b/requirement.txt
@@ -3,3 +3,4 @@ langchain_google_genai
 python_dotenv
 langchain_openai
 langchain_groq
+streamlit
\ No newline at end of file