diff --git a/README.md b/README.md
index 82b7927f..a36d61e2 100644
--- a/README.md
+++ b/README.md
@@ -31,6 +31,8 @@ Powered by [Nebius AI Studio](https://dub.sh/nebius) - your one-stop platform fo
- [LangChain-LangGraph Starter](starter_ai_agents/langchain_langgraph_starter) - LangChain + LangGraph starter.
- [AWS Strands Agent Starter](starter_ai_agents/aws_strands_starter) - Weather report Agent.
- [Camel AI Starter](starter_ai_agents/camel_ai_starter) - Performance benchmarking tool that compares the performance of various AI models.
+- [Career Guidance](simple_ai_agents/Career_Guidence) - Provides career guidance and a roadmap based on user-selected options; also supports traditional RAG.
+- [Recruitify](simple_ai_agents/Recruitify) - A web app that analyzes a resume against a job description, generates interview questions, suggests resume enhancements, and lets you practice interviews with the generated questions.
## 🪶 Simple Agents
@@ -47,6 +49,7 @@ Powered by [Nebius AI Studio](https://dub.sh/nebius) - your one-stop platform fo
- [Nebius Chat](simple_ai_agents/nebius_chat) - Nebius AI Studio Chat interface.
- [Talk to Your DB](simple_ai_agents/talk_to_db) - Talk to your Database with GibsonAI & Langchain
+
## 🗂️ MCP Agents
**Examples using Model Context Protocol:**
@@ -96,6 +99,8 @@ Powered by [Nebius AI Studio](https://dub.sh/nebius) - your one-stop platform fo
- [Price Monitoring Agent](advance_ai_agents/price_monitoring_agent) - Price monitoring and alerting Agent powered by CrewAi, Twilio & Nebius.
- [Startup Idea Validator Agent](advance_ai_agents/startup_idea_validator_agent) - Agentic Workflow to validate and analyze startup ideas.
- [Meeting Assistant Agent](advance_ai_agents/meeting_assistant_agent) - Agentic Workflow that send meeting notes and creates task based on conversation.
+- [AlgoMentor](advance_ai_agents/AlgoMentor) - Agentic Workflow that provides brute-force, sub-optimal, and optimal approaches and code for a problem, and also supports note-making.
+- [ScholarLens](advance_ai_agents/ScholarLens) - Agentic Workflow to find research papers, summarize them, and run agentic RAG with Pinecone and Agno, acting as a research-paper companion.
## 📺 Playlist of Demo Videos & Tutorials
diff --git a/advance_ai_agents/AlgoMentor/.env.example b/advance_ai_agents/AlgoMentor/.env.example
new file mode 100644
index 00000000..4824601e
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/.env.example
@@ -0,0 +1,7 @@
+# API Keys
+GROQ_API_KEY=your_groq_api_key_here
+GOOGLE_API_KEY=your_google_api_key_here
+
+# Application Settings
+DEBUG=False
+LOG_LEVEL=INFO
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/.gitignore b/advance_ai_agents/AlgoMentor/.gitignore
new file mode 100644
index 00000000..c5f88213
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/.gitignore
@@ -0,0 +1,80 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+*.manifest
+*.spec
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+.python-version
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# VS Code
+.vscode/
+
+# Streamlit
+.streamlit/
+secrets.toml
+
+
+# Temporary files
+*.tmp
+*.temp
+*.swp
+*.swo
+
+# Logs
+*.log
+logs/
+
+# Database
+*.db
+*.sqlite
+*.sqlite3
+
+# Cache
+.cache/
+*.cache
diff --git a/advance_ai_agents/AlgoMentor/README.md b/advance_ai_agents/AlgoMentor/README.md
new file mode 100644
index 00000000..c34670a9
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/README.md
@@ -0,0 +1,89 @@
+# DSA Assistant 🚀
+
+Your AI-powered coding mentor for problem-solving & algorithm optimization
+
+## ✨ Features
+
+- **🎯 Basic Approach**: Generate brute-force solutions with clear explanations
+- **⚡ Sub-Optimal Solutions**: Improve efficiency step by step
+- **🚀 Optimal Solutions**: Find the most efficient algorithms
+- **🔍 Code Verification**: Test and validate solutions automatically
+- **📝 Notes Generation**: Transform code into comprehensive study notes
+- **🎨 Interactive UI**: Beautiful Streamlit interface with progress tracking
+
+## 🚀 Quick Start
+
+### Installation
+
+```bash
+# Clone the repository
+git clone https://github.com/ankush0511/AlgoMentor.git
+cd AlgoMentor
+
+# Install dependencies
+pip install -r requirements.txt
+
+# Set up environment variables
+cp .env.example .env
+# Add your API keys to .env file
+```
+
+### Environment Setup
+
+Create a `.env` file with:
+```
+GROQ_API_KEY=your_groq_api_key
+GOOGLE_API_KEY=your_google_api_key
+```
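+
+The agents read these keys from Streamlit secrets first and fall back to the `.env` file, so on Streamlit Cloud you can instead put the same keys in `.streamlit/secrets.toml`; a minimal sketch:
+
+```toml
+GROQ_API_KEY = "your_groq_api_key"
+GOOGLE_API_KEY = "your_google_api_key"
+```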
+
+### Run the Application
+
+```bash
+streamlit run main.py
+```
+
+## 📁 Project Structure
+
+```
+AlgoMentor/
+├── 📁 src/                   # Source code
+│   ├── 📁 agents/            # AI agents for different optimization levels
+│   ├── 📁 models/            # Pydantic data models
+│   ├── 📁 utils/             # Utility functions
+│   ├── 📁 ui/                # User interface components
+│   └── 📁 core/              # Core application logic
+├── 📁 config/                # Configuration files
+├── 📁 tests/                 # Test files
+├── 📁 docs/                  # Documentation
+├── 📁 assets/                # Static assets
+├── 📄 main.py                # Application entry point
+├── 📄 requirements.txt       # Python dependencies
+├── 📄 .env.example           # Environment variables template
+└── 📄 README.md              # Project documentation
+```
+
+## 🛠️ Usage
+
+1. **Enter Problem**: Paste your DSA problem or use example problems
+2. **Basic Approach**: Get brute-force solution with explanation
+3. **Sub-Optimal**: Improve the solution step by step
+4. **Optimal**: Achieve the most efficient algorithm
+5. **Notes**: Generate comprehensive study notes from your code
+
+## 🤝 Contributing
+
+1. Fork the repository
+2. Create a feature branch
+3. Make your changes
+4. Add tests if applicable
+5. Submit a pull request
+
+## 📄 License
+
+This project is licensed under the MIT License.
+
+## 🙏 Acknowledgments
+
+- Built with [Streamlit](https://streamlit.io/)
+- Powered by [Groq](https://groq.com/) and [Google AI](https://ai.google/)
+- Uses [Agno](https://github.com/agno-agi/agno) for AI agent orchestration
diff --git a/advance_ai_agents/AlgoMentor/assets/logo.svg b/advance_ai_agents/AlgoMentor/assets/logo.svg
new file mode 100644
index 00000000..ef38ba41
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/assets/logo.svg
@@ -0,0 +1,48 @@
+
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/config/__init__.py b/advance_ai_agents/AlgoMentor/config/__init__.py
new file mode 100644
index 00000000..f05f7f90
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/config/__init__.py
@@ -0,0 +1 @@
+"""Configuration settings"""
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/config/settings.py b/advance_ai_agents/AlgoMentor/config/settings.py
new file mode 100644
index 00000000..3da9818b
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/config/settings.py
@@ -0,0 +1,24 @@
+import streamlit as st
+import os
+from dotenv import load_dotenv
+
+load_dotenv()
+
+# API Configuration: prefer Streamlit secrets, fall back to .env variables
+try:
+    groq_api_key = st.secrets["GROQ_API_KEY"]
+    google_api_key = st.secrets["GOOGLE_API_KEY"]
+except Exception:  # no secrets file or key not present
+    groq_api_key = os.getenv("GROQ_API_KEY")
+    google_api_key = os.getenv("GOOGLE_API_KEY")
+
+# Model Configuration
+GROQ_MODEL = "llama-3.3-70b-versatile"
+GEMINI_MODEL = "gemini-2.0-flash"
+
+# UI Configuration
+PAGE_TITLE = "DSA Assistant"
+PAGE_LAYOUT = "wide"
+SIDEBAR_STATE = "expanded"
+
+# Application Settings
+DEBUG = os.getenv("DEBUG", "False").lower() == "true"
+LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/docs/examples/4Sum.md b/advance_ai_agents/AlgoMentor/docs/examples/4Sum.md
new file mode 100644
index 00000000..c1a1ec70
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/docs/examples/4Sum.md
@@ -0,0 +1,135 @@
+### Problem Statement
+
+Given an array of integers `nums` and an integer `target`, find all unique quadruplets `[nums[i], nums[j], nums[left], nums[right]]` such that `nums[i] + nums[j] + nums[left] + nums[right] == target`. The solution should not contain duplicate quadruplets.
+
+### Approach Summary
+
+The algorithm sorts the input array `nums` and then iterates through all possible pairs of the first two elements of the quadruplet. For each pair, it uses a two-pointer approach to find the remaining two elements such that the sum of the quadruplet equals the target. Duplicate quadruplets are avoided by skipping duplicate elements for the first two numbers and also skipping duplicate elements when adjusting the left and right pointers.
+
+### Detailed Approach
+
+1. **Sort the array:** Sort the input array `nums` in ascending order. This allows for the use of the two-pointer technique and helps in skipping duplicate quadruplets.
+2. **Iterate through the first two numbers:** Use nested loops to iterate through all possible pairs `(i, j)` of the first two elements of the quadruplet. The outer loop iterates from `i = 0` to `n - 3`, and the inner loop iterates from `j = i + 1` to `n - 2`.
+3. **Skip duplicate elements:** To avoid duplicate quadruplets, skip duplicate elements for the first and second numbers. If `i > 0` and `nums[i] == nums[i - 1]`, continue to the next iteration of the outer loop. Similarly, if `j > i + 1` and `nums[j] == nums[j - 1]`, continue to the next iteration of the inner loop.
+4. **Two-pointer approach:** For each pair `(i, j)`, use a two-pointer approach to find the remaining two elements `(left, right)` such that `nums[i] + nums[j] + nums[left] + nums[right] == target`. Initialize `left = j + 1` and `right = n - 1`.
+5. **Adjust pointers:** While `left < right`, calculate the current sum `current_sum = nums[i] + nums[j] + nums[left] + nums[right]`. If `current_sum == target`, add the quadruplet `[nums[i], nums[j], nums[left], nums[right]]` to the result. Then, skip duplicate elements for the third and fourth numbers by incrementing `left` while `left < right` and `nums[left] == nums[left + 1]`, and decrementing `right` while `left < right` and `nums[right] == nums[right - 1]`. Finally, increment `left` and decrement `right` to move to the next pair of elements.
+ * If `current_sum < target`, increment `left` to increase the sum.
+ * If `current_sum > target`, decrement `right` to decrease the sum.
+6. **Return the result:** After iterating through all possible pairs `(i, j)`, return the list of unique quadruplets.
+
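+The "Code Walkthrough" section below refers to the implementation line by line, but the code itself is not included in this note; a reconstruction consistent with the approach above (the function name `four_sum` is an assumption) would be:
+
+```python
+def four_sum(nums, target):
+    """Return all unique quadruplets in nums that sum to target."""
+    n = len(nums)
+    result = []
+    nums.sort()  # sorting enables the two-pointer scan and duplicate skipping
+    for i in range(n - 3):
+        if i > 0 and nums[i] == nums[i - 1]:
+            continue  # skip duplicate first elements
+        for j in range(i + 1, n - 2):
+            if j > i + 1 and nums[j] == nums[j - 1]:
+                continue  # skip duplicate second elements
+            left, right = j + 1, n - 1
+            while left < right:
+                current_sum = nums[i] + nums[j] + nums[left] + nums[right]
+                if current_sum == target:
+                    result.append([nums[i], nums[j], nums[left], nums[right]])
+                    while left < right and nums[left] == nums[left + 1]:
+                        left += 1  # skip duplicate third elements
+                    while left < right and nums[right] == nums[right - 1]:
+                        right -= 1  # skip duplicate fourth elements
+                    left += 1
+                    right -= 1
+                elif current_sum < target:
+                    left += 1
+                else:
+                    right -= 1
+    return result
+
+print(four_sum([1, 0, -1, 0, -2, 2], 0))
+# [[-2, -1, 1, 2], [-2, 0, 0, 2], [-1, 0, 0, 1]]
+```
+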
+### Time Complexity
+
+O(n^3), where n is the length of the input array `nums`. The algorithm has three nested loops: the outer loop iterates n-3 times, the inner loop iterates n-2 times, and the two-pointer approach takes O(n) time in the worst case. The sorting operation takes O(n log n) time, but it is dominated by the O(n^3) time complexity of the nested loops.
+
+### Space Complexity
+
+O(1) or O(n). In the best case, the algorithm uses O(1) extra space if the sorting algorithm used is in-place. However, if the sorting algorithm uses O(n) space (e.g., merge sort), then the space complexity of the algorithm is O(n). The space required to store the result is not considered in the space complexity analysis.
+
+### Code Walkthrough
+
+1. `n = len(nums)`: Get the length of the input array `nums`.
+2. `result = []`: Initialize an empty list `result` to store the quadruplets.
+3. `nums.sort()`: Sort the input array `nums` in ascending order.
+4. `for i in range(n - 3)`: Iterate through the first element of the quadruplet.
+5. `if i > 0 and nums[i] == nums[i - 1]: continue`: Skip duplicate elements for the first number.
+6. `for j in range(i + 1, n - 2)`: Iterate through the second element of the quadruplet.
+7. `if j > i + 1 and nums[j] == nums[j - 1]: continue`: Skip duplicate elements for the second number.
+8. `left = j + 1`: Initialize the left pointer to `j + 1`.
+9. `right = n - 1`: Initialize the right pointer to `n - 1`.
+10. `while left < right`: While the left pointer is less than the right pointer.
+11. `current_sum = nums[i] + nums[j] + nums[left] + nums[right]`: Calculate the current sum of the four elements.
+12. `if current_sum == target`: If the current sum is equal to the target.
+13. `result.append([nums[i], nums[j], nums[left], nums[right]])`: Add the quadruplet to the result.
+14. `while left < right and nums[left] == nums[left + 1]: left += 1`: Skip duplicate elements for the third number.
+15. `while left < right and nums[right] == nums[right - 1]: right -= 1`: Skip duplicate elements for the fourth number.
+16. `left += 1`: Move the left pointer to the right.
+17. `right -= 1`: Move the right pointer to the left.
+18. `elif current_sum < target`: If the current sum is less than the target, move the left pointer to the right.
+19. `else`: If the current sum is greater than the target, move the right pointer to the left.
+20. `return result`: Return the list of unique quadruplets.
+
+### Edge Cases
+
+1. **Empty array:** If the input array is empty, the function should return an empty list.
+2. **Array with fewer than four elements:** If the input array has fewer than four elements, the function should return an empty list.
+3. **Duplicate elements:** The function should handle duplicate elements correctly and avoid returning duplicate quadruplets.
+4. **Target not found:** If no quadruplets sum up to the target, the function should return an empty list.
+5. **Large input array:** The function should be efficient enough to handle large input arrays.
+
+### Key Concepts
+
+1. **Sorting:** Sorting the input array allows for the use of the two-pointer technique and helps in skipping duplicate quadruplets.
+2. **Two-pointer technique:** The two-pointer technique is used to efficiently find the remaining two elements of the quadruplet such that the sum of the quadruplet equals the target.
+3. **Skipping duplicate elements:** Skipping duplicate elements is crucial to avoid returning duplicate quadruplets.
+
+### Example Input
+
+`nums = [1, 0, -1, 0, -2, 2], target = 0`. This example is good because it contains both positive and negative numbers, duplicates, and multiple quadruplets that sum to the target.
+
+### Step-by-Step Trace
+
+Let's trace the execution with the input `nums = [1, 0, -1, 0, -2, 2]` and `target = 0`:
+
+1. `n = 6`
+2. `result = []`
+3. `nums.sort()`: `nums` becomes `[-2, -1, 0, 0, 1, 2]`
+4. `i = 0`: `nums[i] = -2`
+5. `j = 1`: `nums[j] = -1`
+6. `left = 2`: `nums[left] = 0`
+7. `right = 5`: `nums[right] = 2`
+8. `current_sum = -2 + -1 + 0 + 2 = -1`
+9. `current_sum < target`: `left = 3`, `nums[left] = 0`
+10. `current_sum = -2 + -1 + 0 + 2 = -1`
+11. `current_sum < target`: `left = 4`, `nums[left] = 1`
+12. `current_sum = -2 + -1 + 1 + 2 = 0`
+13. `result.append([-2, -1, 1, 2])`
+14. `left = 5`, `right = 4`: `left < right` is false, exit the while loop
+15. `j = 2`: `nums[j] = 0`
+16. `left = 3`, `nums[left] = 0`
+17. `right = 5`, `nums[right] = 2`
+18. `current_sum = -2 + 0 + 0 + 2 = 0`
+19. `result.append([-2, 0, 0, 2])`
+20. `left = 4`, `right = 4`: `left < right` is false, exit the while loop
+21. `i = 1`: `nums[i] = -1`
+22. `j = 2`: `nums[j] = 0`
+23. `left = 3`: `nums[left] = 0`
+24. `right = 5`: `nums[right] = 2`
+25. `current_sum = -1 + 0 + 0 + 2 = 1`
+26. `current_sum > target`: `right = 4`, `nums[right] = 1`
+27. `current_sum = -1 + 0 + 0 + 1 = 0`
+28. `result.append([-1, 0, 0, 1])`
+29. ... the algorithm continues, skipping duplicate quadruplets
+30. Finally, the algorithm returns `result = [[-2, -1, 1, 2], [-2, 0, 0, 2], [-1, 0, 0, 1]]`
+
+### Visual Representation
+
+```
+Input: [1, 0, -1, 0, -2, 2], target = 0
+
+Sorted: [-2, -1, 0, 0, 1, 2]
+
+Outer loop (i), showing the values at the left/right pointers:
+  i = -2: Inner loop (j):
+    j = -1: 0, 2 -> [-2, -1, 0, 2] = -1 < 0, left++
+            1, 2 -> [-2, -1, 1, 2] = 0 == 0, result.add([-2, -1, 1, 2])
+    j =  0: 0, 2 -> [-2, 0, 0, 2]  = 0 == 0, result.add([-2, 0, 0, 2])
+  i = -1: Inner loop (j):
+    j =  0: 0, 2 -> [-1, 0, 0, 2]  = 1 > 0, right--
+            0, 1 -> [-1, 0, 0, 1]  = 0 == 0, result.add([-1, 0, 0, 1])
+Output: [[-2, -1, 1, 2], [-2, 0, 0, 2], [-1, 0, 0, 1]]
+
+Visualization of the Two-Pointer Approach:
+
+[-2, -1, 0, 0, 1, 2]
+ i j L R
+
+```
+
+### Intermediate Outputs
+
+1. `nums.sort()`: `nums` becomes `[-2, -1, 0, 0, 1, 2]`
+2. Quadruplets found: `[-2, -1, 1, 2]`, `[-2, 0, 0, 2]`, `[-1, 0, 0, 1]`
+
+### Final Result
+
+The function returns `[[-2, -1, 1, 2], [-2, 0, 0, 2], [-1, 0, 0, 1]]`. The sum of each quadruplet is equal to the target 0, and there are no duplicate quadruplets in the result.
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/docs/examples/Edit_distance.md b/advance_ai_agents/AlgoMentor/docs/examples/Edit_distance.md
new file mode 100644
index 00000000..eedbcfcc
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/docs/examples/Edit_distance.md
@@ -0,0 +1,118 @@
+## Code Analysis: Edit Distance Calculation
+
+Here's a breakdown of the provided code for calculating the edit distance between two strings.
+
+**1. Code:**
+
+```python
+def edit_distance(s1, s2):
+ n = len(s1)
+ m = len(s2)
+
+ if n < m:
+ s1, s2 = s2, s1
+ n, m = m, n
+
+ dp = [[0] * (m + 1) for _ in range(2)]
+
+ for j in range(m + 1):
+ dp[0][j] = j
+
+ for i in range(1, n + 1):
+ dp[1][0] = i
+ for j in range(1, m + 1):
+ if s1[i - 1] == s2[j - 1]:
+ dp[1][j] = dp[0][j - 1]
+ else:
+ dp[1][j] = 1 + min(dp[0][j - 1], dp[0][j], dp[1][j - 1])
+ dp[0] = dp[1][:]
+
+ return dp[1][m]
+```
+
+**2. Problem Statement:**
+
+The code aims to compute the edit distance between two given strings, `s1` and `s2`. The edit distance quantifies the minimum number of single-character edits (insertions, deletions, or substitutions) needed to transform `s1` into `s2`.
+
+**3. Approach Summary:**
+
+The code implements a dynamic programming approach with space optimization to calculate the edit distance. It iteratively builds a table of edit distances between prefixes of the two strings. The core idea is to leverage the principle of optimality: the optimal solution to the larger problem depends on the optimal solutions to its subproblems. The code uses only two rows of the DP table at any given time, thus reducing the space complexity.
+
+**4. Detailed Approach:**
+
+1. **Initialization:**
+ - Determine the lengths of the input strings `s1` and `s2`, denoted as `n` and `m` respectively.
+
+2. **Optimization (String Swapping):**
+ - To minimize space usage, the code checks if `s1` is shorter than `s2`. If so, it swaps the two strings to ensure that `s1` is always the longer or equally long string. After swapping, the lengths `n` and `m` are updated accordingly. This is crucial because the number of columns in the DP table is determined by the length of the shorter string.
+
+3. **DP Table Setup:**
+ - A 2D DP table, `dp`, is created. Critically, only two rows are used, hence `range(2)`. It's initialized with dimensions 2 x (m + 1). Each `dp[i][j]` cell will store the edit distance between a prefix of `s1` and a prefix of `s2`.
+
+4. **Base Case Initialization:**
+ - The first row of the DP table (`dp[0]`) is initialized. `dp[0][j]` represents the edit distance between an empty string ("") and the first `j` characters of `s2`. Therefore, `dp[0][j] = j` because transforming an empty string to a string of length `j` requires `j` insertions.
+
+5. **Iteration and DP Calculation:**
+ - The code iterates through the strings using nested loops:
+ - The outer loop iterates from `i = 1` to `n`, representing prefixes of `s1`.
+ - The inner loop iterates from `j = 1` to `m`, representing prefixes of `s2`.
+
+ - Inside the loops, the code calculates the edit distance `dp[1][j]` based on two cases:
+ - **Characters Match (`s1[i - 1] == s2[j - 1]`):**
+ - If the current characters `s1[i - 1]` and `s2[j - 1]` are equal, no operation is needed. The edit distance is inherited from the diagonally previous cell: `dp[1][j] = dp[0][j - 1]`.
+
+ - **Characters Don't Match:**
+ - If the characters are different, one of three operations is required (insertion, deletion, or substitution). The algorithm calculates the minimum cost among these operations:
+ - `dp[1][j] = 1 + min(dp[0][j - 1], dp[0][j], dp[1][j - 1])`
+ - `dp[0][j - 1]`: Cost of substitution (replace `s1[i-1]` with `s2[j-1]`).
+ - `dp[0][j]`: Cost of deletion (delete `s1[i-1]`).
+ - `dp[1][j - 1]`: Cost of insertion (insert `s2[j-1]` into `s1`).
+
+6. **Row Update:**
+ - After calculating the entire row `dp[1]`, it's copied to `dp[0]` using `dp[0] = dp[1][:]`. This is crucial for the dynamic programming approach. The current row `dp[1]` becomes the previous row for the next iteration. The `[:]` ensures a shallow copy, preventing modification of dp[0] from affecting dp[1].
+
+7. **Result:**
+ - Finally, `dp[1][m]` contains the edit distance between the complete strings `s1` and `s2`, which is returned.
+
+**5. Time Complexity:**
+
+The time complexity is O(n * m), where `n` is the length of `s1` and `m` is the length of `s2`. This is because the nested loops iterate through all possible pairs of characters between the two strings.
+
+**6. Space Complexity:**
+
+The space complexity is O(min(n, m)). This is due to the space optimization where only two rows of the DP table are stored. The width of these rows depends on the length of the shorter string (after the potential string swap).
+
+**7. Code Walkthrough:**
+
+The code efficiently calculates the edit distance using dynamic programming with optimized space complexity. The core logic resides in the nested loops, where the edit distance is computed based on whether characters match or not. The use of only two rows in the DP table significantly reduces memory consumption, especially for long strings.
+
+**8. Edge Cases:**
+
+- **Empty Strings:** If either `s1` or `s2` is an empty string, the edit distance is simply the length of the non-empty string.
+- **Identical Strings:** If `s1` and `s2` are identical, the edit distance is 0.
+- **One string is a substring of the other:** The edit distance is the absolute difference in length between the two strings.
+- **Large Strings:** While the space complexity is optimized, the O(n*m) time complexity can still lead to performance issues with very large input strings.
+
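+A quick sanity check of these cases against the function above:
+
+```python
+assert edit_distance("", "abc") == 3       # empty vs. non-empty: length of the other string
+assert edit_distance("abc", "abc") == 0    # identical strings
+assert edit_distance("bc", "abcd") == 2    # substring: difference in lengths
+assert edit_distance("kitten", "sitting") == 3  # two substitutions and one insertion
+```
+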
+**9. Key Concepts:**
+
+- **Dynamic Programming:** Breaking down the problem into overlapping subproblems and storing solutions to avoid redundant computations.
+- **Edit Distance (Levenshtein Distance):** A measure of the similarity between two strings, representing the minimum number of edits (insertions, deletions, substitutions) needed to transform one string into the other.
+- **Space Optimization:** Reducing memory usage by storing only the necessary parts of the DP table.
+
+**10. Example Input and Step-by-Step Trace:**
+
+Take `s1 = "cat"` and `s2 = "cut"` (equal lengths, so no swap occurs). The base row is `dp[0] = [0, 1, 2, 3]`. Processing `i = 1` (`'c'`) yields `dp[1] = [1, 0, 1, 2]`; `i = 2` (`'a'`) yields `[2, 1, 1, 2]`; `i = 3` (`'t'`) yields `[3, 2, 2, 1]`. The answer is `dp[1][3] = 1`: substitute `'a'` with `'u'`.
+
+**11. Visual Representation:**
+
+|        | "" | c | u | t |
+|--------|----|---|---|---|
+| **""** | 0  | 1 | 2 | 3 |
+| **c**  | 1  | 0 | 1 | 2 |
+| **a**  | 2  | 1 | 1 | 2 |
+| **t**  | 3  | 2 | 2 | 1 |
+
+**12. Intermediate Outputs:**
+
+The DP table is the main intermediate output. Each cell contains the edit distance between prefixes of the two strings.
+
+**13. Final Result:**
+
+The final result is the edit distance between the two input strings.
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/docs/examples/MergeBst.md b/advance_ai_agents/AlgoMentor/docs/examples/MergeBst.md
new file mode 100644
index 00000000..f111e275
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/docs/examples/MergeBst.md
@@ -0,0 +1,129 @@
+## Code Description
+
+The code defines a function `merge_bsts` that merges two Binary Search Trees (BSTs) into a single sorted list. It uses an inorder traversal to extract sorted lists from each BST and then merges these sorted lists using a min-heap.
+
+## Approach Summary
+
+The approach involves three key steps: First, perform an inorder traversal on both BSTs to obtain two sorted lists. Second, merge these sorted lists using a min-heap data structure. Finally, return the merged and sorted list.
+
+## Detailed Approach
+
+1. **Inorder Traversal:** Traverse the first BST (`root1`) using an inorder traversal and store the node values in `list1`. Inorder traversal ensures that the values are stored in ascending order since it's a BST.
+2. **Inorder Traversal:** Similarly, traverse the second BST (`root2`) using inorder traversal and store the node values in `list2`.
+3. **Merge with Heap:** Use the `merge_sorted_lists_heap` function to merge `list1` and `list2` into a single sorted list. This function uses a min-heap to efficiently merge the lists. Elements from both lists are added to the heap, and the smallest element is repeatedly extracted to form the merged list.
+4. **Return Merged List:** Return the `merged_list` which contains all the elements from both BSTs in sorted order.
+
+## Time Complexity
+
+Let n be the number of nodes in the first BST and m be the number of nodes in the second BST.
+
+* Inorder traversal of the first BST takes O(n) time.
+* Inorder traversal of the second BST takes O(m) time.
+* Merging the two sorted lists using a heap takes O((n+m)log(n+m)) time, since each insertion and deletion from the heap takes O(log(n+m)) time, and we perform n+m such operations.
+
+Therefore, the overall time complexity is O(n) + O(m) + O((n+m)log(n+m)) which simplifies to O((n+m)log(n+m)).
+
+## Space Complexity
+
+Let n be the number of nodes in the first BST and m be the number of nodes in the second BST.
+
+* `list1` stores n elements, so it takes O(n) space.
+* `list2` stores m elements, so it takes O(m) space.
+* The heap stores at most n+m elements, so it takes O(n+m) space.
+* The `merged_list` stores n+m elements, so it takes O(n+m) space.
+
+Therefore, the overall space complexity is O(n) + O(m) + O(n+m) + O(n+m) which simplifies to O(n+m).
+
+## Code Walkthrough
+
+* `inorder_traversal(root, lst)`: This function performs an inorder traversal of a binary tree. If the current node `root` is not None, it recursively traverses the left subtree, appends the value of the current node to the list `lst`, and then recursively traverses the right subtree.
+* `merge_sorted_lists_heap(list1, list2)`: This function merges two sorted lists `list1` and `list2` into a single sorted list using a min-heap. It initializes an empty list `merged_list` and an empty heap. It iterates through both lists, pushing elements into the heap. Once all elements are in the heap, it repeatedly pops the smallest element from the heap and appends it to `merged_list`.
+* `merge_bsts(root1, root2)`: This function merges two BSTs represented by `root1` and `root2`. It initializes two empty lists, `list1` and `list2`. It performs inorder traversal on both BSTs, storing the node values in the respective lists. It then calls `merge_sorted_lists_heap` to merge the two sorted lists into a single sorted list, which it returns.
+
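+The analyzed code is not reproduced in this note; a minimal reconstruction consistent with the walkthrough above (the `TreeNode` class is an assumption about how tree nodes are represented) might look like:
+
+```python
+import heapq
+
+class TreeNode:
+    def __init__(self, val, left=None, right=None):
+        self.val = val
+        self.left = left
+        self.right = right
+
+def inorder_traversal(root, lst):
+    """Append node values to lst in ascending (inorder) order."""
+    if root is not None:
+        inorder_traversal(root.left, lst)
+        lst.append(root.val)
+        inorder_traversal(root.right, lst)
+
+def merge_sorted_lists_heap(list1, list2):
+    """Merge two sorted lists using a min-heap."""
+    heap = []
+    for value in list1 + list2:
+        heapq.heappush(heap, value)
+    return [heapq.heappop(heap) for _ in range(len(heap))]
+
+def merge_bsts(root1, root2):
+    list1, list2 = [], []
+    inorder_traversal(root1, list1)
+    inorder_traversal(root2, list2)
+    return merge_sorted_lists_heap(list1, list2)
+
+# The example trees from this note:
+bst1 = TreeNode(2, TreeNode(1), TreeNode(3))
+bst2 = TreeNode(8, TreeNode(5), TreeNode(9))
+print(merge_bsts(bst1, bst2))  # [1, 2, 3, 5, 8, 9]
+```
+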
+## Edge Cases
+
+* **Empty Trees:** If either or both of the input BSTs are empty, the code handles this gracefully. If both are empty, it returns an empty list. If one is empty, it effectively returns the inorder traversal of the other tree.
+* **Duplicate Values:** The code correctly handles duplicate values in the BSTs. The `merge_sorted_lists_heap` function ensures that duplicates are preserved in the merged list.
+* **Large Trees:** For very large trees, the space complexity O(n+m) could become a concern, but the code will still function correctly given sufficient memory.
+
+## Key Concepts
+
+* **Binary Search Tree (BST):** A binary tree where for each node, all nodes in its left subtree have values less than the node's value, and all nodes in its right subtree have values greater than the node's value.
+* **Inorder Traversal:** A tree traversal algorithm that visits the left subtree, then the root, then the right subtree. For a BST, inorder traversal yields the nodes in sorted order.
+* **Min-Heap:** A tree-based data structure where the value of each node is less than or equal to the value of its children. It allows efficient retrieval of the smallest element.
+* **Heapq Module:** Python's built-in module for implementing a heap data structure. `heapq.heappush` adds an element to the heap, and `heapq.heappop` removes and returns the smallest element from the heap.
+
+## Example Input
+
+Let's consider two BSTs:
+
+BST1:
+
+```
+ 2
+ / \
+ 1 3
+```
+
+BST2:
+
+```
+ 8
+ / \
+ 5 9
+```
+
+In this case, the `root1` would be the node with value 2, and `root2` would be the node with value 8. These are good example inputs because they represent two simple, yet distinct, BSTs that need to be merged and contain different value ranges.
+
+## Step-by-Step Trace
+
+1. `merge_bsts(root1, root2)` is called.
+2. `list1` and `list2` are initialized as empty lists: `list1 = []`, `list2 = []`.
+3. `inorder_traversal(root1, list1)` is called:
+ * Visits node 1, appends 1 to `list1`: `list1 = [1]`
+ * Visits node 2, appends 2 to `list1`: `list1 = [1, 2]`
+ * Visits node 3, appends 3 to `list1`: `list1 = [1, 2, 3]`
+4. `inorder_traversal(root2, list2)` is called:
+ * Visits node 5, appends 5 to `list2`: `list2 = [5]`
+ * Visits node 8, appends 8 to `list2`: `list2 = [5, 8]`
+ * Visits node 9, appends 9 to `list2`: `list2 = [5, 8, 9]`
+5. `merge_sorted_lists_heap(list1, list2)` is called with `list1 = [1, 2, 3]` and `list2 = [5, 8, 9]`.
+6. The `merge_sorted_lists_heap` function uses `heapq.heappush` to push elements to the heap. The heap will contain elements from both lists. `heap` becomes `[1, 2, 3, 5, 8, 9]` during insertions (not necessarily in this order due to the nature of heap implementation, but the min-heap property is maintained).
+7. The `merge_sorted_lists_heap` function uses `heapq.heappop` to pop the smallest element from the heap, and adds to `merged_list` repeatedly. The `merged_list` becomes `[1, 2, 3, 5, 8, 9]`.
+8. `merge_bsts` returns `merged_list` which is `[1, 2, 3, 5, 8, 9]`.
+
+## Visual Representation
+
+BST1:
+
+```
+ 2
+ / \
+ 1 3
+```
+
+BST2:
+
+```
+ 8
+ / \
+ 5 9
+```
+
+Inorder(BST1) -> `[1, 2, 3]`
+
+Inorder(BST2) -> `[5, 8, 9]`
+
+Merged List -> `[1, 2, 3, 5, 8, 9]`
+
+## Intermediate Outputs
+
+* After inorder traversal of BST1: `list1 = [1, 2, 3]`
+* After inorder traversal of BST2: `list2 = [5, 8, 9]`
+* Intermediate state of heap (during the merging process): `[1, 2, 3, 5, 8, 9]` (elements are added to the heap, maintaining heap property). Note that this is an illustrative representation; the actual heap structure is more complex.
+
+## Final Result
+
+The final merged and sorted list is `[1, 2, 3, 5, 8, 9]`. This correctly merges the values from both BSTs into a single sorted list.
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/main.py b/advance_ai_agents/AlgoMentor/main.py
new file mode 100644
index 00000000..027f1eb5
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/main.py
@@ -0,0 +1,20 @@
+import patch_sqlite  # must be imported first: swaps in pysqlite3 before anything imports sqlite3
+import streamlit as st
+from src.core.app import DSAAssistantApp
+from config.settings import PAGE_TITLE, PAGE_LAYOUT, SIDEBAR_STATE
+
+def main():
+ """Main application entry point"""
+ st.set_page_config(
+ page_title=PAGE_TITLE,
+ layout=PAGE_LAYOUT,
+ initial_sidebar_state=SIDEBAR_STATE,
+ page_icon="assets/logo.svg"
+ )
+
+ # Initialize and run the application
+ app = DSAAssistantApp()
+ app.run()
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/patch_sqlite.py b/advance_ai_agents/AlgoMentor/patch_sqlite.py
new file mode 100644
index 00000000..51e2fa2e
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/patch_sqlite.py
@@ -0,0 +1,5 @@
+"""Swap the stdlib sqlite3 module for pysqlite3-binary.
+
+Some hosts (e.g. Streamlit Cloud) ship a system SQLite that is too old for
+libraries that need a newer version; import this module before anything that
+imports sqlite3.
+"""
+import sys
+
+__import__('pysqlite3')
+sys.modules['sqlite3'] = sys.modules.pop('pysqlite3')
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/requirements.txt b/advance_ai_agents/AlgoMentor/requirements.txt
new file mode 100644
index 00000000..e6e345a2
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/requirements.txt
@@ -0,0 +1,6 @@
+agno==1.8.1
+streamlit
+groq
+google-genai
+streamlit-code-editor
+pysqlite3-binary
+python-dotenv
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/src/__init__.py b/advance_ai_agents/AlgoMentor/src/__init__.py
new file mode 100644
index 00000000..fc64fdd8
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/src/__init__.py
@@ -0,0 +1 @@
+"""DSA Assistant - AI-powered companion for mastering Data Structures & Algorithms"""
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/src/agents/__init__.py b/advance_ai_agents/AlgoMentor/src/agents/__init__.py
new file mode 100644
index 00000000..0fa31192
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/src/agents/__init__.py
@@ -0,0 +1 @@
+"""AI Agents for DSA problem solving"""
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/src/agents/brute_force.py b/advance_ai_agents/AlgoMentor/src/agents/brute_force.py
new file mode 100644
index 00000000..440b84e4
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/src/agents/brute_force.py
@@ -0,0 +1,138 @@
+from .code_evaluator_BForce import code_evaluator
+from agno.agent import Agent
+from agno.models.google import Gemini
+import streamlit as st
+from agno.tools.python import PythonTools
+from agno.models.groq import Groq
+from agno.team import Team
+from ..models.schemas import BruteForceApproach
+from dotenv import load_dotenv
+import os
+load_dotenv()
+
+# Prefer Streamlit secrets; fall back to environment variables from .env
+try:
+    groq_api_key = st.secrets['GROQ_API_KEY']
+    google_api_key = st.secrets['GOOGLE_API_KEY']
+except Exception:  # no secrets file or key not present
+    groq_api_key = os.getenv('GROQ_API_KEY')
+    google_api_key = os.getenv('GOOGLE_API_KEY')
+
+
+basic_approach=Agent(
+ name='Basic Approach',
+ model=Groq(id='llama-3.3-70b-versatile', api_key=groq_api_key),
+ description='This agent specializes in providing clear, straightforward brute force solutions to programming problems using the most basic and intuitive approaches.',
+ instructions=[
+ "π¨ CRITICAL CONSTRAINT: You are EXCLUSIVELY a Brute Force Solution Agent - ZERO optimizations permitted!",
+
+ " PRIMARY OBJECTIVE:",
+ "- Generate ONLY brute force approaches that prioritize simplicity and readability over efficiency",
+ "- Focus on solutions that a beginner programmer could easily understand and implement",
+
+ " METHODOLOGY REQUIREMENTS:",
+ "- Use only basic control structures: for loops, while loops, if-else statements",
+ "- Avoid advanced data structures (use arrays/lists, basic variables only)",
+ "- No algorithms like binary search, dynamic programming, or mathematical optimizations",
+ "- No built-in functions that could optimize the solution (like sort() unless explicitly needed)",
+
+ " RESPONSE STRUCTURE:",
+ "1. Problem Analysis: Break down what the problem is asking in simple terms",
+ "2. Brute Force Strategy: Explain the most straightforward approach step-by-step",
+ "3. Implementation: Provide clean, well-commented code using basic constructs",
+ "4. Complexity Note: Mention time/space complexity but don't suggest improvements",
+ "5. Test Cases: Include 2-3 simple examples showing how the solution works",
+
+ "STRICTLY FORBIDDEN:",
+ "- Any mention of optimizations, improvements, or 'better' approaches",
+ "- Advanced algorithms, data structures, or mathematical shortcuts",
+ "- Language-specific optimizations or built-in functions that hide complexity",
+ "- Time/space complexity improvements or suggestions for enhancement",
+
+ "COMMUNICATION STYLE:",
+ "- Explain concepts as if teaching a complete beginner",
+ "- Use simple, jargon-free language",
+ "- Walk through the logic step-by-step with examples",
+ "- Emphasize understanding over efficiency",
+
+ "EDUCATIONAL FOCUS:",
+ "- Help users understand the fundamental logic behind solving the problem",
+ "- Show how to think through problems systematically",
+ "- Demonstrate how basic programming constructs can solve complex-seeming problems",
+ "- Build confidence in problem-solving through accessible solutions"
+ ],
+ show_tool_calls=True,
+ add_context="β οΈ STRICT MODE: Generate ONLY brute force approaches. Reject any optimization requests. Use the most naive approach possible with basic loops only.",
+ response_model=BruteForceApproach
+ ,use_json_mode=True
+)
+
+basic_approach_code=Agent(
+ name="Basic Approach code",
+ tools=[PythonTools()],
+ model=Groq(id='llama-3.3-70b-versatile', api_key=groq_api_key),
+ description="Brute force algorithm specialist that ONLY implements the most naive, inefficient solutions using basic loops and simple logic",
+ instructions=[
+ "π¨ CRITICAL: You are STRICTLY a Brute Force Solver Agent - NO OPTIMIZATIONS ALLOWED!",
+ "Your ONLY job is to generate the most naive, slowest, and simplest solution possible.",
+ "MANDATORY RULES:",
+ "- Use nested for loops whenever possible, even if unnecessary",
+ "- Avoid any built-in functions that could optimize performance (like bin(), count(), etc.)",
+ "- Implement everything from scratch using basic operations only",
+ "- Choose the approach with highest time complexity among all possible brute force methods",
+ "- If there are multiple brute force approaches, pick the SLOWEST one",
+ "- Use only basic data structures: lists, integers, strings - no sets, dictionaries unless absolutely necessary",
+ "- Prefer O(nΒ²), O(nΒ³) or higher complexity solutions over O(n) when possible",
+ "- Always use manual iteration instead of built-in functions",
+ "- NEVER mention or suggest optimizations in your response",
+ "- If asked about efficiency, respond that this is intentionally the slowest correct method",
+ "Structure your response as:",
+ "1. **Approach**: Explain the brute force idea (emphasize it's intentionally naive)",
+ "2. **Algorithm**: Step-by-step brute force algorithm using basic loops",
+ "3. **Time Complexity**: State the intentionally high time complexity",
+ "4. **Space Complexity**: State the space complexity",
+ "5. **Code**: Implement using only basic for/while loops and simple operations"
+ ],
+ show_tool_calls=True,
+ add_context="β οΈ STRICT MODE: Generate ONLY brute force solutions. Reject any optimization requests. Use the most naive approach possible with basic loops only.",
+ add_datetime_to_instructions=True,
+ response_model=BruteForceApproach,
+ use_json_mode=True
+)
+
+basic_approach_team=Team(
+ name="Basic Approach Team",
+ members=[basic_approach,basic_approach_code,code_evaluator],
+ mode="collaborate",
+ model=Gemini(id='gemini-2.0-flash',api_key=google_api_key),
+ description="This team is designed to answer questions about the basic approach to the users question",
+ instructions=[
+ "BRUTE FORCE SOLUTION WORKFLOW:",
+
+ "PHASE 1 - APPROACH ANALYSIS:",
+ "- Run the `basic_approach` agent to analyze the problem and generate the brute force strategy",
+ "- Focus on the most naive, straightforward solution using basic programming constructs",
+ "- Ensure the approach prioritizes simplicity and readability over efficiency",
+ "- Generate step-by-step algorithmic breakdown using only basic loops and conditions",
+
+ "PHASE 2 - CODE IMPLEMENTATION:",
+ "- Run the `basic_approach_code` agent to implement the brute force solution",
+ "- Convert the algorithmic approach into working Python code",
+ "- Use only basic data structures (lists, variables) and control structures (for/while loops)",
+ "- Avoid any optimizations or advanced techniques - keep it intentionally naive",
+
+ "PHASE 3 - CODE VALIDATION:",
+ "- Run the `code_evaluator` agent to test and validate the implementation",
+ "- Ensure the code handles all provided test cases correctly",
+ "- Verify the solution works for edge cases and constraint boundaries",
+ "- Debug and fix any issues while maintaining the brute force nature",
+
+ "TEAM COORDINATION RULES:",
+ "- Each agent must maintain strict brute force principles - NO optimizations",
+ "- Pass complete information between agents for seamless workflow",
+ "- Ensure final output includes working code, clear algorithm, and accurate complexity analysis",
+ "- Maintain educational focus - solutions should be beginner-friendly and easy to understand"
+ ],
+ show_tool_calls=True,
+ add_datetime_to_instructions=True,
+    response_model=BruteForceApproach,
+    use_json_mode=True,
+)
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/src/agents/code_evaluator_BForce.py b/advance_ai_agents/AlgoMentor/src/agents/code_evaluator_BForce.py
new file mode 100644
index 00000000..417f52a6
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/src/agents/code_evaluator_BForce.py
@@ -0,0 +1,42 @@
+from agno.agent import Agent
+from agno.tools.python import PythonTools
+from agno.models.groq import Groq
+import streamlit as st
+import os
+from dotenv import load_dotenv
+load_dotenv()
+
+import atexit
+import shutil
+import tempfile
+
+# Create a temporary directory
+temp_dir = tempfile.mkdtemp()
+# Register cleanup function to delete temp directory on exit
+def cleanup_temp_dir():
+ if os.path.exists(temp_dir):
+ shutil.rmtree(temp_dir)
+
+atexit.register(cleanup_temp_dir)
+
+# Prefer Streamlit secrets; fall back to the GROQ_API_KEY environment variable
+try:
+    groq_api_key = st.secrets['GROQ_API_KEY']
+except Exception:  # no secrets file or key not present
+    groq_api_key = os.getenv('GROQ_API_KEY')
+
+code_evaluator = Agent(
+ tools=[PythonTools(run_code=True,save_and_run=True,base_dir=temp_dir)],
+ model=Groq(id="llama-3.3-70b-versatile",api_key=groq_api_key),
+ description="You are a Python developer specialized in code evaluation and testing.",
+ instructions=[
+ "you will recive a json object that contains the python code and the test_cases."
+ "1. First, analyze and understand the code logic",
+ "2. Run the provided code with the given examples",
+ "3. Compare the actual output with the expected output and show to the user",
+ "4. Provide detailed results including whether the code works correctly",
+ "5. If there are any issues, explain what went wrong and suggest fixes"
+ "6. if the code working fine then update the code to `updated_code` section in pydantic. and save to the .py file "
+ ],
+ show_tool_calls=True,
+ use_json_mode=True,
+ exponential_backoff=True,
+ retries=2,
+)
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/src/agents/code_verify.py b/advance_ai_agents/AlgoMentor/src/agents/code_verify.py
new file mode 100644
index 00000000..242a5e82
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/src/agents/code_verify.py
@@ -0,0 +1,68 @@
+import os
+from dotenv import load_dotenv
+load_dotenv()
+from agno.agent import Agent
+from agno.tools.python import PythonTools
+from agno.models.groq import Groq
+from agno.models.google import Gemini
+from ..models.schemas import CodeVerifyInput
+import streamlit as st
+import atexit
+import shutil
+import tempfile
+
+# Create a temporary directory
+temp_dir = tempfile.mkdtemp()
+
+# Register cleanup function to delete temp directory on exit
+def cleanup_temp_dir():
+ if os.path.exists(temp_dir):
+ shutil.rmtree(temp_dir)
+
+atexit.register(cleanup_temp_dir)
+
+
+# Prefer Streamlit secrets; fall back to environment variables from .env
+try:
+    groq_api_key = st.secrets['GROQ_API_KEY']
+    google_api_key = st.secrets['GOOGLE_API_KEY']
+except Exception:  # no secrets file or key not present
+    groq_api_key = os.getenv('GROQ_API_KEY')
+    google_api_key = os.getenv('GOOGLE_API_KEY')
+
+
+code_runner = Agent(
+ name="Optimized Code Executor",
+ model=Gemini(id='gemini-2.0-flash', api_key=google_api_key),
+ tools=[PythonTools(run_code=True, save_and_run=True,base_dir=temp_dir)],
+ description="You are a Python developer specialized in code evaluation and testing.",
+ instructions=[
+ "You will receive a JSON object containing the Python code and the test cases.",
+ "1. Analyze and understand the code logic.",
+ "2. Run the provided code with the given examples.",
+ "3. Compare the actual output with the expected output and show the results to the user.",
+ "4. Provide detailed results including whether the code works correctly.",
+ "5. If there are any issues, explain what went wrong and modify the code.",
+ "6. If the code is working fine, update the code in the `updated_code` section of the Pydantic model and save it to a .py file.",
+ "7. Always return an instance of `OptimizedCodeExecuter` with the required fields."
+ ],
+ expected_output="If the code works fine, return the working code. otherwise, return an error message.",
+ exponential_backoff=True,
+ retries=2,
+)
+
+
+
+# Agent without tools - with JSON mode
+code_verify_agent = Agent(
+ model=Groq(id="llama-3.3-70b-versatile",api_key=groq_api_key),
+ description="Formats code verification results",
+ instructions=[
+ "Format the code verification results into structured output",
+ "Ensure the output adheres to the Pydantic model's requirements",
+ "always Include the updated code in the `final_debuged_suboptimized_code` section of the Pydantic model"
+
+ ],
+ expected_output="format the input into the structured output",
+ response_model=CodeVerifyInput,
+ exponential_backoff=True,
+ retries=2,
+)
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/src/agents/notes.py b/advance_ai_agents/AlgoMentor/src/agents/notes.py
new file mode 100644
index 00000000..80bba2fd
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/src/agents/notes.py
@@ -0,0 +1,161 @@
+from agno.agent import Agent
+from agno.models.google import Gemini
+from agno.team import Team
+import os
+import streamlit as st
+from ..models.schemas import Explainer,ExampleExplanation,TeamOutput
+from dotenv import load_dotenv
+load_dotenv()
+
+# Prefer Streamlit secrets; fall back to environment variables from .env
+try:
+    groq_api_key = st.secrets['GROQ_API_KEY']
+    google_api_key = st.secrets['GOOGLE_API_KEY']
+except Exception:  # no secrets file or key not present
+    groq_api_key = os.getenv('GROQ_API_KEY')
+    google_api_key = os.getenv('GOOGLE_API_KEY')
+
+
+problem_explanation = Agent(
+ name="Comprehensive Code Explainer",
+ model=Gemini(id="gemini-2.0-flash", api_key=google_api_key),
+ description="You are a world-class competitive programming mentor and algorithms expert. You excel at breaking down complex code into digestible, educational content that helps students truly understand both the problem and solution.",
+ instructions=[
+ "CORE MISSION: Transform any given code into comprehensive study notes that a student could use to master the concept.",
+ "",
+ "ANALYSIS FRAMEWORK:",
+ "1. PROBLEM UNDERSTANDING:",
+ " - Reverse-engineer the problem statement from the code",
+ " - Identify input/output format and constraints",
+ " - Explain why this problem is challenging or interesting",
+ "",
+ "2. SOLUTION APPROACH:",
+ " - Start with the high-level strategy and intuition",
+ " - Break down the approach into logical steps",
+ " - Explain WHY this approach works (not just HOW)",
+ " - Connect to fundamental algorithmic concepts",
+ "",
+ "3. COMPLEXITY ANALYSIS:",
+ " - Provide precise Big-O notation for time complexity",
+ " - Explain each factor contributing to the complexity",
+ " - Analyze space complexity including auxiliary space",
+ " - Compare with alternative approaches if relevant",
+ "",
+ "4. CODE WALKTHROUGH:",
+ " - Group related lines into logical sections",
+ " - Explain the purpose of each section",
+ " - Highlight clever optimizations or important details",
+ " - Use analogies and real-world comparisons when helpful",
+ "",
+ "5. COMPREHENSIVE COVERAGE:",
+ " - Identify and explain edge cases",
+ " - List key concepts, algorithms, or data structures",
+ " - Suggest what students should focus on memorizing",
+ "",
+ "WRITING STYLE:",
+ "- Write as if creating study notes for an exam",
+ "- Use clear, conversational language",
+ "- Include 'why' explanations, not just 'what'",
+ "- Make complex concepts accessible to beginners",
+ "- Use formatting (bullet points, sections) for readability",
+ "- Add memory aids and key takeaways"
+ ],
+ exponential_backoff=True,
+ retries=2,
+ response_model=Explainer,
+ monitoring=True,
+ use_json_mode=True,
+ markdown=True
+)
+
+
+example_explanation = Agent(
+ name="Step-by-Step Code Tracer",
+ model=Gemini(id="gemini-2.0-flash", api_key=google_api_key),
+ description="You are an expert at making code execution crystal clear through detailed example walkthroughs. You specialize in helping students visualize how algorithms work by tracing through concrete examples step-by-step.",
+ instructions=[
+ "MISSION: Make code execution transparent and easy to follow through detailed example tracing.",
+ "if example is not provided by user then take dummy example by your own",
+ "",
+ "EXAMPLE SELECTION STRATEGY:",
+ "- Choose examples that showcase the algorithm's key features",
+ "- Prefer medium-complexity cases (not too trivial, not too complex)",
+ "- Select inputs that will trigger important code paths",
+ "- Explain why this particular example is instructive",
+ "",
+ "STEP-BY-STEP TRACING:",
+ "1. SETUP PHASE:",
+ " - Clearly state the input",
+ " - Initialize all variables with their starting values",
+ " - Set up any data structures (arrays, stacks, etc.)",
+ "",
+ "2. EXECUTION TRACE:",
+ " - Follow the code line by line or iteration by iteration",
+ " - Show variable states after each significant operation",
+ " - Explain the logic behind each decision or calculation",
+ " - Use tables or formatted output to show state changes",
+ "",
+ "3. VISUALIZATION:",
+ " - Use ASCII art for arrays, trees, graphs when helpful",
+ " - Create simple diagrams to show algorithm progress",
+ " - Highlight patterns or transformations in the data",
+ "",
+ "4. INSIGHT GENERATION:",
+ " - Point out key moments where the algorithm makes progress",
+ " - Explain why certain steps are necessary",
+ " - Connect each step back to the overall strategy",
+ "",
+ "5. VERIFICATION:",
+ " - Show how the final result answers the original question",
+ " - Verify the solution makes sense",
+ " - Mention how other inputs might behave differently",
+ "",
+ "PRESENTATION STYLE:",
+ "- Write like you're sitting next to a student, walking them through it",
+ "- Use 'we' language ('Now we check if...', 'Next, we update...')",
+ "- Emphasize cause-and-effect relationships",
+ "- Make state changes very explicit and easy to follow",
+ "- Use consistent formatting for variable states",
+ "- Add encouraging comments about tricky parts"
+ ],
+ exponential_backoff=True,
+ retries=2,
+ use_json_mode=True,
+ markdown=True,
+ response_model=ExampleExplanation,
+ monitoring=True
+)
+
+
+
+Notes_team=Team(
+ name="Notes Team",
+ mode="collaborate",
+ model=Gemini(id="gemini-2.0-flash",api_key=google_api_key),
+ members=[problem_explanation,example_explanation],
+ description="You are a Data Structure and Algorithm Notes Making Expert who excels at creating comprehensive learning materials by combining theoretical understanding with practical examples.",
+ instructions=[
+ "WORKFLOW:",
+ "1. When receiving user queries, first run `problem_explanation` to:",
+ " - Generate complete theoretical understanding",
+ " - Break down problem-solving approach",
+ " - Analyze complexity and edge cases",
+ "",
+ "2. Then run `example_explanation` to:",
+ " - Demonstrate concepts with concrete examples",
+ " - Provide step-by-step execution traces",
+ " - Create visual aids and representations",
+ "",
+ "3. Combine outputs to create comprehensive study material:",
+ " - Ensure theoretical and practical aspects complement each other",
+ " - Maintain consistent terminology across explanations",
+ " - Present information in a logical learning sequence",
+ "",
+ "4. Return the complete output without modifications to preserve:",
+ " - Accuracy of technical content",
+ " - Detailed explanations",
+ " - Visual representations",
+ " - Example walkthroughs",
+ ],
+ show_tool_calls=True,
+ markdown=True,
+    response_model=TeamOutput,
+    use_json_mode=True,
+)
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/src/agents/optimal_agent.py b/advance_ai_agents/AlgoMentor/src/agents/optimal_agent.py
new file mode 100644
index 00000000..51688c68
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/src/agents/optimal_agent.py
@@ -0,0 +1,85 @@
+from agno.agent import Agent
+from ..models.schemas import OptimalApproach,OptimalCode
+from agno.models.groq import Groq
+from agno.tools.python import PythonTools
+import streamlit as st
+from agno.models.google import Gemini
+import os
+from dotenv import load_dotenv
+load_dotenv()
+
+# Prefer Streamlit secrets; fall back to environment variables from .env
+try:
+    groq_api_key = st.secrets['GROQ_API_KEY']
+    google_api_key = st.secrets['GOOGLE_API_KEY']
+except Exception:  # no secrets file or key not present
+    groq_api_key = os.getenv('GROQ_API_KEY')
+    google_api_key = os.getenv('GOOGLE_API_KEY')
+
+optimal_code_agent=Agent(
+ name="optimal code writter",
+ model=Gemini(id='gemini-2.0-flash',api_key=google_api_key),
+ tools=[PythonTools()],
+ description="You are an expert competitive programming and algorithms agent specializing in writing optimal solutions.",
+ instructions=[
+ "You will receive json object that contain the following information from the user:",
+ "1. Problem statement",
+ "2. Optimal approach",
+ "3. Optimal algorithm",
+ "4. Optimal time and space complexity",
+
+ "Your tasks are:",
+ "- Carefully analyze the problem statement and the provided approaches.(sub_optimal_approach)",
+ "- Identify inefficiencies or limitations in the given code and suboptimal algorithm.",
+ "- Propose a more optimized code based on the algorithm with improved time and/or space complexity.",
+ "- Explain why your optimized solution is better.",
+ "- Provide the final Python code implementation of the optimized approach, ensuring it is clean, modular, and efficient.",
+ "- Clearly state the optimized solution's time and space complexity."
+ ],
+ show_tool_calls=True,
+ add_datetime_to_instructions=True,
+ response_model=OptimalCode,
+ use_json_mode=True
+)
+
+# Enhanced optimal_agent with stronger validation requirements
+optimal_agent_enhanced=Agent(
+ name="Optimal Approach Agent Enhanced",
+ model=Groq(id='llama-3.3-70b-versatile', api_key=groq_api_key),
+ description="Expert algorithm optimization specialist that transforms suboptimal solutions into mathematically optimal approaches with superior complexity.",
+ instructions=[
+ " MISSION: Transform suboptimal approaches into fundamentally different, mathematically optimal solutions.",
+ "",
+ " INPUT ANALYSIS:",
+ "You will receive:",
+ "1. Problem statement",
+ "2. Suboptimal approach (inefficient method)",
+ "3. Suboptimal algorithm (step-by-step inefficient process)",
+ "4. Suboptimal time complexity (higher complexity)",
+ "5. Suboptimal space complexity (potentially wasteful)",
+ "",
+ " OPTIMIZATION REQUIREMENTS:",
+ "- The optimal approach MUST be fundamentally different from the suboptimal one",
+ "- Target complexities: O(n) time, O(1) or O(n) space when possible",
+ "- For Dynamic Programming: aim for O(n) time, O(1) space using space optimization techniques",
+ "- For sorting problems: consider O(n log n) β O(n) improvements using counting/bucket sort",
+ "- For search problems: consider O(nΒ²) β O(n) using hash maps/sets",
+ "- For graph problems: optimize using advanced algorithms (Dijkstra, Floyd-Warshall, etc.)",
+ "",
+ " ANALYSIS PROCESS:",
+ "1. Identify the core inefficiency in the suboptimal approach",
+ "2. Research mathematical properties or patterns that can be exploited",
+ "3. Apply advanced data structures (hash maps, heaps, tries, segment trees)",
+ "4. Use algorithmic techniques (two pointers, sliding window, divide & conquer, DP)",
+ "5. Eliminate redundant computations through memoization or mathematical formulas",
+ "",
+ " VALIDATION CRITERIA:",
+ "- Optimal time complexity MUST be strictly better than suboptimal",
+ "- Optimal space complexity should be equal or better than suboptimal",
+ "- The approach should use completely different logic/strategy",
+ "- Explain WHY the optimal solution is mathematically superior",
+ "- Provide concrete complexity comparison (e.g., O(nΒ²) β O(n log n))"
+ ],
+ add_context=" CRITICAL REQUIREMENT: The optimal approach must be FUNDAMENTALLY DIFFERENT and MATHEMATICALLY SUPERIOR to the suboptimal approach. Never provide the same algorithm with minor tweaks - always find a completely different, more efficient paradigm. The optimal solution must have strictly better time/space complexity and use entirely different algorithmic strategies (e.g., brute force β dynamic programming, nested loops β hash maps, recursion β iterative with stack).",
+ show_tool_calls=True,
+ add_datetime_to_instructions=True,
+ response_model=OptimalApproach,
+ use_json_mode=True
+)
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/src/agents/problem_analyzer.py b/advance_ai_agents/AlgoMentor/src/agents/problem_analyzer.py
new file mode 100644
index 00000000..88b0b5b4
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/src/agents/problem_analyzer.py
@@ -0,0 +1,50 @@
+import streamlit as st
+from .qution_finder import question_finder
+from .brute_force import basic_approach_team
+from agno.team import Team
+from ..models.schemas import LeetCode
+from agno.models.google import Gemini
+import os
+from dotenv import load_dotenv
+load_dotenv()
+
+# Prefer Streamlit secrets; fall back to environment variables from .env
+try:
+    groq_api_key = st.secrets['GROQ_API_KEY']
+    google_api_key = st.secrets['GOOGLE_API_KEY']
+except Exception:  # no secrets file or key not present
+    groq_api_key = os.getenv('GROQ_API_KEY')
+    google_api_key = os.getenv('GOOGLE_API_KEY')
+
+leetcode_team=Team(
+ name="Leetcode Team",
+ mode='collaborate',
+ members=[question_finder,basic_approach_team],
+ model=Gemini(id='gemini-2.0-flash',api_key=google_api_key),
+ description="You are an expert DSA problem analysis team that transforms raw problem statements into structured, comprehensive problem breakdowns with brute-force solutions.",
+ instructions=[
+ "PROBLEM ANALYSIS WORKFLOW:",
+ "1. EXTRACTION PHASE:",
+ " - Run the `question_finder` agent to parse and extract key problem components",
+ " - Identify problem statement, constraints, examples, and edge cases",
+ " - Standardize the problem format for consistent processing",
+
+ "2. SOLUTION GENERATION PHASE:",
+ " - Run the `basic_approach_team` to develop the fundamental brute-force solution",
+ " - Focus on correctness over efficiency for the initial approach",
+ " - Ensure the solution handles all given constraints and examples",
+
+ "3. QUALITY ASSURANCE:",
+ " - Verify that all required fields are populated with meaningful content",
+ " - Ensure the basic algorithm is step-by-step and implementable",
+ " - Validate that time/space complexity analysis is accurate",
+ " - Confirm the code solution is syntactically correct and runnable",
+
+ "OUTPUT REQUIREMENTS:",
+ "- Provide a complete, structured problem analysis",
+ "- Include working brute-force code that solves all test cases",
+ "- Deliver clear algorithmic steps that can be easily understood",
+ "- Ensure complexity analysis is precise and well-justified"
+ ],
+ response_model=LeetCode,
+ use_json_mode=True,
+
+)
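+
+# Usage sketch (our illustration; assumes the usual agno Team API): the
+# Streamlit app drives this team, but it can also be exercised directly, e.g.
+#
+#     run = leetcode_team.run(
+#         "Two Sum: given an array of integers and a target, return indices "
+#         "of two numbers that add up to the target.")
+#     print(run.content)  # structured `LeetCode` object (see response_model)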
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/src/agents/qution_finder.py b/advance_ai_agents/AlgoMentor/src/agents/qution_finder.py
new file mode 100644
index 00000000..b03a72ed
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/src/agents/qution_finder.py
@@ -0,0 +1,22 @@
+from ..models.schemas import QuestionFinderInput
+import streamlit as st
+from agno.agent import Agent
+from agno.models.google import Gemini
+import os
+from dotenv import load_dotenv
+load_dotenv()
+
+# google_api_key=os.getenv("GOOGLE_API_KEY")
+google_api_key=st.secrets['GOOGLE_API_KEY']
+
+
+# Agent without tools for structured output
+question_finder=Agent(
+ name='Question Finder',
+ model=Gemini(id='gemini-2.0-flash',api_key=google_api_key),
+    description = "Formats the given content into a structured format",
+    instructions = [
+        "Format the given content into the required structure with problem statement, difficulty, examples, constraints, and explanations."
+    ],
+ response_model=QuestionFinderInput
+)
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/src/agents/sub_optimized_agent.py b/advance_ai_agents/AlgoMentor/src/agents/sub_optimized_agent.py
new file mode 100644
index 00000000..bce8010e
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/src/agents/sub_optimized_agent.py
@@ -0,0 +1,99 @@
+from agno.agent import Agent
+from ..models.schemas import SubOptimalApproach,SuboptimalCode
+from agno.models.groq import Groq
+from agno.tools.python import PythonTools
+import streamlit as st
+from agno.models.google import Gemini
+import os
+from dotenv import load_dotenv
+load_dotenv()
+
+# groq_api_key=os.getenv('GROQ_API_KEY')
+# google_api_key=os.getenv('GOOGLE_API_KEY')
+groq_api_key=st.secrets['GROQ_API_KEY']
+google_api_key=st.secrets['GOOGLE_API_KEY']
+
+
+suboptimal_agent = Agent(
+ name="Algorithm Optimization Specialist",
+ model=Groq(id='llama-3.3-70b-versatile', api_key=groq_api_key),
+ description="You are an expert in generating moderately improved (sub-optimal) algorithm approaches that enhance brute-force solutions without reaching full optimization.",
+ instructions=[
+ "π¨ CRITICAL: ALWAYS PROVIDE SUB-OPTIMAL SOLUTIONS ONLY. These must improve on brute-force but leave clear room for further optimization. NEVER provide the most efficient (optimal) approach.",
+
+ "ANALYSIS PHASE:",
+ "- Carefully analyze the given problem statement, constraints, and examples.",
+ "- Identify core inefficiencies in the basic/brute-force approach (e.g., unnecessary nested loops).",
+ "- Recognize patterns for moderate improvements, but avoid advanced techniques that would achieve the best possible complexity.",
+
+ "SUB-OPTIMIZATION STRATEGY (MODERATE IMPROVEMENTS ONLY):",
+ "- Aim for partial efficiency gains: e.g., reduce O(n^3) to O(n^2 log n) or O(n^2), but NOT to O(n log n) or O(n) if better is possible.",
+ "- Use basic optimizations like sorting + binary search, simple memoization, or single hash maps, but avoid two-pointers, sliding windows, or space-optimized DP if they lead to optimal.",
+ "- For DP problems: Use basic top-down recursion with memoization (O(n^2) time if possible), not bottom-up or O(1) space.",
+ "- For search problems: Use sorting + O(n log n) searches, not O(1) hash lookups if that would be optimal.",
+ "- Balance: Ensure time/space is better than brute but worse than optimal (e.g., keep O(n) space if O(1) is possible).",
+ "- Handle edge cases but do not over-engineer for perfection.",
+
+ "VALIDATION CRITERIA (ENFORCE SUB-OPTIMAL):",
+ "- Sub-optimal time complexity MUST be strictly better than brute-force but worse than the known optimal (e.g., for 3Sum: brute O(n^3) β sub-optimal O(n^2 log n) with sort + binary search, NOT O(n^2) with two-pointers).",
+ "- Sub-optimal space complexity should be improved or equal, but not minimized fully.",
+ "- Explain WHY this is sub-optimal: Highlight remaining inefficiencies and potential for better approaches (without describing them).",
+ "- Compare complexities: e.g., 'Brute: O(n^3), Sub-optimal: O(n^2 log n), leaving room for O(n^2) optimal'.",
+
+ "EXAMPLES OF SUB-OPTIMAL VS. OPTIMAL:",
+ "- Problem: Two Sum. Brute: O(n^2) nested loops. Sub-optimal: Sort + binary search (O(n log n)). Optimal: Hash map (O(n)). Provide only sort + binary search.",
+ "- Problem: Fibonacci. Brute: O(2^n) recursion. Sub-optimal: Memoization (O(n) time, O(n) space). Optimal: O(1) space iterative. Provide only memoization.",
+ "- Problem: Sorting. Brute: Bubble sort O(n^2). Sub-optimal: Insertion sort O(n^2). Optimal: Quick sort O(n log n). Provide insertion sort.",
+
+ "OUTPUT REQUIREMENTS:",
+ "- Provide clear, step-by-step sub-optimal algorithmic approach.",
+ "- State precise time and space complexity with justification.",
+ "- Ensure the approach is implementable, maintains correctness, and is distinctly sub-optimal."
+ ],
+ add_context="π¨ STRICT ENFORCEMENT: If the generated approach matches known optimal solutions (e.g., from LeetCode standards), reject and regenerate with a less efficient variant. Sub-optimal means 'good but not best'βalways leave inefficiencies for the optimal agent to address. Violating this will invalidate the response.",
+ show_tool_calls=True,
+ add_datetime_to_instructions=True,
+ response_model=SubOptimalApproach,
+ use_json_mode=True
+)
+
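+
+# Illustrative sketch (for reference only, not wired into the agent): the Two Sum
+# "sub-optimal" tier described in the instructions above - sort + binary search,
+# O(n log n) time - which improves on the O(n^2) brute force while deliberately
+# leaving the O(n) hash-map optimum on the table. Names are our own illustration.
+from bisect import bisect_left
+
+def two_sum_suboptimal(nums: list[int], target: int):
+    """Return original indices of two values summing to target, or None."""
+    order = sorted(range(len(nums)), key=nums.__getitem__)  # O(n log n) sort
+    values = [nums[i] for i in order]
+    for pos, value in enumerate(values):
+        # Binary search for the complement to the right of `pos`: O(log n).
+        j = bisect_left(values, target - value, pos + 1)
+        if j < len(values) and values[j] == target - value:
+            return order[pos], order[j]
+    return None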
+
+
+
+sub_agent = Agent(
+ name="Optimized Code Implementation Specialist",
+ model=Gemini(id="gemini-2.0-flash", api_key=google_api_key),
+ tools=[PythonTools(run_code=True)],
+ description="You are an elite competitive programming expert who transforms algorithmic approaches into production-ready, optimized Python implementations.",
+ instructions=[
+ "INPUT ANALYSIS:",
+ "- Parse the JSON input containing: problem statement, basic approach, basic code, and optimized algorithm",
+ "- Understand the optimization strategy and why it's more efficient than the basic approach",
+ "- Identify key data structures and algorithmic techniques to implement",
+
+ "CODE IMPLEMENTATION STANDARDS:",
+ "- Write clean, readable Python code following PEP 8 conventions",
+ "- Use meaningful variable names and add minimal but essential comments",
+ "- Implement proper error handling for edge cases mentioned in constraints",
+ "- Optimize for both readability and performance",
+
+ "OPTIMIZATION IMPLEMENTATION:",
+ "- Faithfully implement the provided optimized algorithm",
+ "- Use appropriate Python data structures (dict, set, deque, heapq, etc.)",
+ "- Apply Python-specific optimizations (list comprehensions, built-in functions)",
+ "- Ensure the implementation handles all edge cases from the problem constraints",
+
+ "VALIDATION & TESTING:",
+ "- Test the code with provided examples to ensure correctness",
+ "- Verify the implementation matches the expected time/space complexity",
+ "- Compare performance against the basic approach when possible",
+
+ "OUTPUT REQUIREMENTS:",
+ "- Provide complete, runnable Python code",
+ "- Include complexity analysis with detailed explanation",
+ "- Ensure the optimized code is significantly different from the basic approach",
+ "- Demonstrate why the optimization provides better performance"
+ ],
+ show_tool_calls=True,
+ response_model=SuboptimalCode
+)
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/src/core/__init__.py b/advance_ai_agents/AlgoMentor/src/core/__init__.py
new file mode 100644
index 00000000..cd670ec1
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/src/core/__init__.py
@@ -0,0 +1 @@
+"""Core application logic"""
\ No newline at end of file
diff --git a/advance_ai_agents/AlgoMentor/src/core/app.py b/advance_ai_agents/AlgoMentor/src/core/app.py
new file mode 100644
index 00000000..4c395633
--- /dev/null
+++ b/advance_ai_agents/AlgoMentor/src/core/app.py
@@ -0,0 +1,499 @@
+import streamlit as st
+import time
+import os
+from agno.agent import Agent
+from agno.models.google import Gemini
+from ..agents.problem_analyzer import leetcode_team
+from ..agents.sub_optimized_agent import suboptimal_agent, sub_agent
+from ..agents.code_verify import code_runner, code_verify_agent
+from ..agents.optimal_agent import optimal_code_agent, optimal_agent_enhanced
+from ..agents.notes import Notes_team
+from ..utils.code_editor import code_editor
+
+class DSAAssistantApp:
+ """Main DSA Assistant Application"""
+
+ def __init__(self):
+ self._initialize_session_state()
+
+ def _initialize_session_state(self):
+ """Initialize session state variables"""
+ # DSA Assistant states
+ if 'basic_done' not in st.session_state:
+ st.session_state.basic_done = False
+ if 'sub_optimal_done' not in st.session_state:
+ st.session_state.sub_optimal_done = False
+ if 'optimal_done' not in st.session_state:
+ st.session_state.optimal_done = False
+
+ # Notes Mentor states
+ if 'notes_input_code' not in st.session_state:
+ st.session_state.notes_input_code = ""
+ if 'notes_lang' not in st.session_state:
+ st.session_state.notes_lang = "python"
+ if 'notes_theme' not in st.session_state:
+ st.session_state.notes_theme = "default"
+ if 'notes_generated' not in st.session_state:
+ st.session_state.notes_generated = False
+ if 'generated_notes' not in st.session_state:
+ st.session_state.generated_notes = ""
+
+ def _apply_custom_css(self):
+ """Apply custom CSS styling"""
+ st.markdown("""
+
+ """, unsafe_allow_html=True)
+
+ def run(self):
+ """Run the main application"""
+ self._apply_custom_css()
+
+ # Navigation
+        tab1, tab2 = st.tabs(["🚀 DSA Assistant", "📝 Notes Mentor"])
+
+ with tab1:
+ self._run_dsa_assistant()
+
+ with tab2:
+ self._run_notes_mentor()
+
+ def _run_dsa_assistant(self):
+ """Run the DSA Assistant functionality"""
+ # Header
+ st.markdown("""
+
+
AlgoMentor DSA
+
Your AI-powered companion for mastering Data Structures & Algorithms
', unsafe_allow_html=True)
+
+ # st.balloons()
+        st.success("🎉 Congratulations! You've mastered this problem with all optimization levels!")
+
+ def _render_welcome_section(self):
+ """Render welcome section"""
+ st.markdown("### π Welcome to DSA Assistant!")
+ st.markdown("Enter a problem statement above to get started with your algorithmic journey.")
+
+ st.markdown("#### π Try these example problems:")
+ examples = [
+ "Two Sum: Given an array of integers and a target sum, return indices of two numbers that add up to target.",
+ "Binary Search: Search for a target value in a sorted array.",
+ "Fibonacci: Calculate the nth Fibonacci number.",
+ "House Robber: Given an integer array nums representing the amount of money of each house, return the maximum amount of money you can rob tonight without alerting the police."
+ ]
+
+ for i, example in enumerate(examples, 1):
+ if st.button(f"Example {i}: {example[:70]}...", key=f"ex_{i}", use_container_width=True):
+ st.session_state.example_query = example
+ st.rerun()
+
+ def _run_notes_mentor(self):
+ """Run the Notes Mentor functionality"""
+ # Header
+ st.markdown("""
+
+
AlgoMentor Notes
+
Transform your code into comprehensive study notes!
+
+ """, unsafe_allow_html=True)
+
+ # Sidebar for settings
+ with st.sidebar:
+            st.header("⚙️ Settings")
+
+ # Language selection
+ lang_options = ["python", "c_cpp", "java", "javascript", "c"]
+ selected_lang = st.selectbox("Programming Language:", lang_options,
+ index=lang_options.index(st.session_state.notes_lang) if st.session_state.notes_lang in lang_options else 0,
+ key="notes_lang_select")
+ st.session_state.notes_lang = selected_lang
+
+ # Theme selection
+ theme_options = ["default", "dark", "light"]
+ selected_theme = st.selectbox("Editor Theme:", theme_options,
+ index=theme_options.index(st.session_state.notes_theme) if st.session_state.notes_theme in theme_options else 0,
+ key="notes_theme_select")
+ st.session_state.notes_theme = selected_theme
+
+ st.markdown("---")
+            st.markdown("💡 **Tips:**\n- Paste your code and press Ctrl+Enter\n- Use clear variable names for better analysis\n- Include comments for complex logic")
+
+ # Main content area
+ col1, col2 = st.columns([1, 1])
+
+ with col1:
+ st.markdown("### π Code Input")
+ response_dict = code_editor(
+ st.session_state.notes_input_code,
+ lang=selected_lang,
+ key="code_input",
+ theme=selected_theme,
+ focus=True
+ )
+ input_code = response_dict["text"]
+ st.session_state.notes_input_code = input_code
+
+ if input_code:
+            st.markdown(f"""
+            <p>Based on your profile as a {st.session_state.user_profile['education']}
+            graduate with {st.session_state.user_profile['experience']} of experience,
+            we'll help you find the perfect career path.</p>
+            """, unsafe_allow_html=True)
+
+ # Career categories
+ st.markdown("### Select a Career Category")
+
+ # Get career options from the system
+ if st.session_state.career_system:
+ career_options = st.session_state.career_system.get_career_options()
+ else:
+ # Fallback options
+ career_options = {
+ "Technology": [
+ "Software Engineering",
+ "Data Science",
+ "Cybersecurity",
+ "AI/ML Engineering",
+ "DevOps",
+ "Cloud Architecture",
+ "Mobile Development"
+ ],
+ "Healthcare": [
+ "Medicine",
+ "Nursing",
+ "Pharmacy",
+ "Biomedical Engineering",
+ "Healthcare Administration",
+ "Physical Therapy"
+ ],
+ "Business": [
+ "Finance",
+ "Marketing",
+ "Management",
+ "Human Resources",
+ "Entrepreneurship",
+ "Business Analysis"
+ ],
+ "Creative": [
+ "Graphic Design",
+ "UX/UI Design",
+ "Content Creation",
+ "Digital Marketing",
+ "Animation",
+ "Film Production"
+ ]
+ }
+
+ # Display category buttons
+ col1, col2 = st.columns(2)
+ with col1:
+        if st.button("💻 Technology", help="Careers in software, data, cybersecurity, and more", key="tech_button", use_container_width=True):
+ st.session_state.selected_category = "Technology"
+ st.session_state.career_analysis = None
+ with col2:
+        if st.button("🏥 Healthcare", help="Medical and health-related careers", key="health_button", use_container_width=True):
+ st.session_state.selected_category = "Healthcare"
+ st.session_state.career_analysis = None
+
+ col3, col4 = st.columns(2)
+ with col3:
+        if st.button("💼 Business", help="Finance, marketing, management careers", key="business_button", use_container_width=True):
+ st.session_state.selected_category = "Business"
+ st.session_state.career_analysis = None
+ with col4:
+        if st.button("🎨 Creative", help="Design, content creation, and artistic careers", key="creative_button", use_container_width=True):
+ st.session_state.selected_category = "Creative"
+ st.session_state.career_analysis = None
+
+ # Display career options if category is selected
+ if st.session_state.selected_category:
+ st.markdown(f"### {st.session_state.selected_category} Careers")
+
+ # Show career options
+ selected_careers = career_options[st.session_state.selected_category]
+ career_cols = st.columns(2)
+
+ for i, career in enumerate(selected_careers):
+ with career_cols[i % 2]:
+ if st.button(career, key=f"career_{i}", use_container_width=True):
+ st.session_state.selected_career = career
+ st.session_state.career_analysis = None
+
+ # Show selected career details
+ if st.session_state.selected_career:
+            st.markdown(f"""
+            <p>Let's analyze this career path to help you understand the opportunities,
+            requirements, and job market.</p>
+            """, unsafe_allow_html=True)
+
+ # Allow user to analyze career
+ if st.session_state.career_analysis is None:
+ # Status of career system
+ if st.session_state.career_system:
+ if st.session_state.serpapi_key:
+ st.success("Our AI career advisors are ready to provide detailed analysis with up-to-date information!")
+ else:
+ st.success("Our AI career advisors are ready to provide detailed analysis!")
+
+                if st.button("🔍 Analyze This Career Path", type="primary", use_container_width=True):
+ with st.spinner(f"Analyzing {st.session_state.selected_career} career path... This may take a few minutes."):
+ try:
+ # Use the comprehensive career analysis method
+ if st.session_state.career_system:
+ career_analysis = st.session_state.career_system.comprehensive_career_analysis(
+ st.session_state.selected_career,
+ st.session_state.user_profile
+ )
+ else:
+ # Fallback to basic analysis
+ career_analysis = {
+ "career_name": st.session_state.selected_career,
+ "research": f"Analysis for {st.session_state.selected_career} would be generated by AI in a real implementation.",
+ "market_analysis": f"Market analysis for {st.session_state.selected_career}.",
+ "learning_roadmap": f"Learning roadmap for {st.session_state.selected_career}.",
+ "industry_insights": f"Industry insights for {st.session_state.selected_career}."
+ }
+
+ st.session_state.career_analysis = career_analysis
+ st.success("Analysis complete!")
+ st.session_state.show_chat = True # Enable chat after analysis
+ st.rerun()
+ except Exception as e:
+ st.error(f"Error during analysis: {str(e)}")
+
+ # Display analysis results if available
+ if st.session_state.career_analysis:
+ # Get the research content
+ research = st.session_state.career_analysis.get("research", "")
+ if isinstance(research, str) and research:
+                st.markdown(f"""
+                <h3>Overview of {st.session_state.selected_career}</h3>
+                {research}
+                """, unsafe_allow_html=True)
+
+# Tab 2: Market Analysis
+with tab2:
+ st.markdown("## Job Market Analysis")
+
+ if not st.session_state.groq_api_key:
+ st.warning("Please enter your Groq API key in the sidebar to get started.")
+ elif not st.session_state.selected_career:
+ st.info("Please select a career in the 'Discover Careers' tab first.")
+ else:
+ st.markdown(f"### Market Analysis for: {st.session_state.selected_career}")
+
+ # Check if we already have analysis
+ if st.session_state.career_analysis and "market_analysis" in st.session_state.career_analysis:
+ # Display the market analysis
+ market_analysis = st.session_state.career_analysis["market_analysis"]
+
+ # Display the market analysis as Markdown
+            st.markdown(f"""
+            <h3>📊 Market Analysis</h3>
+            {market_analysis}
+            """, unsafe_allow_html=True)
+
+ # Job growth visualization
+ st.markdown("### Job Growth Projection")
+
+            # Create illustrative sample data (randomized placeholder, not real market figures)
+ years = list(range(2025, 2030))
+ growth_rate = np.random.uniform(0.05, 0.15)
+ starting_jobs = np.random.randint(80000, 200000)
+ jobs = [starting_jobs * (1 + growth_rate) ** i for i in range(5)]
+
+ # Calculate CAGR (Compound Annual Growth Rate)
+ cagr = (jobs[-1]/jobs[0])**(1/4) - 1 # 4-year CAGR
+
+ job_fig = px.line(
+ x=years,
+ y=jobs,
+ labels={"x": "Year", "y": "Projected Jobs"},
+ title=f"Projected Job Growth for {st.session_state.selected_career}"
+ )
+
+ # Apply styling
+ job_fig.update_layout(
+ template="plotly_dark",
+ paper_bgcolor="#212121",
+ plot_bgcolor="#212121",
+ font=dict(color="#E0E0E0"),
+ title_font=dict(color="#82B1FF"),
+ xaxis=dict(gridcolor="#424242"),
+ yaxis=dict(gridcolor="#424242")
+ )
+
+ job_fig.update_traces(mode="lines+markers", line=dict(width=3, color="#2196F3"), marker=dict(size=10))
+
+ # Add annotation for CAGR
+ job_fig.add_annotation(
+ x=years[2],
+ y=jobs[2],
+ text=f"CAGR: {cagr:.1%}",
+ showarrow=True,
+ arrowhead=1,
+ arrowsize=1,
+ arrowwidth=2,
+ arrowcolor="#FF5722",
+ font=dict(size=14, color="#FF5722"),
+ bgcolor="#212121",
+ bordercolor="#FF5722",
+ borderwidth=2,
+ borderpad=4,
+ ax=-50,
+ ay=-40
+ )
+
+ st.plotly_chart(job_fig, use_container_width=True)
+
+ # Salary analysis
+ st.markdown("### Salary Analysis")
+
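+            # Illustrative salary figures (randomized placeholders, not survey data)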
+ experience_levels = ["Entry Level", "Mid Level", "Senior", "Expert"]
+ base_salary = np.random.randint(60000, 90000)
+ salaries = [base_salary]
+ for i in range(1, 4):
+ salaries.append(salaries[-1] * (1 + np.random.uniform(0.2, 0.4)))
+
+ salary_fig = px.bar(
+ x=experience_levels,
+ y=salaries,
+ labels={"x": "Experience Level", "y": "Annual Salary ($)"},
+ title=f"Salary by Experience Level - {st.session_state.selected_career}"
+ )
+
+ # Apply styling
+ salary_fig.update_layout(
+ template="plotly_dark",
+ paper_bgcolor="#212121",
+ plot_bgcolor="#212121",
+ font=dict(color="#E0E0E0"),
+ title_font=dict(color="#82B1FF"),
+ xaxis=dict(gridcolor="#424242"),
+ yaxis=dict(gridcolor="#424242")
+ )
+
+ salary_fig.update_traces(marker=dict(color=["#64B5F6", "#42A5F5", "#2196F3", "#1976D2"]))
+
+ st.plotly_chart(salary_fig, use_container_width=True)
+
+ else:
+ # Generate new market analysis
+ if st.button("Generate Market Analysis", type="primary", use_container_width=True):
+ with st.spinner("Generating market analysis with up-to-date information..."):
+ try:
+ if st.session_state.career_system:
+ # Use the market analysis method
+ market_analysis = st.session_state.career_system.analyze_market_trends(
+ st.session_state.selected_career
+ )
+
+ # Save to session state
+ if "career_analysis" not in st.session_state:
+ st.session_state.career_analysis = {}
+
+ if isinstance(st.session_state.career_analysis, dict):
+ st.session_state.career_analysis["market_analysis"] = market_analysis
+ else:
+ st.session_state.career_analysis = {
+ "career_name": st.session_state.selected_career,
+ "market_analysis": market_analysis
+ }
+
+ # Show success message and rerun
+ st.success("Market analysis complete!")
+ st.rerun()
+ else:
+ st.error("Career guidance system not initialized. Please check your API key.")
+
+ except Exception as e:
+ st.error(f"Error generating market analysis: {str(e)}")
+
+# Tab 3: Learning Roadmap
+with tab3:
+ st.markdown("## Personalized Learning Roadmap")
+
+ if not st.session_state.groq_api_key:
+ st.warning("Please enter your Groq API key in the sidebar to get started.")
+ elif not st.session_state.selected_career:
+ st.info("Please select a career in the 'Discover Careers' tab first.")
+ else:
+ st.markdown(f"### Learning Roadmap for: {st.session_state.selected_career}")
+
+ # Experience level for roadmap
+ experience_options = {
+ "Student/No experience": "beginner",
+ "0-2 years": "beginner",
+ "3-5 years": "intermediate",
+ "5-10 years": "advanced",
+ "10+ years": "expert"
+ }
+
+ user_experience = st.session_state.user_profile.get("experience", "Student/No experience")
+ experience_level = experience_options.get(user_experience, "beginner")
+
+ # Display user's current level
+        st.markdown(f"""
+        <h4>Your Current Level: {experience_level.title()}</h4>
+        <p>This roadmap is tailored for someone at your experience level.</p>
+        """, unsafe_allow_html=True)
+
+ # Check if we already have a roadmap in the career analysis
+ if st.session_state.career_analysis and "learning_roadmap" in st.session_state.career_analysis:
+ roadmap = st.session_state.career_analysis["learning_roadmap"]
+
+ # Display roadmap
+            st.markdown(f"""
+            <h3>📚 Learning Roadmap</h3>
+            {roadmap}
+            """, unsafe_allow_html=True)
+
+ # Create a simple timeline visualization
+ st.markdown("### Learning Journey Timeline")
+
+            # Build illustrative timeline data (stage labels and sample progress values)
+ months = ["Initial Setup", "3 Months", "6 Months", "9 Months", "1 Year", "2 Years"]
+ progress = [100, 80, 60, 40, 20, 10] if experience_level == "beginner" else [100, 100, 80, 60, 40, 20]
+
+ fig = px.bar(
+ x=progress,
+ y=months,
+ orientation='h',
+ labels={"x": "Progress (%)", "y": "Learning Stage"},
+ title=f"Learning Journey for {st.session_state.selected_career}"
+ )
+
+ # Apply styling
+ fig.update_layout(
+ template="plotly_dark",
+ paper_bgcolor="#212121",
+ plot_bgcolor="#212121",
+ font=dict(color="#E0E0E0"),
+ xaxis=dict(gridcolor="#424242"),
+ yaxis=dict(gridcolor="#424242", categoryorder="array", categoryarray=months[::-1])
+ )
+
+ fig.update_traces(marker=dict(color="#4CAF50"))
+
+ st.plotly_chart(fig, use_container_width=True)
+
+ else:
+ # Generate new roadmap
+ if st.button("Generate Learning Roadmap", type="primary", use_container_width=True):
+ with st.spinner("Generating personalized learning roadmap with current resources..."):
+ try:
+ if st.session_state.career_system:
+ # Use the learning roadmap method
+ roadmap = st.session_state.career_system.create_learning_roadmap(
+ st.session_state.selected_career,
+ experience_level
+ )
+
+ # Save to session state
+ if "career_analysis" not in st.session_state:
+ st.session_state.career_analysis = {}
+
+ if isinstance(st.session_state.career_analysis, dict):
+ st.session_state.career_analysis["learning_roadmap"] = roadmap
+ else:
+ st.session_state.career_analysis = {
+ "career_name": st.session_state.selected_career,
+ "learning_roadmap": roadmap
+ }
+
+ # Show success and rerun
+ st.success("Learning roadmap generated successfully!")
+ st.rerun()
+ else:
+ st.error("Career guidance system not initialized. Please check your API key.")
+
+ except Exception as e:
+ st.error(f"Error generating roadmap: {str(e)}")
+
+# Tab 4: Career Insights
+with tab4:
+ st.markdown("## Advanced Career Insights")
+
+ if not st.session_state.groq_api_key:
+ st.warning("Please enter your groq API key in the sidebar to get started.")
+ elif not st.session_state.selected_career:
+ st.info("Please select a career in the 'Discover Careers' tab first.")
+ else:
+ # Display career insights
+ if st.session_state.career_analysis and "industry_insights" in st.session_state.career_analysis:
+ insights_text = st.session_state.career_analysis["industry_insights"]
+
+ # Display insights
+            st.markdown(f"""
+            <h3>💡 Industry Insights</h3>
+            {insights_text}
+            """, unsafe_allow_html=True)
+
+ # Display skills visualization
+ st.markdown("### Key Skills Assessment")
+
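+            # Illustrative importance scores (randomized placeholder values)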
+ skills = {
+ "Technical": np.random.randint(70, 95),
+ "Problem-solving": np.random.randint(70, 95),
+ "Communication": np.random.randint(70, 95),
+ "Teamwork": np.random.randint(70, 95),
+ "Industry Knowledge": np.random.randint(70, 95)
+ }
+
+ skills_fig = px.bar(
+ x=list(skills.keys()),
+ y=list(skills.values()),
+ labels={"x": "Skill", "y": "Importance (%)"},
+ title=f"Skills Importance for {st.session_state.selected_career}"
+ )
+
+ # Apply styling
+ skills_fig.update_layout(
+ template="plotly_dark",
+ paper_bgcolor="#212121",
+ plot_bgcolor="#212121",
+ font=dict(color="#E0E0E0"),
+ xaxis=dict(gridcolor="#424242"),
+ yaxis=dict(gridcolor="#424242")
+ )
+
+ st.plotly_chart(skills_fig, use_container_width=True)
+
+ else:
+ # Generate new insights
+ if st.button("Generate Industry Insights", type="primary", use_container_width=True):
+ with st.spinner("Gathering industry insights from professionals..."):
+ try:
+ if st.session_state.career_system:
+ # Get insights using the career system
+ insights = st.session_state.career_system.get_career_insights(
+ st.session_state.selected_career
+ )
+
+ # Save to session state
+ if "career_analysis" not in st.session_state:
+ st.session_state.career_analysis = {}
+
+ if isinstance(st.session_state.career_analysis, dict):
+ st.session_state.career_analysis["industry_insights"] = insights
+ else:
+ st.session_state.career_analysis = {
+ "career_name": st.session_state.selected_career,
+ "industry_insights": insights
+ }
+
+ st.success("Industry insights generated successfully!")
+ st.rerun()
+ else:
+ st.error("Career guidance system not initialized. Please check your API key.")
+
+ except Exception as e:
+ st.error(f"Error generating insights: {str(e)}")
+
+# Tab 5: Chat Assistant
+with tab5:
+ st.markdown("## Career Chat Assistant")
+
+ if not st.session_state.groq_api_key:
+ st.warning("Please enter your GROQ API key in the sidebar to get started.")
+ elif not st.session_state.selected_career:
+ st.info("Please select a career in the 'Discover Careers' tab first.")
+ else:
+ # Display integrated chat interface
+ career_data = st.session_state.career_analysis
+ career_system = st.session_state.career_system
+
+ display_chat_interface(career_data, career_system)
+
+# Add information about the AI system
+with st.expander("ℹ️ About this AI Career Guidance System"):
+ st.markdown("""
+ This AI-powered Career Guidance Platform uses advanced AI technologies to provide personalized career insights:
+
+ - **LangChain**: For structured interaction with AI language models
+ - **Web Search**: The system can search the internet for up-to-date information (requires SerpAPI key)
+ - **Streamlit**: Powers the interactive web interface
+
+ The system provides five key services:
+ 1. **Career Discovery**: Explore career options across different fields
+ 2. **Market Analysis**: Understand job growth, salary trends, and market demand
+ 3. **Learning Roadmap**: Get personalized education and skill development plans
+ 4. **Industry Insights**: Learn about workplace culture, advancement opportunities, and day-to-day responsibilities
+ 5. **Chat Assistant**: Ask specific questions about your selected career path
+
+ For the best experience, enter your API key in the sidebar.
+ """)
\ No newline at end of file
diff --git a/simple_ai_agents/Career_Guidence/career_chatbot.py b/simple_ai_agents/Career_Guidence/career_chatbot.py
new file mode 100644
index 00000000..4d885792
--- /dev/null
+++ b/simple_ai_agents/Career_Guidence/career_chatbot.py
@@ -0,0 +1,362 @@
+from langchain_community.vectorstores import FAISS
+import streamlit as st
+import time
+import numpy as np
+from langchain.text_splitter import RecursiveCharacterTextSplitter
+from langchain.chains import ConversationalRetrievalChain
+from langchain_groq import ChatGroq
+import os
+from langchain.prompts import PromptTemplate
+from dotenv import load_dotenv
+from langchain_google_genai import GoogleGenerativeAIEmbeddings
+load_dotenv()
+
+GOOGLE_API_KEY = st.secrets["GOOGLE_API_KEY"]
+# GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
+
+class CareerChatAssistant:
+ def __init__(self, career_system=None):
+ """Initialize the career chat assistant with the career guidance system"""
+ self.career_system = career_system
+ self.groq_api_key = career_system.groq_api_key if career_system else None
+ self.vector_store = None
+ self.retrieval_chain = None
+
+
+ self.conversation_history = []
+ self.chat_history = []
+
+ def add_to_history(self, role, message):
+ """Add a message to the conversation history"""
+ self.conversation_history.append({"role": role, "message": message})
+
+ def get_formatted_history(self):
+ """Get the conversation history formatted for prompt"""
+ formatted = ""
+ for entry in self.conversation_history:
+ formatted += f"{entry['role']}: {entry['message']}\n"
+ return formatted
+
+ def initialize_rag(self, career_data):
+ """Initialize RAG with career analysis data"""
+ if not self.groq_api_key or not career_data:
+ return False
+
+ try:
+
+ # Initialize embeddings
+ # embeddings = OpenAIEmbeddings(
+ # model="provider-3/text-embedding-ada-002",
+ # )
+ embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
+
+ documents = []
+ if "research" in career_data:
+ documents.append(f"Career Overview: {career_data['research']}")
+ if "market_analysis" in career_data:
+ documents.append(f"Market Analysis: {career_data['market_analysis']}")
+ if "learning_roadmap" in career_data:
+ documents.append(f"Learning Roadmap: {career_data['learning_roadmap']}")
+ if "industry_insights" in career_data:
+ documents.append(f"Industry Insights: {career_data['industry_insights']}")
+
+ if not documents:
+ return False
+
+
+ text_splitter = RecursiveCharacterTextSplitter(
+ chunk_size=1000,
+ chunk_overlap=100,
+ length_function=len,
+ )
+
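+            # ~1,000-character chunks with 100 characters of overlap, so text
+            # that straddles a chunk boundary still appears intact in one chunk.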
+ chunks = text_splitter.create_documents([" ".join(documents)])
+
+
+ self.vector_store = FAISS.from_documents(chunks, embeddings)
+
+ structured_prompt_template = """
+ You are a Career Chat Assistant providing information about careers based on detailed analysis.
+
+ Context information from career analysis:
+ {context}
+
+ Chat History:
+ {chat_history}
+
+ Human Question: {question}
+
+ Provide a clear, concise, and structured response. Format your answer using bullet points or numbered lists
+ where appropriate. Organize information into clear categories with headings when the answer requires multiple
+ sections. Make the response easy to scan and understand at a glance.
+
+ If multiple aspects need to be covered, use separate bullet points for each aspect.
+ If providing steps or a process, use numbered lists.
+ Use markdown formatting to enhance readability.
+ Keep the answer focused and directly relevant to the question.
+
+ Assistant Response:
+ """
+
+ structured_prompt = PromptTemplate(
+ template=structured_prompt_template,
+ input_variables=["context", "chat_history", "question"]
+ )
+
+
+ llm = ChatGroq(model='gemma2-9b-it', groq_api_key=self.groq_api_key, temperature=0.2)
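+            # Retrieve the 3 most similar chunks per question (k=3) and let the
+            # LLM compose the answer with the structured prompt above.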
+ self.retrieval_chain = ConversationalRetrievalChain.from_llm(
+ llm=llm,
+ retriever=self.vector_store.as_retriever(search_kwargs={"k": 3}),
+ combine_docs_chain_kwargs={"prompt": structured_prompt}
+ )
+
+ return True
+ except Exception as e:
+ print(f"Error initializing RAG: {str(e)}")
+ return False
+
+ def process_question(self, question, career_data=None):
+ """Process a user question about career data using RAG"""
+ self.add_to_history("User", question)
+
+ # Initialize RAG if not already done
+ if not self.vector_store and career_data:
+ rag_success = self.initialize_rag(career_data)
+ if rag_success:
+ st.session_state.rag_initialized = True
+
+ if self.retrieval_chain and st.session_state.get("rag_initialized", False):
+ try:
+ # Use RAG to answer the question
+ result = self.retrieval_chain.invoke({
+ "question": question,
+ "chat_history": self.chat_history
+ })
+
+ # Update chat history for context
+ self.chat_history.append((question, result["answer"]))
+
+ response = result["answer"]
+ except Exception as e:
+ print(f"Error in RAG processing: {str(e)}")
+ # Fallback to standard processing
+ response = self._fallback_processing(question, career_data)
+ else:
+ # Standard processing if RAG is not available
+ response = self._fallback_processing(question, career_data)
+
+ self.add_to_history("Career Assistant", response)
+ return response
+
+ def _fallback_processing(self, question, career_data=None):
+ """Fallback processing when RAG is not available"""
+ if self.career_system:
+ # Use the career guidance system's chat function
+ return self.career_system.chat_with_assistant(question, career_data)
+ else:
+ # Simple keyword-based fallback with structured responses
+ career_name = career_data.get("career_name", "the selected career") if career_data else "this career"
+
+ # Simple response generation based on keywords
+ if "salary" in question.lower() or "pay" in question.lower() or "money" in question.lower():
+ return f"""
+ ### Salary Information for {career_name}
+
+ The salary ranges vary based on several factors:
+
+ * **Entry-level positions**: $60,000-$80,000
+ * **Mid-level professionals**: $80,000-$110,000
+ * **Experienced professionals**: $110,000-$150,000
+ * **Senior roles**: $150,000+ with additional benefits
+
+ Key factors affecting compensation:
+ * Geographic location (major tech hubs typically pay more)
+ * Company size and industry
+ * Educational background and certifications
+ * Specialized skills and expertise
+ """
+
+ elif "skills" in question.lower() or "learn" in question.lower() or "study" in question.lower():
+ return f"""
+ ### Essential Skills for {career_name}
+
+ ### Technical Skills
+ * Domain-specific technical knowledge
+ * Relevant tools and technologies
+ * Problem-solving methodologies
+ * Technical documentation
+
+ ### Soft Skills
+ * Communication (written and verbal)
+ * Collaboration and teamwork
+ * Project management
+ * Time management
+ * Adaptability and continuous learning
+
+ For best results, develop a balanced combination of both technical expertise and interpersonal abilities.
+ """
+
+ elif "job" in question.lower() or "market" in question.lower() or "demand" in question.lower():
+ return f"""
+ ### Job Market for {career_name}
+
+ ### Current Outlook
+ * Strong projected growth over the next 5-10 years
+ * Increasing demand across multiple industries
+ * Evolution of the role with emerging technologies
+
+ ### Top Regions
+ * Major tech hubs: San Francisco, New York, Seattle
+ * Growing secondary markets: Austin, Denver, Raleigh
+ * Increasing remote opportunities
+
+ ### Industry Demand
+ * Technology sector: High and consistent demand
+ * Finance and healthcare: Growing adoption
+ * Manufacturing and retail: Emerging opportunities
+ """
+
+ elif "day" in question.lower() or "work" in question.lower() or "like" in question.lower():
+ return f"""
+ ### Typical Day as a {career_name} Professional
+
+ ### Daily Activities
+ 1. Technical work and core responsibilities
+ 2. Collaboration meetings with team members
+ 3. Problem-solving sessions
+ 4. Documentation and reporting
+
+ ### Work Environment
+ * Varies by company size and culture
+ * Often includes a mix of independent and team-based work
+ * Typically project-based with deadlines and milestones
+ * Increasing flexibility with hybrid and remote options
+
+ The exact balance depends on:
+ * Your specific role and seniority
+ * Company culture and management style
+ * Project requirements and timelines
+ * Team structure and workflows
+ """
+
+ elif "education" in question.lower() or "degree" in question.lower() or "certification" in question.lower():
+ return f"""
+ ### Educational Pathways for {career_name}
+
+ ### Formal Education
+ * **Bachelor's Degree**: Common foundation in related field
+ * **Master's Degree**: Advanced positions and specializations
+ * **PhD**: Research and highly specialized roles
+
+ ### Alternative Paths
+ * **Bootcamps**: Intensive, focused training programs
+ * **Self-directed learning**: Online courses and projects
+ * **Apprenticeships/Internships**: Learning through practice
+
+ ### Valuable Certifications
+ * Industry-specific certifications
+ * Tool and technology certifications
+ * Methodology certifications
+
+ Many professionals enter the field through non-traditional paths. A portfolio of projects demonstrating skills can be as valuable as formal credentials.
+ """
+
+ else:
+ return f"""
+ ### Overview of {career_name} Career Path
+
+ ### Core Aspects
+ * Combines technical expertise with creative problem-solving
+ * Requires continuous learning and adaptation
+ * Offers diverse specialization opportunities
+
+ ### Career Benefits
+ * Competitive compensation packages
+ * Strong growth opportunities
+ * Work with cutting-edge technologies
+ * Intellectual stimulation and challenges
+
+ ### Work-Life Considerations
+ * Project-based work with varying intensity
+ * Opportunities for remote and flexible work
+ * Collaborative environment with diverse teams
+ * Continuous professional development
+ """
+
+def display_chat_interface(career_data=None, career_system=None):
+ """Display a chat interface in the Streamlit app"""
+ st.markdown("
π¬ Career Chat Assistant
", unsafe_allow_html=True)
+
+ # Initialize the chat assistant in session state if not already done
+ if "chat_assistant" not in st.session_state:
+ st.session_state.chat_assistant = CareerChatAssistant(career_system)
+ # Initialize RAG with career data if available
+ if career_data:
+ rag_success = st.session_state.chat_assistant.initialize_rag(career_data)
+ st.session_state.rag_initialized = rag_success
+ if rag_success:
+                st.markdown("<p>✅ Enhanced chat capabilities initialized with career data</p>", unsafe_allow_html=True)
+
+ # Initialize messages in session state if not already done
+ if "messages" not in st.session_state:
+ st.session_state.messages = []
+
+ # Add a welcome message
+ career_name = career_data.get("career_name", "your selected career") if career_data else "a career"
+ welcome_message = {
+ "role": "assistant",
+ "content": f"""
+👋 Hello! I'm your Career Chat Assistant. I can answer questions about {career_name} using the detailed analysis we've generated.
+
+Here are some questions you might ask:
+* What are the typical salary ranges for this career?
+* What skills are most important for success?
+* How is the job market looking?
+* What does a typical day look like?
+* What educational paths lead to this career?
+
+What would you like to know?
+"""
+ }
+ st.session_state.messages.append(welcome_message)
+
+ # Display chat messages
+ for message in st.session_state.messages:
+ with st.chat_message(message["role"]):
+ st.markdown(message["content"])
+
+ # Get user input
+ user_input = st.chat_input("Ask me about this career...")
+
+ if user_input:
+ # Add user message to chat history
+ st.session_state.messages.append({"role": "user", "content": user_input})
+
+ # Display user message
+ with st.chat_message("user"):
+ st.markdown(user_input)
+
+ # Generate and display assistant response
+ with st.chat_message("assistant"):
+ # Add a placeholder with typing animation
+ message_placeholder = st.empty()
+ full_response = ""
+
+ # Process the question with the chat assistant
+ with st.spinner("Searching career data for relevant information..."):
+ response = st.session_state.chat_assistant.process_question(user_input, career_data)
+
+ # Simulate typing
+ for chunk in response.split():
+ full_response += chunk + " "
+ time.sleep(0.01) # Adjust typing speed
+                message_placeholder.markdown(full_response + "▌")
+
+ message_placeholder.markdown(full_response)
+
+ # Add assistant response to chat history
+ st.session_state.messages.append({"role": "assistant", "content": full_response})
+
+
+
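+
+# Minimal usage sketch (our illustration, not part of the app flow): the
+# keyword fallback needs no vector store or Groq key, so it can be exercised
+# directly. Note the module reads GOOGLE_API_KEY from st.secrets at import
+# time, so a .streamlit/secrets.toml must be present even for this demo.
+if __name__ == "__main__":
+    demo_assistant = CareerChatAssistant(career_system=None)
+    print(demo_assistant._fallback_processing(
+        "What salary can I expect?", {"career_name": "Data Science"}))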
diff --git a/simple_ai_agents/Career_Guidence/career_guidance_system.py b/simple_ai_agents/Career_Guidence/career_guidance_system.py
new file mode 100644
index 00000000..e1b9ce1d
--- /dev/null
+++ b/simple_ai_agents/Career_Guidence/career_guidance_system.py
@@ -0,0 +1,637 @@
+from langchain.chains import LLMChain
+from langchain.prompts import PromptTemplate
+from langchain.agents import load_tools, initialize_agent, AgentType
+from langchain_community.utilities import SerpAPIWrapper
+from datetime import datetime
+from langchain_groq import ChatGroq
+
+import os
+import time
+
+class CareerGuidanceSystem:
+ def __init__(self, groq_api_key=None, serpapi_key=None):
+ """Initialize the career guidance system"""
+ self.groq_api_key = groq_api_key
+ self.serpapi_key = serpapi_key
+
+        # Set environment variable for the Groq API key
+        if groq_api_key:
+            os.environ["GROQ_API_KEY"] = groq_api_key
+
+        # Set environment variable for the SerpAPI key (read by the serpapi tool)
+        if serpapi_key:
+            os.environ["SERPAPI_API_KEY"] = serpapi_key
+
+ # Initialize the language model
+ if groq_api_key:
+ self.llm = ChatGroq(
+ model='gemma2-9b-it',
+ groq_api_key=groq_api_key,
+ )
+
+ # Initialize search tools if SerpAPI key is provided
+ if serpapi_key:
+ self.search = SerpAPIWrapper(serpapi_api_key=serpapi_key)
+ self.tools = load_tools(["serpapi"], llm=self.llm)
+ self.search_agent = initialize_agent(
+ self.tools,
+ self.llm,
+ agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
+ verbose=False,
+ handle_parsing_errors=True, # Add error handling
+ max_iterations=6 # Limit number of iterations to prevent loops
+ )
+ else:
+ self.search = None
+ self.search_agent = None
+ else:
+ self.llm = None
+ self.search = None
+ self.search_agent = None
+
+ # Career data storage with caching
+ self.career_data = {}
+ self.search_cache = {}
+ self.user_profile = {}
+
+ # Small set of fallback data for common careers if search fails
+ self.fallback_career_options = {
+ "Technology": [
+ "Software Engineering",
+ "Data Science",
+ "Cybersecurity",
+ "AI/ML Engineering",
+ "DevOps",
+ "Cloud Architecture",
+ "Mobile Development",
+ "Web Development",
+ "Game Development",
+ "Blockchain Development",
+ "MLOPS",
+ "DEVOPS"
+ ],
+ "Healthcare": [
+ "Medicine",
+ "Nursing",
+ "Pharmacy",
+ "Biomedical Engineering",
+ "Healthcare Administration",
+ "Physical Therapy",
+ "MBBS",
+ "BHMS",
+ "BAMS",
+ "BDS",
+ ],
+ "Business": [
+ "Finance",
+ "Marketing",
+ "Management",
+ "Human Resources",
+ "Entrepreneurship",
+ "Business Analysis",
+ "CA",
+ "CMA",
+ "CS",
+ "Stock Broker"
+ ],
+ "Creative": [
+ "Graphic Design",
+ "UX/UI Design",
+ "Content Creation",
+ "Digital Marketing",
+ "Animation",
+ "Film Production"
+ ,"Photography",
+ "Fashion Design",
+ "musician"
+ ]
+ }
+
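+    # Cache layout note: `search_cache` maps a cache key to its payload and
+    # creation time, e.g. {"Data Science_market": {"data": "<markdown>",
+    # "timestamp": datetime}}, so repeat lookups within `ttl_hours` skip the
+    # SerpAPI round trip.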
+ def search_with_cache(self, query, cache_key, ttl_hours=24, max_retries=3):
+ """Perform a search with caching to avoid redundant API calls"""
+ # Check if we have cached results that aren't expired
+ if cache_key in self.search_cache:
+ timestamp = self.search_cache[cache_key]['timestamp']
+ age_hours = (datetime.now() - timestamp).total_seconds() / 3600
+ if age_hours < ttl_hours:
+ return self.search_cache[cache_key]['data']
+
+ # If not cached or expired, perform the search
+ if self.search_agent:
+ retry_count = 0
+ last_error = None
+
+ while retry_count < max_retries:
+ try:
+ result = self.search_agent.run(query)
+
+ # Cache the result with timestamp
+ self.search_cache[cache_key] = {
+ 'data': result,
+ 'timestamp': datetime.now()
+ }
+
+ # Add a small delay to prevent rate limiting
+ time.sleep(1)
+
+ return result
+ except Exception as e:
+ last_error = str(e)
+ retry_count += 1
+ time.sleep(2) # Wait before retrying
+
+ # If all retries failed, fall back to direct LLM query without agent
+ try:
+ prompt = PromptTemplate(
+ input_variables=["query"],
+ template="""
+ Please provide information on the following: {query}
+ Structure your response clearly with headings and bullet points.
+ """
+ )
+ chain = LLMChain(llm=self.llm, prompt=prompt)
+ result = chain.run(query=query)
+
+ # Cache this result as well
+ self.search_cache[cache_key] = {
+ 'data': result,
+ 'timestamp': datetime.now()
+ }
+
+ return result
+            except Exception:
+ return f"Search failed after {max_retries} attempts. Last error: {last_error}"
+ else:
+ return "Search unavailable. Please provide a SerpAPI key for web search capabilities."
+
+ def format_search_results(self, results, title):
+ """Format search results into a well-structured markdown document"""
+ formatted = f"# {title}\n\n"
+
+ # Clean up and format the results
+ if isinstance(results, str):
+            # Strip agent scratchpad lines (search narration, Action/Observation traces)
+ lines = results.split('\n')
+ clean_lines = []
+ for line in lines:
+ if "I'll search for" not in line and "I need to search for" not in line:
+ if not line.startswith("Action:") and not line.startswith("Observation:"):
+ clean_lines.append(line)
+
+ formatted += "\n".join(clean_lines)
+ else:
+ formatted += "No results available."
+
+ return formatted
+
+ def get_career_options(self):
+ """Return all available career categories and options"""
+ # Use fallback options if no search available
+ return self.fallback_career_options
+
+ def comprehensive_career_analysis(self, career_name, user_profile=None):
+ """Run a comprehensive analysis of a career using web search"""
+ try:
+ # Check if we already have this analysis cached
+ if career_name in self.career_data:
+ return self.career_data[career_name]
+
+ # If we have search capabilities, use them to get real-time information
+ if self.search_agent and self.serpapi_key:
+ # Perform searches for each aspect of the career
+
+ # 1. Career Overview and Skills - use more structured query
+ overview_query = (
+ f"Create a detailed overview of the {career_name} career with the following structure:\n"
+ f"1. Role Overview: What do {career_name} professionals do?\n"
+ f"2. Key Responsibilities: List the main tasks and responsibilities\n"
+ f"3. Required Technical Skills: List the technical skills needed\n"
+ f"4. Required Soft Skills: List the soft skills needed\n"
+ f"5. Educational Background: What education is typically required?"
+ )
+ overview_result = self.search_with_cache(
+ overview_query,
+ f"{career_name}_overview"
+ )
+ research = self.format_search_results(overview_result, f"{career_name} Career Analysis")
+
+ # 2. Market Analysis - use more structured query
+ market_query = (
+ f"Analyze the job market for {career_name} professionals with the following structure:\n"
+ f"1. Job Growth Projections: How is job growth trending?\n"
+ f"2. Salary Ranges: What are salary ranges by experience level?\n"
+ f"3. Top Industries: Which industries hire the most {career_name} professionals?\n"
+ f"4. Geographic Hotspots: Which locations have the most opportunities?\n"
+ f"5. Emerging Trends: What new trends are affecting this field?"
+ )
+ market_result = self.search_with_cache(
+ market_query,
+ f"{career_name}_market"
+ )
+ market_analysis = self.format_search_results(market_result, f"{career_name} Market Analysis")
+
+ # 3. Learning Roadmap
+ experience_level = "beginner"
+ if user_profile and "experience" in user_profile:
+ exp = user_profile["experience"]
+ if "5-10" in exp or "10+" in exp:
+ experience_level = "advanced"
+ elif "3-5" in exp:
+ experience_level = "intermediate"
+
+ roadmap_query = (
+ f"Create a learning roadmap for becoming a {career_name} professional at the {experience_level} level with this structure:\n"
+ f"1. Skills to Develop: What skills should they focus on?\n"
+ f"2. Education Requirements: What degrees or certifications are needed?\n"
+ f"3. Recommended Courses: What specific courses or training programs work best?\n"
+ f"4. Learning Resources: What books, websites, or tools are helpful?\n"
+ f"5. Timeline: Provide a realistic timeline for skill acquisition"
+ )
+ roadmap_result = self.search_with_cache(
+ roadmap_query,
+ f"{career_name}_roadmap_{experience_level}"
+ )
+ learning_roadmap = self.format_search_results(roadmap_result, f"{career_name} Learning Roadmap")
+
+ # 4. Industry Insights
+ insights_query = (
+ f"Provide industry insights for {career_name} professionals with this structure:\n"
+ f"1. Workplace Culture: What is the typical work environment like?\n"
+ f"2. Day-to-Day Activities: What does a typical workday include?\n"
+ f"3. Career Progression: What career advancement paths exist?\n"
+ f"4. Work-Life Balance: How is the work-life balance in this field?\n"
+ f"5. Success Strategies: What tips help professionals succeed in this field?"
+ )
+ insights_result = self.search_with_cache(
+ insights_query,
+ f"{career_name}_insights"
+ )
+ industry_insights = self.format_search_results(insights_result, f"{career_name} Industry Insights")
+
+ # Create the combined result
+ results = {
+ "career_name": career_name,
+ "research": research,
+ "market_analysis": market_analysis,
+ "learning_roadmap": learning_roadmap,
+ "industry_insights": industry_insights,
+ "timestamp": datetime.now().isoformat()
+ }
+
+ # Cache the results
+ self.career_data[career_name] = results
+
+ return results
+
+ # If no search capabilities, use LLM to generate analysis
+ elif self.llm:
+ # Use LLM chains for each analysis component
+ career_prompt = PromptTemplate(
+ input_variables=["career"],
+ template="""
+ Provide a comprehensive analysis of the {career} career path.
+ Include role overview, key responsibilities, required technical and soft skills,
+ and educational background or alternative paths into the field.
+ Format the response in markdown with clear headings and bullet points.
+ """
+ )
+
+ market_prompt = PromptTemplate(
+ input_variables=["career"],
+ template="""
+ Analyze the current job market for {career} professionals.
+ Include information on job growth projections, salary ranges by experience level,
+ top industries hiring, geographic hotspots, and emerging trends affecting the field.
+ Format the response in markdown with clear headings.
+ """
+ )
+
+ roadmap_prompt = PromptTemplate(
+ input_variables=["career", "experience_level"],
+ template="""
+ Create a detailed learning roadmap for someone pursuing a {career} career path.
+ The person is at a {experience_level} level.
+ Include essential skills to develop, specific education requirements, recommended courses and resources,
+ and a timeline for skill acquisition. Structure the response with clear sections and markdown formatting.
+ """
+ )
+
+ insights_prompt = PromptTemplate(
+ input_variables=["career"],
+ template="""
+ Provide detailed insider insights about working as a {career} professional.
+ Include information on workplace culture, day-to-day activities, career progression paths,
+ work-life balance considerations, and success strategies.
+ Format the response in markdown with clear headings.
+ """
+ )
+
+ # Create chains and run them
+ career_chain = LLMChain(llm=self.llm, prompt=career_prompt)
+ market_chain = LLMChain(llm=self.llm, prompt=market_prompt)
+ roadmap_chain = LLMChain(llm=self.llm, prompt=roadmap_prompt)
+ insights_chain = LLMChain(llm=self.llm, prompt=insights_prompt)
+
+ # Get experience level from user profile
+ experience_level = "beginner"
+ if user_profile and "experience" in user_profile:
+ exp = user_profile["experience"]
+ if "5-10" in exp or "10+" in exp:
+ experience_level = "advanced"
+ elif "3-5" in exp:
+ experience_level = "intermediate"
+
+ # Generate all components
+ research = career_chain.run(career=career_name)
+ market_analysis = market_chain.run(career=career_name)
+ learning_roadmap = roadmap_chain.run(career=career_name, experience_level=experience_level)
+ industry_insights = insights_chain.run(career=career_name)
+
+ # Create the result dictionary
+ results = {
+ "career_name": career_name,
+ "research": research,
+ "market_analysis": market_analysis,
+ "learning_roadmap": learning_roadmap,
+ "industry_insights": industry_insights,
+ "timestamp": datetime.now().isoformat()
+ }
+
+ # Store in cache
+ self.career_data[career_name] = results
+
+ return results
+
+ # If neither search nor LLM are available
+ return {
+ "career_name": career_name,
+ "research": f"Career analysis for {career_name} unavailable. Please provide API keys for enhanced capabilities.",
+ "market_analysis": "Market analysis unavailable. Please provide API keys for enhanced capabilities.",
+ "learning_roadmap": "Learning roadmap unavailable. Please provide API keys for enhanced capabilities.",
+ "industry_insights": "Industry insights unavailable. Please provide API keys for enhanced capabilities."
+ }
+
+ except Exception as e:
+ # Return error information
+ return {
+ "career_name": career_name,
+ "research": f"Error analyzing career: {str(e)}",
+ "market_analysis": "Market analysis not available due to an error",
+ "learning_roadmap": "Learning roadmap not available due to an error",
+ "industry_insights": "Industry insights not available due to an error"
+ }
+
+ def search_career_information(self, career):
+ """Get basic information about a specific career using search"""
+ # Check the cache
+ if career in self.career_data and "research" in self.career_data[career]:
+ return self.career_data[career]["research"]
+
+ # Use search agent if available
+ if self.search_agent:
+ query = f"What are the key responsibilities, required skills, and education for a {career} career?"
+ result = self.search_with_cache(
+ query,
+ f"{career}_info"
+ )
+ formatted = self.format_search_results(result, f"{career} Career Information")
+ return formatted
+
+ # Use LLM if available but no search
+ elif self.llm:
+ prompt = PromptTemplate(
+ input_variables=["career"],
+ template="""
+ Provide information about the {career} career path.
+ Include role description, key responsibilities, required skills,
+ and typical educational requirements.
+ Format as markdown with clear sections.
+ """
+ )
+ chain = LLMChain(llm=self.llm, prompt=prompt)
+ return chain.run(career=career)
+
+ # Fallback to generic response
+ return f"{career} is a career field that requires specialized skills and education. Enable web search for detailed information."
+
+ def analyze_market_trends(self, career):
+ """Analyze market trends for a specific career using search"""
+ # Check the cache
+ if career in self.career_data and "market_analysis" in self.career_data[career]:
+ return self.career_data[career]["market_analysis"]
+
+ # Use search agent if available
+ if self.search_agent:
+ query = f"What are the current job market trends, salary ranges, and growth projections for {career} careers?"
+ result = self.search_with_cache(
+ query,
+ f"{career}_market"
+ )
+ formatted = self.format_search_results(result, f"{career} Market Analysis")
+ return formatted
+
+ # Use LLM if available but no search
+ elif self.llm:
+ prompt = PromptTemplate(
+ input_variables=["career"],
+ template="""
+ Analyze the current job market for {career} professionals.
+ Include information on job growth projections, salary ranges by experience level,
+ top industries hiring, geographic hotspots, and emerging trends affecting the field.
+ Format the response in markdown with clear headings.
+ """
+ )
+ chain = LLMChain(llm=self.llm, prompt=prompt)
+ return chain.run(career=career)
+
+ # Fallback to generic response
+ return f"Market analysis for {career} requires web search capabilities. Please provide a SerpAPI key."
+
+ def create_learning_roadmap(self, career, experience_level="beginner"):
+ """Create a learning roadmap for a specific career"""
+ # Check the cache
+ if career in self.career_data and "learning_roadmap" in self.career_data[career]:
+ return self.career_data[career]["learning_roadmap"]
+
+ # Use search agent if available
+ if self.search_agent:
+ query = f"How to become a {career} professional for someone at {experience_level} level? Include skills to develop, education requirements, courses, resources, and timeline"
+ result = self.search_with_cache(
+ query,
+ f"{career}_roadmap_{experience_level}"
+ )
+ formatted = self.format_search_results(result, f"{career} Learning Roadmap")
+ return formatted
+
+ # Use LLM if available but no search
+ elif self.llm:
+ prompt = PromptTemplate(
+ input_variables=["career", "experience_level"],
+ template="""
+ Create a detailed learning roadmap for someone pursuing a {career} career path.
+ The person is at a {experience_level} level.
+ Include essential skills to develop, specific education requirements, recommended courses and resources,
+ and a timeline for skill acquisition. Structure the response with clear sections and markdown formatting.
+ """
+ )
+ chain = LLMChain(llm=self.llm, prompt=prompt)
+ return chain.run(career=career, experience_level=experience_level)
+
+ # Fallback to generic response
+ return f"A personalized learning roadmap for {career} requires web search capabilities. Please provide a SerpAPI key."
+
+ def get_career_insights(self, career):
+ """Get industry insights for a specific career"""
+ # Check the cache
+ if career in self.career_data and "industry_insights" in self.career_data[career]:
+ return self.career_data[career]["industry_insights"]
+
+ # Use search agent if available
+ if self.search_agent:
+ query = f"What is the workplace culture, day-to-day activities, career progression, and work-life balance like for {career} professionals?"
+ result = self.search_with_cache(
+ query,
+ f"{career}_insights"
+ )
+ formatted = self.format_search_results(result, f"{career} Industry Insights")
+ return formatted
+
+ # Use LLM if available but no search
+ elif self.llm:
+ prompt = PromptTemplate(
+ input_variables=["career"],
+ template="""
+ Provide detailed insider insights about working as a {career} professional.
+ Include information on workplace culture, day-to-day activities, career progression paths,
+ work-life balance considerations, and success strategies.
+ Format the response in markdown with clear headings.
+ """
+ )
+ chain = LLMChain(llm=self.llm, prompt=prompt)
+ return chain.run(career=career)
+
+ # Fallback to generic response
+ return f"Industry insights for {career} require web search capabilities. Please provide a SerpAPI key."
+
+ def chat_with_assistant(self, question, career_data=None):
+ """Engage in conversation with a user about career questions"""
+ if not self.llm:
+            return "Career assistant is not available. Please provide a Groq API key."
+
+ try:
+ # Create context from career data if available
+ context = ""
+ if career_data and isinstance(career_data, dict):
+ career_name = career_data.get("career_name", "the selected career")
+ context = f"The user has selected the {career_name} career path. "
+
+ # Add relevant sections from career data based on question keywords
+ if any(kw in question.lower() for kw in ["skill", "learn", "study", "education", "degree"]):
+ context += f"Here's information about the career: {career_data.get('research', '')} "
+ context += f"Here's learning roadmap information: {career_data.get('learning_roadmap', '')} "
+
+ if any(kw in question.lower() for kw in ["market", "job", "salary", "pay", "demand", "trend"]):
+ context += f"Here's market analysis information: {career_data.get('market_analysis', '')} "
+
+ if any(kw in question.lower() for kw in ["work", "day", "culture", "balance", "advance"]):
+ context += f"Here's industry insights information: {career_data.get('industry_insights', '')} "
+
+ # Create prompt for the career assistant
+ prompt = PromptTemplate(
+ input_variables=["context", "question"],
+ template="""
+ You are a career guidance assistant helping a user with their career questions.
+
+ Context about the user's selected career:
+ {context}
+
+ User question: {question}
+
+ Provide a helpful, informative response that directly addresses the user's question.
+ Be conversational but concise. Include specific advice or information when possible.
+ Format your response in a structured way with bullet points and headings where appropriate.
+ If the question is outside your knowledge, acknowledge that and provide general career guidance.
+ """
+ )
+
+ # Generate response
+ chain = LLMChain(llm=self.llm, prompt=prompt)
+ response = chain.run(context=context, question=question)
+
+ return response
+
+ except Exception as e:
+ return f"I encountered an error while processing your question: {str(e)}"
+
+ def chat_response(self, user_query, career_data=None, user_profile=None):
+ """
+ Generate a response to a user's chat query about a career.
+
+ Parameters:
+ - user_query: The user's question or message
+ - career_data: Dictionary containing career analysis information
+ - user_profile: User profile information
+
+ Returns:
+ - Formatted HTML string with the response
+ """
+ try:
+ # Extract career name if available
+ career_name = career_data.get("career_name", "this career") if career_data else "this career"
+
+ # Prepare system prompt with available career data
+ system_prompt = f"""You are an expert career advisor specializing in {career_name}.
+ Answer the user's questions based on the following career data and your knowledge.
+ Always format your responses with HTML styling - use appropriate headings, lists,
+ paragraphs, and emphasis to make your response visually appealing and structured.
+ Always return formatted HTML content, not Markdown.
+ """
+
+ # Add career data to the prompt if available
+ if career_data:
+ # Add each section of career data we have
+ for key, value in career_data.items():
+ if key != "career_name" and value:
+ section_name = key.replace("_", " ").title()
+ system_prompt += f"\n\n{section_name}:\n{value}"
+
+ # Add user profile if available
+ if user_profile:
+ profile_text = f"The user has {user_profile.get('education', 'some education')} with "
+ profile_text += f"{user_profile.get('experience', 'some')} experience. "
+
+ # Add skills if available
+ if "skills" in user_profile:
+ profile_text += "Their skill levels (out of 10) are: "
+ skills = user_profile["skills"]
+ for skill, level in skills.items():
+ profile_text += f"{skill}: {level}, "
+ profile_text = profile_text.rstrip(", ")
+
+ system_prompt += f"\n\nUser Profile:\n{profile_text}"
+
+ # Get response from LLM using API
+            # self.llm is a LangChain chat model, so pass role/content messages to invoke()
+            response = self.llm.invoke([
+                ("system", system_prompt),
+                ("user", user_query)
+            ])
+
+ answer = response.content
+
+ # Ensure the response is properly formatted as HTML
+ if "<" not in answer:
+ # If no HTML tags are present, add basic formatting
+                answer = f"<p>{answer}</p>"
+        st.markdown('</div>', unsafe_allow_html=True)
+ col1, col2 = st.columns(2)
+ with col1:
+            if st.button("⬅️ Previous Question", disabled=st.session_state.current_q == 0):
+ st.session_state.current_q -= 1
+ st.session_state.recorder_key = f"recorder_{st.session_state.current_q}"
+ st.session_state.start_time = None
+ st.rerun()
+ with col2:
+            if st.button("➡️ Next Question", disabled=st.session_state.current_q >= len(st.session_state.questions) - 1):
+ st.session_state.current_q += 1
+ st.session_state.recorder_key = f"recorder_{st.session_state.current_q}"
+ st.session_state.start_time = None
+ st.rerun()
+        if st.button("✅ Finish Interview"):
+ self.finish_report()
+
+ def finish_report(self):
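+        """Generate the final report: score averages, a performance chart, and a downloadable Markdown summary."""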
+ all_responded = all(r.strip() for r in st.session_state.responses)
+ st.success("Interview completed!")
+ if not all_responded:
+ st.warning("Some questions have no responses. The report may be incomplete.")
+ scores = []
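+        # Extract the "Confidence Score: X/10" and "Accuracy Score: Y/10" pair from each feedback block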
+ for feedback in st.session_state.feedback:
+ if feedback:
+ match = re.search(r"Confidence Score: (\d+)/10.*Accuracy Score: (\d+)/10", feedback, re.DOTALL)
+ if match:
+ scores.append((int(match.group(1)), int(match.group(2))))
+ with st.container():
+            st.markdown('<div class="section">', unsafe_allow_html=True)
+ if scores:
+ avg_confidence = sum(s[0] for s in scores) / len(scores)
+ avg_accuracy = sum(s[1] for s in scores) / len(scores)
+                st.markdown(f"""
+                    <h4>Average Scores</h4>
+                    <p>Confidence: {avg_confidence:.1f}/10</p>
+                    <p>Accuracy: {avg_accuracy:.1f}/10</p>
+                """, unsafe_allow_html=True)
+ labels = [f"Q{i+1}" for i in range(len(scores))]
+ confidence_data = [s[0] for s in scores]
+ accuracy_data = [s[1] for s in scores]
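+                # Grouped bar chart comparing per-question confidence and accuracy scores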
+ fig = go.Figure(data=[
+ go.Bar(name="Confidence", x=labels, y=confidence_data, marker_color="#2563eb", text=confidence_data, textposition="auto"),
+ go.Bar(name="Accuracy", x=labels, y=accuracy_data, marker_color="#dc2626", text=accuracy_data, textposition="auto")
+ ])
+ fig.update_layout(
+ barmode="group",
+ yaxis=dict(range=[0, 10], title="Score", gridcolor="#e5e7eb"),
+ xaxis=dict(title="Questions"),
+ title=dict(text="Interview Performance Scores", x=0.5, xanchor="center"),
+ template="plotly_white",
+ height=400,
+ margin=dict(t=50, b=50, l=50, r=50),
+ showlegend=True,
+ legend=dict(orientation="h", yanchor="bottom", y=1.02, xanchor="center", x=0.5)
+ )
+ st.plotly_chart(fig, use_container_width=True)
+ else:
+ st.warning("No scores available to display the graph. Please ensure all questions have feedback.")
+            st.markdown('</div>', unsafe_allow_html=True)
+ timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
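+        # Build a per-question Markdown report: question, response, AI feedback, and time taken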
+ markdown_report = "# Interview Report\n\n"
+ for i, q in enumerate(st.session_state.questions):
+ markdown_report += f"## Question {i+1}\n"
+ markdown_report += f"**Type:** {q['type']}\n\n"
+ markdown_report += f"**Question:** {q['question']}\n\n"
+ markdown_report += f"**User Response:** {st.session_state.responses[i] if st.session_state.responses[i] else 'Not answered'}\n\n"
+ markdown_report += f"**AI Feedback:**\n{st.session_state.feedback[i] if st.session_state.feedback[i] else 'No feedback'}\n\n"
+ time_taken = st.session_state.answer_times[i]
+ markdown_report += f"**Time Taken:** {timedelta(seconds=int(time_taken)) if time_taken else 'Not recorded'}\n\n"
+ markdown_report += "---\n\n"
+ if scores:
+ markdown_report += f"**Average Confidence Score:** {avg_confidence:.1f}/10\n\n"
+ markdown_report += f"**Average Accuracy Score:** {avg_accuracy:.1f}/10\n\n"
+ st.download_button(
+            label="📥 Download Feedback Report",
+ data=markdown_report.encode("utf-8"),
+            file_name=f"interview_feedback_{timestamp}.md",
+ mime="text/markdown",
+ key="download_button",
+ help="Download the interview report as a Markdown file"
+ )
\ No newline at end of file
diff --git a/simple_ai_agents/Recruitify/patch_sqlite.py b/simple_ai_agents/Recruitify/patch_sqlite.py
new file mode 100644
index 00000000..51e2fa2e
--- /dev/null
+++ b/simple_ai_agents/Recruitify/patch_sqlite.py
@@ -0,0 +1,5 @@
+import sys
+
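+# Replace the stdlib sqlite3 module with pysqlite3-binary, which bundles a newer
+# SQLite build; this module must be imported before anything that imports sqlite3.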
+__import__('pysqlite3')
+sys.modules['sqlite3'] = sys.modules.pop('pysqlite3')
\ No newline at end of file
diff --git a/simple_ai_agents/Recruitify/requirements.txt b/simple_ai_agents/Recruitify/requirements.txt
new file mode 100644
index 00000000..a06622eb
--- /dev/null
+++ b/simple_ai_agents/Recruitify/requirements.txt
@@ -0,0 +1,17 @@
+python-dotenv
+streamlit
+langchain
+langchain-groq
+pandas
+matplotlib
+faiss-cpu
+pypdf2
+langchain-community
+langchain-google-genai
+google-search-results
+groq
+plotly
+streamlit-mic-recorder
+deepgram-sdk
+nest-asyncio
+pysqlite3-binary
\ No newline at end of file
diff --git a/simple_ai_agents/Recruitify/temp/practise.py b/simple_ai_agents/Recruitify/temp/practise.py
new file mode 100644
index 00000000..028c0a96
--- /dev/null
+++ b/simple_ai_agents/Recruitify/temp/practise.py
@@ -0,0 +1,619 @@
+
+# """
+# import os
+
+# from deepgram import (
+# DeepgramClient,
+# PrerecordedOptions,
+# FileSource,
+# )
+# import tempfile
+# import streamlit as st
+# from streamlit_mic_recorder import mic_recorder
+
+# audio = mic_recorder(
+#     start_prompt="⏺️",
+#     stop_prompt="⏹️",
+# key='recorder'
+# )
+
+# if audio:
+# st.audio(audio['bytes']) # playback
+
+# # Step 1: Save as a temporary MP3 file
+# with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmpfile:
+# tmpfile.write(audio['bytes'])
+# tmpfile_path = tmpfile.name
+
+
+# # Path to the audio file
+# AUDIO_FILE = tmpfile_path
+
+# # STEP 1 Create a Deepgram client using the API key
+# deepgram = DeepgramClient(api_key=os.getenv("DEEPGRAM_API_KEY"))
+
+
+# with open(AUDIO_FILE, "rb") as file:
+# buffer_data = file.read()
+
+# payload: FileSource = {
+# "buffer": buffer_data,
+# }
+
+# #STEP 2: Configure Deepgram options for audio analysis
+# options = PrerecordedOptions(
+# model="nova-3",
+# smart_format=True,
+# )
+
+# # STEP 3: Call the transcribe_file method with the text payload and options
+# response = deepgram.listen.rest.v("1").transcribe_file(payload, options)
+
+
+# text = response.to_dict().get("results").get("channels")[0].get("alternatives")[0].get("paragraphs").get('transcript')
+# print(text)
+# st.write(text)
+# """
+
+
+
+
+# import re
+# import streamlit as st
+# from groq import Groq
+# import os
+# import tempfile
+# import pandas as pd
+# from streamlit_mic_recorder import speech_to_text
+# from langchain_groq import ChatGroq
+# from dotenv import load_dotenv
+# import base64
+
+# load_dotenv()
+
+# # ---- Groq Config ----
+# GROQ_API_KEY = os.getenv("GROQ_API_KEY")
+# if not GROQ_API_KEY:
+# st.error("GROQ_API_KEY environment variable not set. Please configure it.")
+# st.stop()
+# client = Groq(api_key=GROQ_API_KEY)
+
+# # ---- State Init ----
+# if "current_q" not in st.session_state:
+# st.session_state.current_q = 0
+# if "questions" not in st.session_state:
+# st.session_state.questions = []
+# if "responses" not in st.session_state:
+# st.session_state.responses = []
+# if "feedback" not in st.session_state:
+# st.session_state.feedback = []
+# if "audio_volume" not in st.session_state:
+# st.session_state.audio_volume = 1.0 # Default volume
+
+# # ---- LLM ----
+# llm = ChatGroq(model='llama-3.3-70b-versatile', temperature=0.7, api_key=GROQ_API_KEY)
+
+# # ---- File Upload ----
+# file_uploaded = st.file_uploader("Upload the interview questions (.md file)", type=["md", "txt"])
+# if file_uploaded and not st.session_state.questions:
+# try:
+# content = file_uploaded.read().decode("utf-8")
+# except UnicodeDecodeError:
+#         st.error("Failed to decode file. Ensure it's a valid .md or .txt file with UTF-8 encoding.")
+# st.stop()
+
+# try:
+# questions = []
+# question_sections = re.split(r"## \d+[.\-]\s*", content)[1:]
+# for section in question_sections:
+# lines = section.strip().split("\n")
+# if len(lines) < 2:
+# st.error("Invalid question format in file.")
+# st.stop()
+# question_type = lines[0].split(" Question")[0].strip()
+# question_text = " ".join(lines[1:]).strip()
+# if question_type and question_text:
+# questions.append({"type": question_type, "question": question_text})
+# except Exception as e:
+# st.error(f"Error parsing file: {str(e)}")
+# st.stop()
+
+# if not questions:
+# st.error("No valid questions found in the file.")
+# st.stop()
+
+# st.session_state.current_q = 0
+# st.session_state.questions = questions
+# st.session_state.responses = ["" for _ in questions]
+# st.session_state.feedback = ["" for _ in questions]
+# st.success(f"Loaded {len(questions)} questions.")
+
+# # ---- Interview Flow ----
+# if st.session_state.questions:
+# # State validation
+# if st.session_state.current_q < 0 or st.session_state.current_q >= len(st.session_state.questions):
+# st.error(f"Invalid question index: {st.session_state.current_q}. Resetting to 0.")
+# st.session_state.current_q = 0
+# st.rerun()
+
+# q = st.session_state.questions[st.session_state.current_q]
+# st.write(f"Progress: Question {st.session_state.current_q+1} of {len(st.session_state.questions)}")
+# st.subheader(f"Question {st.session_state.current_q+1}")
+# st.write(f"**Type:** {q['type']}")
+# st.markdown(q['question'])
+# st.write(f"Debug: Current question index: {st.session_state.current_q}")
+
+# voice = "Deedee-PlayAI"
+# model = "playai-tts"
+# response_format = "mp3"
+
+# # Volume control
+# st.session_state.audio_volume = st.slider("Audio volume", 0.0, 1.0, st.session_state.audio_volume, 0.1)
+
+#     if st.button("🔊 Play Question"):
+# with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmp_file:
+# temp_path = tmp_file.name
+# try:
+# with st.spinner("Generating speech..."):
+# response = client.audio.speech.create(
+# model=model,
+# voice=voice,
+# input=f"This is a {q['type']} question: {q['question']}",
+# response_format=response_format
+# )
+# response.write_to_file(temp_path)
+
+# with open(temp_path, "rb") as audio_file:
+# audio_bytes = audio_file.read()
+# audio_base64 = base64.b64encode(audio_bytes).decode("utf-8")
+
+#             audio_html = f"""
+#             <audio autoplay controls>
+#                 <source src="data:audio/mp3;base64,{audio_base64}" type="audio/mp3">
+#             </audio>
+#             """
+# st.markdown(audio_html, unsafe_allow_html=True)
+# except Exception as e:
+# st.error(f"Failed to generate or play audio: {str(e)}")
+# finally:
+# if os.path.exists(temp_path):
+# os.remove(temp_path)
+
+# # Record Answer
+# text = speech_to_text(language='en', use_container_width=True, just_once=True, key=f'STT_{st.session_state.current_q}')
+# if not text:
+# text = st.text_area("Type your response (if speech fails):", key=f'text_input_{st.session_state.current_q}')
+# if text:
+#         st.write(f"🗣 **Your Response:** {text}")
+# st.session_state.responses[st.session_state.current_q] = text
+
+# # AI Feedback
+# prompt = f"""
+# You are an experienced technical interviewer assessing a candidate's answer.
+
+# **Question Type:** {q['type']}
+# **Question:** {q['question']}
+# **Candidate Response:** {text}
+
+# Your task:
+# 1. Evaluate the overall quality of the response.
+# 2. Highlight strengths and positive aspects.
+# 3. Identify specific weaknesses or missing points.
+# 4. Suggest clear, actionable improvements.
+# 5. Provide a **Confidence Score** (1-10) for your evaluation.
+# 6. Provide an **Accuracy Score** (1-10) for how factually correct the answer is.
+
+# Format your response exactly as follows:
+
+# Evaluation:
+# [Your evaluation text here]
+
+# Strengths:
+# - [Strength 1]
+# - [Strength 2]
+
+# Weaknesses:
+# - [Weakness 1]
+# - [Weakness 2]
+
+# Suggestions for Improvement:
+# - [Suggestion 1]
+# - [Suggestion 2]
+
+# Confidence Score: X/10
+# Accuracy Score: X/10
+# """
+# try:
+# ai_response = llm.invoke(prompt).content
+# st.session_state.feedback[st.session_state.current_q] = ai_response
+#         st.markdown(f"🤖 **AI Feedback:** {ai_response}")
+# except Exception as e:
+# st.error(f"Failed to get AI feedback: {str(e)}")
+
+# # Navigation
+# col1, col2 = st.columns(2)
+# with col1:
+# prev_disabled = st.session_state.current_q == 0
+#         if st.button("⬅️ Previous Question", key="prev_btn", disabled=prev_disabled):
+# st.write(f"Debug: Moving to previous question. Current index: {st.session_state.current_q}")
+# st.session_state.current_q -= 1
+# st.write(f"Debug: New index: {st.session_state.current_q}")
+# st.rerun()
+# with col2:
+# next_disabled = st.session_state.current_q >= len(st.session_state.questions) - 1
+#         if st.button("➡️ Next Question", key="next_btn", disabled=next_disabled):
+# st.write(f"Debug: Moving to next question. Current index: {st.session_state.current_q}")
+# st.session_state.current_q += 1
+# st.write(f"Debug: New index: {st.session_state.current_q}")
+# st.rerun()
+
+# all_responded = all(response.strip() for response in st.session_state.responses)
+#     if st.button("✅ Finish Interview", disabled=not all_responded):
+# st.success("Interview completed!")
+# if not all_responded:
+# st.warning("Some questions have no responses. The report may be incomplete.")
+
+# # Extract scores for chart
+# scores = []
+# for feedback in st.session_state.feedback:
+# if feedback:
+# match = re.search(r"Confidence Score: (\d+)/10.*Accuracy Score: (\d+)/10", feedback, re.DOTALL)
+# if match:
+# scores.append((int(match.group(1)), int(match.group(2))))
+# import plotly.graph_objects as go
+
+# if scores:
+# avg_confidence = sum(s[0] for s in scores) / len(scores)
+# avg_accuracy = sum(s[1] for s in scores) / len(scores)
+
+# st.write(f"Average Confidence Score: {avg_confidence:.1f}/10")
+# st.write(f"Average Accuracy Score: {avg_accuracy:.1f}/10")
+
+# labels = [f"Q{i+1}" for i in range(len(scores))]
+# confidence_data = [s[0] for s in scores]
+# accuracy_data = [s[1] for s in scores]
+
+# fig = go.Figure(data=[
+# go.Bar(name='Confidence', x=labels, y=confidence_data, marker_color='rgba(75, 192, 192, 0.7)'),
+# go.Bar(name='Accuracy', x=labels, y=accuracy_data, marker_color='rgba(255, 99, 132, 0.7)')
+# ])
+
+# fig.update_layout(
+# barmode='group',
+# yaxis=dict(range=[0, 10], title="Score"),
+# xaxis=dict(title="Questions"),
+# title="Interview Performance Scores",
+# legend=dict(title="Metrics")
+# )
+
+# st.plotly_chart(fig, use_container_width=True)
+
+
+
+# temp
+"""
+# import re
+# import streamlit as st
+# from groq import Groq
+# import os
+# import tempfile
+# import base64
+# import plotly.graph_objects as go
+# from streamlit_mic_recorder import mic_recorder
+# from deepgram import DeepgramClient, PrerecordedOptions, FileSource
+# import time
+# from datetime import datetime, timedelta
+# from dotenv import load_dotenv
+# from langchain_groq import ChatGroq
+
+# load_dotenv()
+
+# ---- API Keys ----
+# GROQ_API_KEY = os.getenv("GROQ_API_KEY")
+# DATAGRAM_API_KEY = os.getenv("DATAGRAM_API_KEY")
+
+# class Interview:
+# def __init__(self,groq_api_key,datagram_api_key):
+# self.GROQ_API_KEY=groq_api_key
+# self.DATAGRAM_API_KEY=datagram_api_key
+
+# self.client = Groq(api_key=self.GROQ_API_KEY)
+# self.llm = ChatGroq(model="llama-3.3-70b-versatile", temperature=0.7, api_key=self.GROQ_API_KEY)
+# # ---- Session State ----
+# if "current_q" not in st.session_state:
+# st.session_state.current_q = 0
+# if "questions" not in st.session_state:
+# st.session_state.questions = []
+# if "responses" not in st.session_state:
+# st.session_state.responses = []
+# if "feedback" not in st.session_state:
+# st.session_state.feedback = []
+# if "audio_volume" not in st.session_state:
+# st.session_state.audio_volume = 1.0
+# if "recorder_key" not in st.session_state:
+# st.session_state.recorder_key = f"recorder_{st.session_state.current_q}"
+# if "answer_times" not in st.session_state:
+# st.session_state.answer_times = []
+# if "start_time" not in st.session_state:
+# st.session_state.start_time = None
+
+# def sidebar_upload_and_summary(self):
+# """Handles file upload and shows sidebar summary"""
+# with st.sidebar:
+#             st.markdown("### 📤 Upload Questions")
+
+# file_uploaded = st.file_uploader("Upload the interview questions (.md file)", type=["md", "txt"])
+# if file_uploaded and not st.session_state.questions:
+# try:
+# content = file_uploaded.read().decode("utf-8")
+# except UnicodeDecodeError:
+#                     st.error("Failed to decode file. Ensure it's a valid .md or .txt file with UTF-8 encoding.")
+# st.stop()
+
+# questions = []
+# try:
+# question_sections = re.split(r"## \d+[.\-]\s*", content)[1:]
+# for section in question_sections:
+# lines = section.strip().split("\n")
+# if len(lines) < 2:
+# st.error("Invalid question format in file.")
+# st.stop()
+# question_type = lines[0].split(" Question")[0].strip()
+# question_text = " ".join(lines[1:]).strip()
+# if question_type and question_text:
+# questions.append({"type": question_type, "question": question_text})
+# except Exception as e:
+# st.error(f"Error parsing file: {str(e)}")
+# st.stop()
+
+# if not questions:
+# st.error("No valid questions found in the file.")
+# st.stop()
+
+# st.session_state.current_q = 0
+# st.session_state.questions = questions
+# st.session_state.responses = ["" for _ in questions]
+# st.session_state.feedback = ["" for _ in questions]
+# st.session_state.answer_times = [None for _ in questions]
+# st.session_state.recorder_key = f"recorder_{st.session_state.current_q}"
+# st.success(f"Loaded {len(questions)} questions.")
+
+# # Sidebar UI
+# with st.sidebar:
+#             st.session_state.audio_volume = st.slider("🎚️ Audio volume", 0.0, 1.0, st.session_state.audio_volume, 0.1)
+# st.header("Interview Summary")
+# for i, q in enumerate(st.session_state.questions):
+# with st.expander(f"Question {i+1}"):
+# st.write(f"**Type:** {q['type']}")
+# st.markdown(f"**Question:** {q['question']}")
+# st.write(f"**Response:** {st.session_state.responses[i] if st.session_state.responses[i] else 'Not answered'}")
+# st.markdown(f"**Feedback:** {st.session_state.feedback[i] if st.session_state.feedback[i] else 'No feedback yet'}")
+# if st.session_state.answer_times[i] is not None:
+# st.write(f"**Time Taken:** {timedelta(seconds=int(st.session_state.answer_times[i]))}")
+
+# def run(self):
+# """Main interview workflow"""
+#         st.title("🎙️ AI-Powered Interview Practice")
+
+# self.sidebar_upload_and_summary()
+
+# if not st.session_state.questions:
+# return
+
+# # Validate state
+# if st.session_state.current_q < 0 or st.session_state.current_q >= len(st.session_state.questions):
+# st.error(f"Invalid question index: {st.session_state.current_q}. Resetting to 0.")
+# st.session_state.current_q = 0
+# st.session_state.recorder_key = f"recorder_{st.session_state.current_q}"
+# st.session_state.start_time = None
+# st.rerun()
+
+# q = st.session_state.questions[st.session_state.current_q]
+# st.subheader(f"Question {st.session_state.current_q+1}")
+# st.write(f"**Type:** {q['type']}")
+# st.markdown(q['question'])
+
+# # Progress Bar
+# with st.sidebar:
+# st.markdown('# Progress')
+# progress = (st.session_state.current_q + 1) / len(st.session_state.questions)
+# st.progress(progress)
+
+# # Play Question
+#         if st.button("🔊 Play Question"):
+# voice = "Deedee-PlayAI"
+# model = "playai-tts"
+# response_format = "mp3"
+
+# with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmp_file:
+# temp_path = tmp_file.name
+# try:
+# with st.spinner("Generating speech..."):
+# response = self.client.audio.speech.create(
+# model=model,
+# voice=voice,
+# input=f"This is a {q['type']} question: {q['question']}",
+# response_format=response_format
+# )
+# response.write_to_file(temp_path)
+
+# with open(temp_path, "rb") as audio_file:
+# audio_bytes = audio_file.read()
+# audio_base64 = base64.b64encode(audio_bytes).decode("utf-8")
+
+#                     audio_html = f"""
+#                     <audio autoplay controls>
+#                         <source src="data:audio/mp3;base64,{audio_base64}" type="audio/mp3">
+#                     </audio>
+#                     """
+# st.markdown(audio_html, unsafe_allow_html=True)
+# except Exception as e:
+# st.error(f"Failed to generate or play audio: {str(e)}")
+# finally:
+# if os.path.exists(temp_path):
+# os.remove(temp_path)
+
+# # Record Answer
+# audio = mic_recorder(start_prompt="Start Interview", stop_prompt="Stop Interview", key=st.session_state.recorder_key)
+# if audio:
+# st.audio(audio['bytes'])
+# if st.session_state.start_time is None:
+# st.session_state.start_time = time.time()
+
+# with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmpfile:
+# tmpfile.write(audio['bytes'])
+# tmpfile_path = tmpfile.name
+
+# deepgram = DeepgramClient(api_key=self.DATAGRAM_API_KEY)
+# with open(tmpfile_path, "rb") as file:
+# buffer_data = file.read()
+
+# payload: FileSource = {"buffer": buffer_data}
+# options = PrerecordedOptions(model="nova-3", smart_format=True)
+# response = deepgram.listen.rest.v("1").transcribe_file(payload, options)
+
+# text = response.to_dict()["results"]["channels"][0]["alternatives"][0]["paragraphs"]["transcript"]
+# st.session_state.responses[st.session_state.current_q] = text
+
+# # Track time taken
+# if st.session_state.start_time is not None:
+# time_taken = time.time() - st.session_state.start_time
+# st.session_state.answer_times[st.session_state.current_q] = time_taken
+# st.session_state.start_time = None
+
+# st.markdown("**The Response:**")
+# st.write(text)
+
+# # AI Feedback
+# prompt = f"""
+# You are an experienced technical interviewer assessing a candidate's answer.
+
+# **Question Type:** {q['type']}
+# **Question:** {q['question']}
+# **Candidate Response:** {text}
+
+# Your task:
+# 1. Evaluate the overall quality of the response.
+# 2. Highlight strengths and positive aspects.
+# 3. Identify specific weaknesses or missing points.
+# 4. Suggest clear, actionable improvements.
+# 5. Provide a **Confidence Score** (1-10) for your evaluation.
+# 6. Provide an **Accuracy Score** (1-10) for how factually correct the answer is.
+
+# Format your response exactly as follows:
+
+# Evaluation:
+# [Your evaluation text here]
+
+# Strengths:
+# - [Strength 1]
+# - [Strength 2]
+
+# Weaknesses:
+# - [Weakness 1]
+# - [Weakness 2]
+
+# Suggestions for Improvement:
+# - [Suggestion 1]
+# - [Suggestion 2]
+
+# Confidence Score: X/10
+# Accuracy Score: X/10
+# """
+# try:
+# ai_response = self.llm.invoke(prompt).content
+# st.session_state.feedback[st.session_state.current_q] = ai_response
+# except Exception as e:
+# st.error(f"Failed to get AI feedback: {str(e)}")
+
+# # Feedback
+# if st.session_state.feedback[st.session_state.current_q]:
+#             st.markdown(f"🤖 **AI Feedback:** {st.session_state.feedback[st.session_state.current_q]}")
+
+# # Navigation
+# col1, col2 = st.columns(2)
+# with col1:
+#             if st.button("⬅️ Previous Question", disabled=st.session_state.current_q == 0):
+# st.session_state.current_q -= 1
+# st.session_state.recorder_key = f"recorder_{st.session_state.current_q}"
+# st.session_state.start_time = None
+# st.rerun()
+# with col2:
+#             if st.button("➡️ Next Question", disabled=st.session_state.current_q >= len(st.session_state.questions) - 1):
+# st.session_state.current_q += 1
+# st.session_state.recorder_key = f"recorder_{st.session_state.current_q}"
+# st.session_state.start_time = None
+# st.rerun()
+
+# # Finish Interview
+#         if st.button("✅ Finish Interview"):
+# self.finish_report()
+
+# def finish_report(self):
+# """Generates final interview report with charts and download option"""
+# all_responded = all(r.strip() for r in st.session_state.responses)
+# st.success("Interview completed!")
+# if not all_responded:
+# st.warning("Some questions have no responses. The report may be incomplete.")
+
+# scores = []
+# for feedback in st.session_state.feedback:
+# if feedback:
+# match = re.search(r"Confidence Score: (\d+)/10.*Accuracy Score: (\d+)/10", feedback, re.DOTALL)
+# if match:
+# scores.append((int(match.group(1)), int(match.group(2))))
+
+# if scores:
+# avg_confidence = sum(s[0] for s in scores) / len(scores)
+# avg_accuracy = sum(s[1] for s in scores) / len(scores)
+
+# st.write(f"Average Confidence Score: {avg_confidence:.1f}/10")
+# st.write(f"Average Accuracy Score: {avg_accuracy:.1f}/10")
+
+# labels = [f"Q{i+1}" for i in range(len(scores))]
+# confidence_data = [s[0] for s in scores]
+# accuracy_data = [s[1] for s in scores]
+
+# fig = go.Figure(data=[
+# go.Bar(name="Confidence", x=labels, y=confidence_data, marker_color="rgba(75, 192, 192, 0.7)"),
+# go.Bar(name="Accuracy", x=labels, y=accuracy_data, marker_color="rgba(255, 99, 132, 0.7)")
+# ])
+# fig.update_layout(barmode="group", yaxis=dict(range=[0, 10], title="Score"), xaxis=dict(title="Questions"), title="Interview Performance Scores")
+# st.plotly_chart(fig, use_container_width=True)
+
+# # Markdown Report
+# timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+# markdown_report = "# Interview Report\n\n"
+# for i, q in enumerate(st.session_state.questions):
+# markdown_report += f"## Question {i+1}\n"
+# markdown_report += f"**Type:** {q['type']}\n\n"
+# markdown_report += f"**Question:** {q['question']}\n\n"
+# markdown_report += f"**User Response:** {st.session_state.responses[i] if st.session_state.responses[i] else 'Not answered'}\n\n"
+# markdown_report += f"**AI Feedback:**\n{st.session_state.feedback[i] if st.session_state.feedback[i] else 'No feedback'}\n\n"
+# time_taken = st.session_state.answer_times[i]
+# markdown_report += f"**Time Taken:** {timedelta(seconds=int(time_taken)) if time_taken else 'Not recorded'}\n\n"
+# markdown_report += "---\n\n"
+
+# if scores:
+# markdown_report += f"**Average Confidence Score:** {avg_confidence:.1f}/10\n\n"
+# markdown_report += f"**Average Accuracy Score:** {avg_accuracy:.1f}/10\n\n"
+
+# st.download_button(
+#             label="📥 Download Feedback Report (Markdown)",
+# data=markdown_report.encode("utf-8"),
+# file_name=f"interview_feedback_{timestamp}.md",
+# mime="text/markdown"
+# )
+
+"""
\ No newline at end of file
diff --git a/simple_ai_agents/Recruitify/temp/stt.py b/simple_ai_agents/Recruitify/temp/stt.py
new file mode 100644
index 00000000..088c6c37
--- /dev/null
+++ b/simple_ai_agents/Recruitify/temp/stt.py
@@ -0,0 +1,76 @@
+# import streamlit as st
+# import os
+# from streamlit_mic_recorder import mic_recorder, speech_to_text
+
+# state = st.session_state
+
+# if 'text_received' not in state:
+# state.text_received = []
+
+# c1, c2 = st.columns(2)
+# with c1:
+# st.write("Convert speech to text:")
+# with c2:
+# text = speech_to_text(language='en', use_container_width=True, just_once=True, key='STT')
+
+# if text:
+# state.text_received.append(text)
+
+# for text in state.text_received:
+# st.text(text)
+
+# st.write("Record your voice, and play the recorded audio:")
+# audio = mic_recorder(start_prompt="⏺️", stop_prompt="⏹️", key='recorder')
+
+# if audio:
+# st.audio(audio['bytes'])
+
+
+# text = speech_to_text(language='en', use_container_width=True, just_once=True, key='STT')
+# if text:
+# st.write(text)
+
+
+# import streamlit as st
+# import tempfile
+# import base64
+# from deepgram import DeepgramClient, SpeakOptions
+
+# # Deepgram API key, read from the environment instead of hard-coded for security
+# import os
+# DEEPGRAM_API_KEY = os.getenv("DEEPGRAM_API_KEY")
+
+# st.title("🎤 Deepgram TTS Demo")
+
+# # User text input
+# user_text = st.text_area("Enter text to synthesize:", "Hello world! My name is Ankush")
+
+# if st.button("Generate Speech"):
+# try:
+# # Create client
+# deepgram = DeepgramClient(api_key=DEEPGRAM_API_KEY)
+
+# # Speak options
+# options = SpeakOptions(
+# model="aura-2-aries-en", # Choose model
+# encoding="mp3", # Ensure mp3 encoding
+# )
+
+# # Temp file for audio
+# with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmp_file:
+# temp_path = tmp_file.name
+
+# # Generate speech and save to file
+# response = deepgram.speak.rest.v("1").save(temp_path, {"text": user_text}, options)
+
+# # Play audio in Streamlit
+#         st.success("✅ Speech generated successfully!")
+# st.audio(temp_path, format="audio/mp3")
+
+# # Optionally: Base64 for embedding elsewhere
+# with open(temp_path, "rb") as audio_file:
+# audio_bytes = audio_file.read()
+# audio_base64 = base64.b64encode(audio_bytes).decode("utf-8")
+# # st.text_area("Base64 Audio (for API use):", audio_base64[:200] + "...")
+
+# except Exception as e:
+#         st.error(f"❌ Error: {e}")
diff --git a/simple_ai_agents/Recruitify/ui.py b/simple_ai_agents/Recruitify/ui.py
new file mode 100644
index 00000000..484170cc
--- /dev/null
+++ b/simple_ai_agents/Recruitify/ui.py
@@ -0,0 +1,673 @@
+
+import streamlit as st
+import pandas as pd
+import base64
+import io
+import matplotlib.pyplot as plt
+
+def setup_page():
+ """Apply custom CSS and setup page (without setting page config)"""
+ # Apply custom CSS only
+ apply_custom_css()
+
+ # Add local logo handling via JavaScript
+ st.markdown("""
+
+ """, unsafe_allow_html=True)
+
+def display_header():
+
+    st.header("📋 Recruitment Agent")
+
+
+
+def apply_custom_css(accent_color="#d32f2f"):
+    st.markdown(f"""
+    <style>
+        /* Accent styling for headings and primary buttons */
+        h1, h2, h3 {{ color: {accent_color}; }}
+        .stButton > button {{ border: 1px solid {accent_color}; }}
+    </style>
+    """, unsafe_allow_html=True)
+
+
+
+
+def setup_sidebar():
+ with st.sidebar:
+ st.header("Configuration")
+
+ st.subheader("API Keys")
+ google_api_key = st.text_input("Google API Key", type="password")
+
+ st.markdown("---")
+
+ return {
+ "google_api_key": google_api_key,
+ # "theme_color": theme_color
+ }
+
+
+
+def role_selection_section(role_requirements):
+    st.markdown('<div class="section">', unsafe_allow_html=True)
+
+ col1, col2 = st.columns([2, 1])
+
+ with col1:
+ role = st.selectbox("Select the role you're applying for:", list(role_requirements.keys()))
+
+ with col2:
+        upload_jd = st.checkbox("Upload a custom job description instead")
+
+ custom_jd = None
+ if upload_jd:
+ custom_jd_file = st.file_uploader("Upload job description (PDF or TXT)", type=["pdf", "txt"])
+ if custom_jd_file:
+ st.success("Custom job description uploaded!")
+ custom_jd = custom_jd_file
+
+ if not upload_jd:
+ st.info(f"Required skills: {', '.join(role_requirements[role])}")
+    st.markdown('</div>', unsafe_allow_html=True)
+ else:
+ st.write("No significant areas for improvement.")
+        st.markdown('</div>', unsafe_allow_html=True)
+
+    st.markdown('</div>', unsafe_allow_html=True)
+
+ # Detailed weaknesses section
+ if detailed_weaknesses:
+        st.markdown('<div class="section">', unsafe_allow_html=True)
+        st.subheader("🔍 Detailed Weakness Analysis")
+
+ for weakness in detailed_weaknesses:
+ skill_name = weakness.get('skill', '')
+ score = weakness.get('score', 0)
+
+ with st.expander(f"{skill_name} (Score: {score}/10)"):
+ # Clean detail display
+ detail = weakness.get('detail', 'No specific details provided.')
+ # Clean JSON formatting if it appears in the text
+ if detail.startswith('```json') or '{' in detail:
+ detail = "The resume lacks examples of this skill."
+
+                st.markdown(f'<p><strong>Issue:</strong> {detail}</p>',
+                            unsafe_allow_html=True)
+
+ # Display improvement suggestions if available
+ if 'suggestions' in weakness and weakness['suggestions']:
+                    st.markdown("<strong>How to improve:</strong>", unsafe_allow_html=True)
+ for i, suggestion in enumerate(weakness['suggestions']):
+                        st.markdown(f'<p>{i+1}. {suggestion}</p>',
+                                    unsafe_allow_html=True)
+
+ # Display example if available
+ if 'example' in weakness and weakness['example']:
+                    st.markdown("<strong>Example addition:</strong>", unsafe_allow_html=True)
+                    st.markdown(f'<p>{weakness["example"]}</p>',
+                                unsafe_allow_html=True)
+
+ st.markdown("---")
+ col1, col2, col3 = st.columns([1, 2, 1])
+ with col2:
+ report_content = f"""
+# Resume Analysis Report
+
+## Overall Score: {overall_score}/100
+
+Status: {"✅ Shortlisted" if selected else "❌ Not Selected"}
+
+## Analysis Reasoning
+{analysis_result.get('reasoning', 'No reasoning provided.')}
+
+## Strengths
+{", ".join(strengths if strengths else ["None identified"])}
+
+## Areas for Improvement
+{", ".join(missing_skills if missing_skills else ["None identified"])}
+
+## Detailed Weakness Analysis
+"""
+ # Add detailed weaknesses to report
+ for weakness in detailed_weaknesses:
+ skill_name = weakness.get('skill', '')
+ score = weakness.get('score', 0)
+ detail = weakness.get('detail', 'No specific details provided.')
+
+ # Clean JSON formatting if it appears in the text
+ if detail.startswith('```json') or '{' in detail:
+ detail = "The resume lacks examples of this skill."
+
+ report_content += f"\n### {skill_name} (Score: {score}/10)\n"
+ report_content += f"Issue: {detail}\n"
+
+ # Add suggestions to report
+ if 'suggestions' in weakness and weakness['suggestions']:
+ report_content += "\nImprovement suggestions:\n"
+ for i, sugg in enumerate(weakness['suggestions']):
+ report_content += f"- {sugg}\n"
+
+ # Add example to report
+ if 'example' in weakness and weakness['example']:
+ report_content += f"\nExample: {weakness['example']}\n"
+
+ report_content += "\n---\nAnalysis provided by Recruitment Agent"
+
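+        # Base64-encode the report and expose it through a data: URI so the browser downloads it as a file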
+ report_b64 = base64.b64encode(report_content.encode()).decode()
+        href = f'<a href="data:text/markdown;base64,{report_b64}" download="resume_analysis_report.md">📄 Download Analysis Report</a>'
+ st.markdown(href, unsafe_allow_html=True)
+
+    st.markdown('</div>', unsafe_allow_html=True)
+
+
+
+
+
+def resume_qa_section(has_resume, ask_question_func=None):
+ if not has_resume:
+ st.warning("Please upload and analyze a resume first.")
+ return
+
+    st.markdown('<div class="section">', unsafe_allow_html=True)
+
+ st.subheader("Ask Questions About the Resume")
+ user_question = st.text_input("Enter your question about the resume:", placeholder="What is the candidate's most recent experience?")
+
+ if user_question and ask_question_func:
+ with st.spinner("Searching resume and generating response..."):
+ response = ask_question_func(user_question)
+
+        st.markdown(f'<div class="response-box">{response}</div>', unsafe_allow_html=True)
+
+ # Add example questions
+ with st.expander("Example Questions"):
+ example_questions = [
+ "What is the candidate's most recent role?",
+ "How many years of experience does the candidate have with Python?",
+ "What educational qualifications does the candidate have?",
+ "What are the candidate's key achievements?",
+ "Has the candidate managed teams before?",
+ "What projects has the candidate worked on?",
+ "Does the candidate have experience with cloud technologies?"
+ ]
+
+ for question in example_questions:
+ if st.button(question, key=f"q_{question}"):
+ st.session_state.current_question = question
+                st.rerun()
+
+    st.markdown('</div>', unsafe_allow_html=True)
+
+def interview_questions_section(has_resume, generate_questions_func=None):
+ if not has_resume:
+ st.warning("Please upload and analyze a resume first.")
+ return
+
+ st.markdown('