diff --git a/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md
new file mode 100644
index 00000000..2e6afe1e
--- /dev/null
+++ b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md
@@ -0,0 +1,145 @@
+# 🧠 AI Workflow Visualizer
+
+**AI Workflow Visualizer** automatically **parses, summarizes, and visualizes intelligent agent workflows**.
+It leverages the **Nebius API** to turn complex AI agent logs into structured, human-readable visual representations.
+
+---
+
+## 🚀 Features
+
+* **Workflow Summarization** – Automatically extracts key steps and decisions from raw agent logs.
+* **Graph Visualization** – Builds clear, interactive graph-based representations of workflows.
+* **Nebius API Integration** – Uses Nebius LLMs for accurate summarization and entity extraction.
+* **Dynamic Parsing** – Flexible parsing logic that adapts to diverse agent log formats.
+* **Sample Logs Included** – Comes with a ready-to-test sample log (`sample_logs/agent_log.json`).
+
+---
+
+## 🧩 Project Structure
+
+```bash
+ai_workflow_visualizer/
+│
+├── assets/
+│   ├── IM_1.png
+│   ├── IM_2.png
+│   └── IM_3.png
+├── app.py
+├── graph_builder.py
+├── nebius_client.py
+├── parser.py
+├── requirements.txt
+└── sample_logs/
+ └── agent_log.json
+```
+
+## ⚙️ Installation & Setup
+### 1. Clone the repository
+
+```bash
+git clone https://github.com/Sanjana-m55/awesome-ai-apps.git
+cd awesome-ai-apps/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer
+```
+
+### 2. Create and activate a virtual environment
+
+```bash
+python -m venv venv
+source venv/bin/activate # On macOS/Linux
+venv\Scripts\activate # On Windows
+```
+
+### 3. Install dependencies
+
+```bash
+pip install -r requirements.txt
+```
+
+### 4. Add your Nebius API key
+
+Create a `.env` file and add:
+
+```
+NEBIUS_API_KEY=your_api_key_here
+```
+
+---
+
+## 🧠 Usage
+
+### Run the Visualizer
+
+```bash
+streamlit run app.py
+```
+
+### Example Input
+
+By default, the app generates a fresh random workflow on each refresh; logs follow the same schema as the included `sample_logs/agent_log.json`:
+
+```json
+{
+  "events": [
+    {"agent": "input_handler", "next_agent": "intent_classifier"},
+    {"agent": "intent_classifier", "next_agent": "memory_manager"},
+    {"agent": "memory_manager", "next_agent": "response_generator"},
+    {"agent": "response_generator", "next_agent": "output_handler"}
+  ]
+}
+```
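The parser flattens each `events` entry into graph nodes and directed edges. A minimal standalone sketch of that transformation (the helper name `events_to_graph` is illustrative, not part of the app; it mirrors what `parser.parse_logs` does with the `sample_logs/agent_log.json` schema):

```python
import json

def events_to_graph(log: dict):
    """Flatten an "events" log into (nodes, edges).

    Illustrative helper only - mirrors parser.parse_logs.
    """
    nodes, edges = [], []
    for event in log.get("events", []):
        agent = event.get("agent", "unknown")
        nxt = event.get("next_agent")
        nodes.append(agent)
        if nxt:
            nodes.append(nxt)
            edges.append((agent, nxt))
    # De-duplicate while preserving first-seen order
    return list(dict.fromkeys(nodes)), edges

log = json.loads('{"events": [{"agent": "input_handler", "next_agent": "intent_classifier"}]}')
nodes, edges = events_to_graph(log)
print(nodes)   # ['input_handler', 'intent_classifier']
print(edges)   # [('input_handler', 'intent_classifier')]
```

The node list feeds `networkx.DiGraph.add_nodes_from`, and the edge tuples feed `add_edges_from`, in `graph_builder.build_graph`.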
+
+### Output
+
+✅ Clean summary of workflow steps
+✅ Visual graph showing relationships and sequence of actions
+✅ Interactive insights into decision paths
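The "sequence of actions" the summary reports is just a topological order over the edge list. A hedged sketch of recovering it with the stdlib `graphlib` (Python 3.9+), using the agent names from the sample log; this is independent of the app's own code:

```python
from graphlib import TopologicalSorter

# Edge list in the (agent, next_agent) form produced by the parser
edges = [
    ("input_handler", "intent_classifier"),
    ("intent_classifier", "memory_manager"),
    ("memory_manager", "response_generator"),
    ("response_generator", "output_handler"),
]

# Build a dependency map: node -> set of predecessors
deps = {}
for src, dst in edges:
    deps.setdefault(dst, set()).add(src)

order = list(TopologicalSorter(deps).static_order())
print(" -> ".join(order))
# input_handler -> intent_classifier -> memory_manager -> response_generator -> output_handler
```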
+
+---
+
+## 🧠 Technologies Used
+
+* **Python 3.9+**
+* **Streamlit** (web UI)
+* **Nebius API (LLM)** via the OpenAI-compatible client
+* **NetworkX / Plotly** (graph layout and visualization)
+* **python-dotenv** (environment configuration)
+
+---
+
+## 🤝 Contributing
+
+Contributions are welcome!
+Please fork the repo, create a branch, and submit a pull request:
+
+```bash
+git checkout -b feature/your-feature-name
+git commit -m "feat: describe your feature"
+git push origin feature/your-feature-name
+```
+
+---
+
+## 🧩 Example Commit
+
+```
+feat: implemented intelligent workflow summarization and visualization using Nebius API
+```
+
+---
+
+## 🪪 License
+
+This project is licensed under the **MIT License**.
+See the [LICENSE](../../LICENSE) file for details.
+
+---
+
+## 💡 Acknowledgments
+
+* [Nebius AI](https://nebius.com/) for providing advanced AI summarization APIs
+* [NetworkX](https://networkx.org/) for visualization support
+* All open-source contributors!
+
+---
+
+✨ *Developed with ❤️ by [Sanjana M](https://github.com/Sanjana-m55)*
diff --git a/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/app.py b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/app.py
new file mode 100644
index 00000000..f9e0efb3
--- /dev/null
+++ b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/app.py
@@ -0,0 +1,345 @@
+from dotenv import load_dotenv
+load_dotenv()
+import streamlit as st
+from parser import parse_logs
+from graph_builder import build_graph
+from nebius_client import summarize_workflow
+import time
+from datetime import datetime
+
+st.set_page_config(page_title="AI Workflow Visualizer", layout="wide", initial_sidebar_state="collapsed")
+st.markdown("""
+
+""", unsafe_allow_html=True)
+
+st.markdown("# 🧠 AI WORKFLOW VISUALIZER", unsafe_allow_html=True)
+st.markdown("🔴 LIVE · ⚡ REAL-TIME · 🤖 AI-POWERED", unsafe_allow_html=True)
+
+st.markdown("---")
+
+
+if 'total_workflows' not in st.session_state:
+ st.session_state.total_workflows = 0
+if 'total_tokens' not in st.session_state:
+ st.session_state.total_tokens = 0
+if 'avg_latency' not in st.session_state:
+ st.session_state.avg_latency = 0
+if 'latency_history' not in st.session_state:
+ st.session_state.latency_history = []
+
+col1, col2, col3, col4, col5 = st.columns(5)
+
+with col1:
+ st.markdown("", unsafe_allow_html=True)
+with col2:
+ agents_count = st.empty()
+with col3:
+ edges_count = st.empty()
+with col4:
+ st.markdown("", unsafe_allow_html=True)
+with col5:
+ timestamp = st.empty()
+
+st.markdown("---")
+
+
+st.markdown("", unsafe_allow_html=True)
+perf_col1, perf_col2, perf_col3, perf_col4 = st.columns(4)
+
+with perf_col1:
+ latency_display = st.empty()
+with perf_col2:
+ tokens_display = st.empty()
+with perf_col3:
+ workflows_display = st.empty()
+with perf_col4:
+ throughput_display = st.empty()
+
+st.markdown("---")
+refresh_rate = st.slider("⏱️ Refresh Rate (seconds)", 2, 15, 5)
+st.markdown("---")
+graph_placeholder = st.empty()
+summary_placeholder = st.empty()
+enable_realtime = st.checkbox("🔥 Enable Real-Time Visualization", value=True)
+
+def update_performance_metrics(latency, tokens):
+ """Update performance metrics in session state."""
+ st.session_state.total_workflows += 1
+ st.session_state.total_tokens += tokens
+ st.session_state.latency_history.append(latency)
+
+
+ if len(st.session_state.latency_history) > 10:
+ st.session_state.latency_history.pop(0)
+
+ st.session_state.avg_latency = sum(st.session_state.latency_history) / len(st.session_state.latency_history)
+
+def display_performance_metrics():
+ """Display current performance metrics."""
+    with latency_display:
+        st.markdown(f"⚡ **Avg Latency**: {st.session_state.avg_latency:.2f}s", unsafe_allow_html=True)
+
+    with tokens_display:
+        st.markdown(f"🔤 **Total Tokens**: {st.session_state.total_tokens:,}", unsafe_allow_html=True)
+
+    with workflows_display:
+        st.markdown(f"📈 **Workflows**: {st.session_state.total_workflows}", unsafe_allow_html=True)
+
+    throughput = (1 / st.session_state.avg_latency) if st.session_state.avg_latency > 0 else 0
+    with throughput_display:
+        st.markdown(f"🚀 **Throughput**: {throughput:.2f}/s", unsafe_allow_html=True)
+
+if enable_realtime:
+ st.success(f"🚀 **REAL-TIME MODE ACTIVATED** - Generating new workflows every {refresh_rate} seconds")
+
+ iteration = 0
+ while True:
+ iteration += 1
+ try:
+ parse_start = time.time()
+ nodes, edges, log_text = parse_logs()
+ parse_time = time.time() - parse_start
+
+            with agents_count:
+                st.markdown(f"🤖 Agents: **{len(nodes)}**", unsafe_allow_html=True)
+            with edges_count:
+                st.markdown(f"🔗 Connections: **{len(edges)}**", unsafe_allow_html=True)
+            with timestamp:
+                current_time = datetime.now().strftime("%H:%M:%S")
+                st.markdown(f"🕒 Updated: **{current_time}**", unsafe_allow_html=True)
+
+ with graph_placeholder.container():
+ fig = build_graph(nodes, edges)
+ st.plotly_chart(fig, use_container_width=True, key=f"graph_{iteration}")
+
+ with summary_placeholder.container():
+ st.markdown("", unsafe_allow_html=True)
+ with st.spinner("🤖 Nebius AI is analyzing..."):
+ ai_start = time.time()
+ result = summarize_workflow(log_text)
+
+
+ if isinstance(result, tuple):
+ summary, token_count = result
+ else:
+ summary = result
+ token_count = 0
+
+ ai_latency = time.time() - ai_start
+ total_latency = parse_time + ai_latency
+ update_performance_metrics(total_latency, token_count)
+ display_performance_metrics()
+
+                st.markdown(summary, unsafe_allow_html=True)
+
+ st.info(f"🔄 Workflow #{iteration} - Next generation in {refresh_rate} seconds... (Parse: {parse_time:.3f}s | AI: {ai_latency:.3f}s)")
+
+ except Exception as e:
+ st.error(f"❌ Error: {e}")
+ import traceback
+ st.code(traceback.format_exc())
+
+ time.sleep(refresh_rate)
+else:
+ if st.button("🎬 Generate Workflow", type="primary"):
+ try:
+ parse_start = time.time()
+ nodes, edges, log_text = parse_logs()
+ parse_time = time.time() - parse_start
+
+            with agents_count:
+                st.markdown(f"🤖 Agents: **{len(nodes)}**", unsafe_allow_html=True)
+            with edges_count:
+                st.markdown(f"🔗 Connections: **{len(edges)}**", unsafe_allow_html=True)
+            with timestamp:
+                current_time = datetime.now().strftime("%H:%M:%S")
+                st.markdown(f"🕒 Updated: **{current_time}**", unsafe_allow_html=True)
+
+ fig = build_graph(nodes, edges)
+ st.plotly_chart(fig, use_container_width=True)
+
+ st.markdown("", unsafe_allow_html=True)
+ with st.spinner("🤖 Nebius AI is analyzing..."):
+ ai_start = time.time()
+ result = summarize_workflow(log_text)
+
+ if isinstance(result, tuple):
+ summary, token_count = result
+ else:
+ summary = result
+ token_count = 0
+
+ ai_latency = time.time() - ai_start
+
+
+ total_latency = parse_time + ai_latency
+ update_performance_metrics(total_latency, token_count)
+ display_performance_metrics()
+
+            st.markdown(summary, unsafe_allow_html=True)
+
+ st.success(f"✅ Workflow generated successfully! (Parse: {parse_time:.3f}s | AI: {ai_latency:.3f}s)")
+ except Exception as e:
+ st.error(f"❌ Error: {e}")
+ import traceback
+ st.code(traceback.format_exc())
\ No newline at end of file
diff --git a/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/assets/IM_1.png b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/assets/IM_1.png
new file mode 100644
index 00000000..14c3e5d1
Binary files /dev/null and b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/assets/IM_1.png differ
diff --git a/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/assets/IM_2.png b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/assets/IM_2.png
new file mode 100644
index 00000000..5865a876
Binary files /dev/null and b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/assets/IM_2.png differ
diff --git a/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/assets/IM_3.png b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/assets/IM_3.png
new file mode 100644
index 00000000..d6c5bf04
Binary files /dev/null and b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/assets/IM_3.png differ
diff --git a/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/graph_builder.py b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/graph_builder.py
new file mode 100644
index 00000000..b2b1b02a
--- /dev/null
+++ b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/graph_builder.py
@@ -0,0 +1,66 @@
+import networkx as nx
+import plotly.graph_objects as go
+import random
+
+def build_graph(nodes, edges):
+ """Generate a glowing, animated Plotly network graph."""
+ G = nx.DiGraph()
+ G.add_nodes_from(nodes)
+ G.add_edges_from(edges)
+ pos = nx.spring_layout(G, seed=42, k=1.8)
+
+ # Vibrant dynamic colors
+ neon_colors = ['#FF3CAC', '#784BA0', '#2B86C5', '#00F5A0', '#FF9A8B', '#8EC5FC', '#F9F586']
+
+ edge_traces = []
+ for i, edge in enumerate(G.edges()):
+ x0, y0 = pos[edge[0]]
+ x1, y1 = pos[edge[1]]
+ mid_x = (x0 + x1) / 2 + random.uniform(-0.05, 0.05)
+ mid_y = (y0 + y1) / 2 + random.uniform(-0.05, 0.05)
+ edge_traces.append(
+ go.Scatter(
+ x=[x0, mid_x, x1],
+ y=[y0, mid_y, y1],
+ mode='lines',
+ line=dict(width=4, color=random.choice(neon_colors), shape='spline'),
+ hoverinfo='none',
+ opacity=0.8
+ )
+ )
+ node_x = [pos[n][0] for n in G.nodes()]
+ node_y = [pos[n][1] for n in G.nodes()]
+
+ node_trace = go.Scatter(
+ x=node_x,
+ y=node_y,
+ mode='markers+text',
+ text=list(G.nodes()),
+ textposition='bottom center',
+ textfont=dict(size=16, color='#FFFFFF', family='Poppins, sans-serif'),
+ marker=dict(
+ size=45,
+ color=[random.choice(neon_colors) for _ in G.nodes()],
+ line=dict(width=3, color='white'),
+ symbol='circle'
+ ),
+ hoverinfo='text'
+ )
+
+ fig = go.Figure(data=edge_traces + [node_trace])
+ fig.update_layout(
+ title=dict(
+ text="💡 Live AI Agent Workflow",
+ font=dict(size=30, color='#FFFFFF', family='Poppins, sans-serif'),
+ x=0.5
+ ),
+ showlegend=False,
+ hovermode='closest',
+ paper_bgcolor='#0a0e27',
+ plot_bgcolor='#0a0e27',
+ xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
+ yaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
+ height=650,
+ margin=dict(t=80, b=40, l=20, r=20)
+ )
+ return fig
diff --git a/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py
new file mode 100644
index 00000000..8ad862d1
--- /dev/null
+++ b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py
@@ -0,0 +1,85 @@
+import os
+from openai import OpenAI
+def get_nebius_client():
+ """
+ Initialize Nebius AI client using OpenAI-compatible API.
+ Make sure NEBIUS_API_KEY is set in your environment.
+ """
+ api_key = os.getenv("NEBIUS_API_KEY")
+ if not api_key:
+ raise ValueError("⚠️ Please set NEBIUS_API_KEY in your environment.")
+
+ return OpenAI(
+ base_url="https://api.studio.nebius.com/v1/",
+ api_key=api_key
+ )
+
+def summarize_workflow(log_text: str):
+ """
+ Uses Nebius LLM to provide deep, insightful analysis of agent workflow.
+ Returns tuple of (summary_text, token_count)
+ """
+ client = get_nebius_client()
+ prompt = f"""You are an expert AI systems analyst specializing in multi-agent workflows and distributed AI architectures.
+
+Analyze the following agent workflow log data and provide a DEEP, INSIGHTFUL analysis that goes beyond surface-level observations.
+
+Log Data:
+{log_text}
+
+Your analysis should be structured as follows:
+
+**🧠 AI Insight Summary**
+
+Start with a one-sentence executive summary that captures the essence of this workflow's design philosophy.
+
+**🔹 Key Agents & Their Roles:**
+List the agents involved with `code formatting`, then explain WHAT each agent does and WHY it's positioned where it is in the pipeline.
+
+**🔹 Workflow Architecture Analysis:**
+- Explain the DESIGN PATTERN (sequential, parallel, hub-and-spoke, waterfall, etc.)
+- Discuss WHY this architecture was chosen - what problems does it solve?
+- Identify any SMART design decisions (e.g., separation of concerns, modularity)
+- Point out any POTENTIAL ISSUES with the current design
+
+**🔹 Data Flow & Dependencies:**
+- Trace how information flows between agents
+- Explain WHY certain agents depend on others
+- Identify any BOTTLENECKS or critical path dependencies
+- Discuss whether the flow is optimal or could be improved
+
+**🔹 Performance Characteristics:**
+- Analyze the workflow's efficiency characteristics
+- Discuss latency implications of the sequential/parallel design
+- Suggest specific optimizations with estimated impact (e.g., "parallelizing X and Y could reduce latency by ~40%")
+- Comment on scalability and throughput potential
+
+**🔹 Memory & State Management:**
+- Discuss how state/context flows through the system
+- Identify if there's a RAG pattern, memory loops, or stateless processing
+- Explain the implications for consistency and reproducibility
+
+**🟢 Conclusion:**
+Provide an overall assessment with specific metrics or observations. Be honest about trade-offs and suggest concrete improvements.
+
+IMPORTANT RULES:
+- Be SPECIFIC, not generic
+- Use technical terminology appropriately
+- Provide REASONING for your observations
+- Suggest CONCRETE improvements with estimated impacts
+- Make it engaging and insightful, not a boring bullet list
+- Use markdown formatting with code blocks for agent names
+- Keep the total response under 600 words but pack it with insights"""
+
+ response = client.chat.completions.create(
+ model="meta-llama/Meta-Llama-3.1-8B-Instruct",
+ messages=[{"role": "user", "content": prompt}],
+ temperature=0.8,
+ max_tokens=1200,
+ )
+
+
+ usage = response.usage
+ total_tokens = usage.total_tokens if usage else 0
+ summary_text = response.choices[0].message.content
+ return summary_text, total_tokens
\ No newline at end of file
diff --git a/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/parser.py b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/parser.py
new file mode 100644
index 00000000..ed8f4b9b
--- /dev/null
+++ b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/parser.py
@@ -0,0 +1,37 @@
+import json
+import random
+from datetime import datetime
+
+def generate_random_workflow():
+ """Generate random agent workflow events."""
+ agent_pool = [
+ "input_handler", "intent_classifier", "memory_manager",
+ "response_generator", "output_handler", "data_processor",
+ "sentiment_analyzer", "context_builder", "task_planner",
+ "execution_engine", "quality_checker", "feedback_loop"
+ ]
+
+ num_agents = random.randint(4, 8)
+ selected_agents = random.sample(agent_pool, num_agents)
+
+ events = []
+ for i in range(len(selected_agents) - 1):
+ events.append({
+ "agent": selected_agents[i],
+ "next_agent": selected_agents[i + 1],
+ "timestamp": datetime.now().strftime("%H:%M:%S")
+ })
+
+ return {"events": events}
+
+def parse_logs(log_path=None):
+    """Parse logs and extract workflow nodes and edges.
+
+    Loads JSON from log_path when provided (e.g. sample_logs/agent_log.json);
+    otherwise generates a random workflow for demo purposes.
+    """
+    if log_path:
+        with open(log_path) as f:
+            logs = json.load(f)
+    else:
+        logs = generate_random_workflow()
+    nodes, edges = [], []
+    for event in logs.get("events", []):
+        agent = event.get("agent", "unknown")
+        next_agent = event.get("next_agent")
+        nodes.append(agent)
+        if next_agent:
+            nodes.append(next_agent)
+            edges.append((agent, next_agent))
+    # Preserve first-seen order while de-duplicating node names
+    return list(dict.fromkeys(nodes)), edges, json.dumps(logs, indent=2)
\ No newline at end of file
diff --git a/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/requirements.txt b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/requirements.txt
new file mode 100644
index 00000000..f56547d9
--- /dev/null
+++ b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/requirements.txt
@@ -0,0 +1,8 @@
+streamlit==1.38.0
+openai==1.52.0
+python-dotenv==1.0.1
+networkx==3.3
+plotly==5.24.1
+pandas==2.2.3
+numpy==1.26.4
+requests==2.32.3
diff --git a/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/sample_logs/agent_log.json b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/sample_logs/agent_log.json
new file mode 100644
index 00000000..daceb796
--- /dev/null
+++ b/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/sample_logs/agent_log.json
@@ -0,0 +1,8 @@
+{
+ "events": [
+ {"agent": "input_handler", "next_agent": "intent_classifier"},
+ {"agent": "intent_classifier", "next_agent": "memory_manager"},
+ {"agent": "memory_manager", "next_agent": "response_generator"},
+ {"agent": "response_generator", "next_agent": "output_handler"}
+ ]
+}
diff --git a/advance_ai_agents/advance_ai_agents/assets/IM_1.png b/advance_ai_agents/advance_ai_agents/assets/IM_1.png
new file mode 100644
index 00000000..14c3e5d1
Binary files /dev/null and b/advance_ai_agents/advance_ai_agents/assets/IM_1.png differ
diff --git a/advance_ai_agents/advance_ai_agents/assets/IM_2.png b/advance_ai_agents/advance_ai_agents/assets/IM_2.png
new file mode 100644
index 00000000..5865a876
Binary files /dev/null and b/advance_ai_agents/advance_ai_agents/assets/IM_2.png differ
diff --git a/advance_ai_agents/advance_ai_agents/assets/IM_3.png b/advance_ai_agents/advance_ai_agents/assets/IM_3.png
new file mode 100644
index 00000000..d6c5bf04
Binary files /dev/null and b/advance_ai_agents/advance_ai_agents/assets/IM_3.png differ
diff --git a/assets/DSPy.png b/assets/DSPy.png
deleted file mode 100644
index 629f4f35..00000000
Binary files a/assets/DSPy.png and /dev/null differ
diff --git a/assets/banner_new.png b/assets/banner_new.png
deleted file mode 100644
index d2816885..00000000
Binary files a/assets/banner_new.png and /dev/null differ
diff --git a/assets/gibson.svg b/assets/gibson.svg
deleted file mode 100644
index 763ede42..00000000
--- a/assets/gibson.svg
+++ /dev/null
@@ -1 +0,0 @@
-
\ No newline at end of file
diff --git a/assets/nebius.png b/assets/nebius.png
deleted file mode 100644
index 85681a1f..00000000
Binary files a/assets/nebius.png and /dev/null differ