[FEATURE] Implemented Python-Based AI Workflow Visualizer (#87) #89
Conversation
Walkthrough

Adds a Streamlit-based AI Workflow Visualizer with real-time UI, log parsing to derive nodes/edges, Plotly network graph rendering, Nebius (OpenAI-compatible) summarization integration, sample logs, dependencies, and README documentation. Includes error handling and performance metrics tracking.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
actor User
participant Streamlit_App as Streamlit App
participant Parser
participant Graph_Builder as Graph Builder
participant Nebius_API as Nebius API
participant UI_State as UI State
User->>Streamlit_App: Enable real-time mode
loop every refresh_rate seconds
Streamlit_App->>Parser: parse_logs()
Parser-->>Streamlit_App: nodes, edges, log_text
Streamlit_App->>UI_State: update counts/timestamps
Streamlit_App->>Graph_Builder: build_graph(nodes, edges)
Graph_Builder-->>Streamlit_App: Plotly Figure
Streamlit_App->>Streamlit_App: render graph
Streamlit_App->>Nebius_API: summarize_workflow(log_text)
Nebius_API-->>Streamlit_App: (summary, token_count)
Streamlit_App->>UI_State: update metrics (latency, tokens, throughput)
Streamlit_App->>Streamlit_App: display summary & logs
end
User->>Streamlit_App: Disable real-time / Click Generate
Streamlit_App->>Parser: parse_logs()
Streamlit_App->>Graph_Builder: build_graph(nodes, edges)
Streamlit_App->>Nebius_API: summarize_workflow(log_text)
Streamlit_App->>Streamlit_App: display single-run results
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 9
🧹 Nitpick comments (10)
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/sample_logs/agent_log.json (1)
1-8: Enrich the sample schema for better demos and metrics. Add event ids and timestamps (and optionally durations/status) to enable latency/throughput examples and clearer Nebius summaries. Example:
{"id":"e1","timestamp":"2025-10-24T15:30:12Z","agent":"input_handler","next_agent":"intent_classifier","duration_ms":18}advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/graph_builder.py (1)
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/graph_builder.py (1)

12-47: Remove color flicker; use deterministic colors per node. Random colors on every refresh reduce readability. Hash node names to stable colors and color edges by source.
Apply:

```diff
@@
-import random
+import random
+import hashlib
@@ def build_graph(nodes, edges):
-    neon_colors = ['#FF3CAC', '#784BA0', '#2B86C5', '#00F5A0', '#FF9A8B', '#8EC5FC', '#F9F586']
+    neon_colors = ['#FF3CAC', '#784BA0', '#2B86C5', '#00F5A0', '#FF9A8B', '#8EC5FC', '#F9F586']
+    def pick_color(key: str) -> str:
+        h = int(hashlib.sha1(key.encode("utf-8")).hexdigest(), 16)
+        return neon_colors[h % len(neon_colors)]
@@
     for i, edge in enumerate(G.edges()):
         x0, y0 = pos[edge[0]]
         x1, y1 = pos[edge[1]]
@@
             go.Scatter(
                 x=[x0, mid_x, x1],
                 y=[y0, mid_y, y1],
                 mode='lines',
-                line=dict(width=4, color=random.choice(neon_colors), shape='spline'),
+                line=dict(width=4, color=pick_color(str(edge[0])), shape='spline'),
                 hoverinfo='none',
                 opacity=0.8
             )
         )
@@
         marker=dict(
             size=45,
-            color=[random.choice(neon_colors) for _ in G.nodes()],
+            color=[pick_color(str(n)) for n in G.nodes()],
             line=dict(width=3, color='white'),
             symbol='circle'
         ),
```
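As a quick sanity check of the approach, a name-keyed hash returns the same color on every rerun (minimal sketch, standard library only):

```python
import hashlib

neon_colors = ['#FF3CAC', '#784BA0', '#2B86C5']

def pick_color(key: str) -> str:
    h = int(hashlib.sha1(key.encode("utf-8")).hexdigest(), 16)
    return neon_colors[h % len(neon_colors)]

print(pick_color("input_handler") == pick_color("input_handler"))  # True on every run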
advance_ai_agents/advance_ai_agents/README.md (2)

20-30: Add languages to fenced code blocks (markdownlint MD040). Specify languages for the blocks to improve formatting:

````diff
-```
+```text
 # tree …
````

````diff
-```
+```env
 NEBIUS_API_KEY=your_api_key_here
````

````diff
-```
+```text
 feat: implemented intelligent workflow summarization and visualization using Nebius API
````

Also applies to: 61-63, 123-125

---

100-105: Align “Technologies Used” with actual code. Graphviz isn’t used; Plotly is. Update the list:

```diff
-* **Graphviz / NetworkX** (for visualization)
+* **Plotly + NetworkX** (for visualization)
```
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py (1)

3-15: Cache and configure the Nebius client via env. Avoid recreating clients and hardcoding URLs. Load .env here for reuse outside Streamlit.

```diff
-from openai import OpenAI
-def get_nebius_client():
+from openai import OpenAI
+from functools import lru_cache
+from dotenv import load_dotenv
+
+@lru_cache(maxsize=1)
+def get_nebius_client():
     """
     Initialize Nebius AI client using OpenAI-compatible API.
     Make sure NEBIUS_API_KEY is set in your environment.
     """
-    api_key = os.getenv("NEBIUS_API_KEY")
+    load_dotenv()
+    api_key = os.getenv("NEBIUS_API_KEY")
     if not api_key:
         raise ValueError("⚠️ Please set NEBIUS_API_KEY in your environment.")
-
-    return OpenAI(
-        base_url="https://api.studio.nebius.com/v1/",
-        api_key=api_key
-    )
+    base_url = os.getenv("NEBIUS_BASE_URL", "https://api.studio.nebius.com/v1/")
+    return OpenAI(base_url=base_url, api_key=api_key)
```
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/app.py (5)

4-6: Avoid stdlib name collision: parser module. `from parser import parse_logs` can shadow Python’s stdlib parser module in some environments. Prefer a less ambiguous name or a package import.

```diff
-from parser import parse_logs
+from log_parser import parse_logs  # rename file to log_parser.py
```

Alternatively, make ai_workflow_visualizer a package and use absolute imports, as sketched below.
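A minimal sketch of the package-import alternative (assumes an empty `__init__.py` is added and the app is launched from the parent directory):

```python
# ai_workflow_visualizer/__init__.py  -> empty file marking the directory as a package

# in app.py: import through the package instead of the bare top-level name
from ai_workflow_visualizer.parser import parse_logs
```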
191-193: Surface the actual model name from config. UI shows “Llama 3.1” while code uses a specific model id. Read NEBIUS_MODEL env and display it to avoid drift.

```diff
-    st.markdown("<div class='metric-box'><h3>🤖</h3><p>AI Model</p><h2>Llama 3.1</h2></div>", unsafe_allow_html=True)
+    import os
+    model_label = os.getenv("NEBIUS_MODEL", "meta-llama/Meta-Llama-3.1-8B-Instruct")
+    st.markdown(f"<div class='metric-box'><h3>🤖</h3><p>AI Model</p><h2>{model_label}</h2></div>", unsafe_allow_html=True)
```
215-216: Default real-time mode off for better UX. Start with manual mode to avoid immediate background API calls and to keep the app responsive on first load.

```diff
-enable_realtime = st.checkbox("🔥 Enable Real-Time Visualization", value=True)
+enable_realtime = st.checkbox("🔥 Enable Real-Time Visualization", value=False)
```
269-283: Guard metrics update when Nebius fails. If summarize_workflow raises, ensure metrics don’t reference unset variables. Consider initializing ai_latency=0 and summary="" before the try block and updating only on success; see the sketch below.
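A minimal sketch of that guard, reusing the names from app.py (a drop-in for the real-time loop body, not a standalone script):

```python
# Initialize before the try block so metrics never reference unset names
summary, token_count, ai_latency = "", 0, 0.0
try:
    ai_start = time.time()
    summary, token_count = summarize_workflow(log_text)
    ai_latency = time.time() - ai_start
except Exception as e:
    st.error(f"AI summary unavailable: {e}")
update_performance_metrics(parse_time + ai_latency, token_count)
```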
296-298: Avoid sleeping on the main thread in error paths. On exceptions, consider skipping the sleep and letting the rerun happen sooner to recover faster.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- advance_ai_agents/advance_ai_agents/README.md (1 hunks)
- advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/app.py (1 hunks)
- advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/graph_builder.py (1 hunks)
- advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py (1 hunks)
- advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/parser.py (1 hunks)
- advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/requirements.txt (1 hunks)
- advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/sample_logs/agent_log.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/app.py (3)
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/parser.py (1)
- parse_logs (27-37)

advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/graph_builder.py (1)
- build_graph (5-66)

advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py (1)
- summarize_workflow (17-85)
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py (1)
advance_ai_agents/finance_service_agent/routes/agentRoutes.py (1)
- chat (92-133)
🪛 markdownlint-cli2 (0.18.1)
advance_ai_agents/advance_ai_agents/README.md
20-20: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
61-61: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
123-123: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
```python
if enable_realtime:
    st.success(f"🚀 **REAL-TIME MODE ACTIVATED** - Generating new workflows every {refresh_rate} seconds")

    iteration = 0
    while True:
        iteration += 1
        try:
            parse_start = time.time()
            nodes, edges, log_text = parse_logs()
            parse_time = time.time() - parse_start

            with agents_count:
                st.markdown(f"<div class='metric-box'><h3>⚡</h3><p>Agents</p><h2>{len(nodes)}</h2></div>", unsafe_allow_html=True)
            with edges_count:
                st.markdown(f"<div class='metric-box'><h3>🔗</h3><p>Connections</p><h2>{len(edges)}</h2></div>", unsafe_allow_html=True)
            with timestamp:
                current_time = datetime.now().strftime("%H:%M:%S")
                st.markdown(f"<div class='metric-box'><h3>⏰</h3><p>Updated</p><h2>{current_time}</h2></div>", unsafe_allow_html=True)

            with graph_placeholder.container():
                fig = build_graph(nodes, edges)
                st.plotly_chart(fig, use_container_width=True, key=f"graph_{iteration}")

            with summary_placeholder.container():
                st.markdown("<div class='section-header'>🧠 AI ANALYSIS (LIVE)</div>", unsafe_allow_html=True)
                with st.spinner("🤖 Nebius AI is analyzing..."):
                    ai_start = time.time()
                    result = summarize_workflow(log_text)

                if isinstance(result, tuple):
                    summary, token_count = result
                else:
                    summary = result
                    token_count = 0

                ai_latency = time.time() - ai_start
                total_latency = parse_time + ai_latency
                update_performance_metrics(total_latency, token_count)
                display_performance_metrics()

                st.markdown(f"""
                <div class='analysis-box'>
                    <p>{summary}</p>
                </div>
                """, unsafe_allow_html=True)

            st.info(f"🔄 Workflow #{iteration} - Next generation in {refresh_rate} seconds... (Parse: {parse_time:.3f}s | AI: {ai_latency:.3f}s)")

        except Exception as e:
            st.error(f"❌ Error: {e}")
            import traceback
            st.code(traceback.format_exc())

        time.sleep(refresh_rate)
else:
```
Don’t block Streamlit with an infinite loop; avoid leaking widget keys.
The while True + time.sleep blocks reruns and makes the checkbox ineffective. Also, using a new plotly_chart key each iteration grows memory. Run one cycle then st.rerun(), and reuse a stable placeholder/key.
```diff
-if enable_realtime:
-    st.success(f"🚀 **REAL-TIME MODE ACTIVATED** - Generating new workflows every {refresh_rate} seconds")
-
-    iteration = 0
-    while True:
-        iteration += 1
-        try:
+if enable_realtime:
+    st.success(f"🚀 **REAL-TIME MODE ACTIVATED** - Refreshing every {refresh_rate} seconds")
+    try:
         parse_start = time.time()
         nodes, edges, log_text = parse_logs()
         parse_time = time.time() - parse_start
@@
-        with graph_placeholder.container():
-            fig = build_graph(nodes, edges)
-            st.plotly_chart(fig, use_container_width=True, key=f"graph_{iteration}")
+        with graph_placeholder.container():
+            fig = build_graph(nodes, edges)
+            st.plotly_chart(fig, use_container_width=True, key="graph")
@@
-        st.info(f"🔄 Workflow #{iteration} - Next generation in {refresh_rate} seconds... (Parse: {parse_time:.3f}s | AI: {ai_latency:.3f}s)")
-
-    except Exception as e:
+        st.info(f"🔄 Next update in {refresh_rate} seconds... (Parse: {parse_time:.3f}s | AI: {ai_latency:.3f}s)")
+    except Exception as e:
         st.error(f"❌ Error: {e}")
         import traceback
         st.code(traceback.format_exc())
-
-    time.sleep(refresh_rate)
+    time.sleep(refresh_rate)
+    st.rerun()
```
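For reference, a self-contained sketch of the one-cycle-then-rerun pattern (st.rerun exists in Streamlit 1.27+; the widgets here are illustrative, not the app's own):

```python
import time
import streamlit as st

enable_realtime = st.checkbox("Enable real-time", value=False)
placeholder = st.empty()

if enable_realtime:
    with placeholder.container():
        st.write(f"Refreshed at {time.strftime('%H:%M:%S')}")
    time.sleep(5)  # refresh interval in seconds
    st.rerun()     # re-executes the script; the checkbox stays responsive between cycles
```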
| st.markdown(f""" | ||
| <div class='analysis-box'> | ||
| <p>{summary}</p> | ||
| </div> | ||
| """, unsafe_allow_html=True) |
Avoid injecting LLM output with unsafe_allow_html (XSS risk).
Nebius text is external input. Rendering it inside raw HTML with unsafe_allow_html=True can enable HTML injection. Render the summary as Markdown (safe) and only use unsafe for the wrapper.
- st.markdown(f"""
- <div class='analysis-box'>
- <p>{summary}</p>
- </div>
- """, unsafe_allow_html=True)
+ # Safe: summary rendered as Markdown (no unsafe HTML)
+ st.markdown("<div class='analysis-box'>", unsafe_allow_html=True)
+ st.markdown(summary) # safe Markdown rendering
+ st.markdown("</div>", unsafe_allow_html=True)🤖 Prompt for AI Agents
In advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/app.py around
lines 285-289, the code injects external LLM output directly into raw HTML with
unsafe_allow_html=True which risks XSS; change it to render the wrapper HTML
separately (without inserting the summary) using unsafe_allow_html=True, then
render the summary itself with a safe Markdown call (e.g., st.markdown(summary)
or st.write(summary)) so the LLM content is not passed through unsafe HTML;
ensure any needed styling targets the wrapper element rather than embedding the
summary inside the unsafe HTML string.
```python
def build_graph(nodes, edges):
    """Generate a glowing, animated Plotly network graph."""
```
Docstring says “animated” but no animation is implemented.
Either add Plotly frames/transitions, or adjust the docstring/title to avoid implying animation.
🤖 Prompt for AI Agents
In advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/graph_builder.py
around lines 5-6, the docstring claims the graph is "animated" but the function
does not implement any Plotly frames or transitions; either remove/adjust the
word "animated" to accurately describe a static glowing Plotly network graph, or
implement real animation by building a frames list (one frame per
timestep/state), adding layout.updatemenus and layout.sliders for playback
controls, and returning a Figure with frames and animation transition settings
so the docstring is truthful.
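If real animation is wanted instead, a minimal Plotly frames sketch (illustrative two-frame data, not the PR's graph) looks like:

```python
import plotly.graph_objects as go

fig = go.Figure(
    data=[go.Scatter(x=[0, 1], y=[0, 1], mode="markers")],
    frames=[
        go.Frame(data=[go.Scatter(x=[0, 1], y=[1, 0], mode="markers")]),
        go.Frame(data=[go.Scatter(x=[0, 1], y=[0.5, 0.5], mode="markers")]),
    ],
)
fig.update_layout(
    updatemenus=[{
        "type": "buttons",
        "buttons": [{
            "label": "Play",
            "method": "animate",
            "args": [None, {"frame": {"duration": 500}, "transition": {"duration": 200}}],
        }],
    }]
)
fig.show()
```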
```python
def summarize_workflow(log_text: str):
    """
    Uses Nebius LLM to provide deep, insightful analysis of agent workflow.
    Returns tuple of (summary_text, token_count)
    """
    client = get_nebius_client()
    prompt = f"""You are an expert AI systems analyst specializing in multi-agent workflows and distributed AI architectures.

Analyze the following agent workflow log data and provide a DEEP, INSIGHTFUL analysis that goes beyond surface-level observations.

Log Data:
{log_text}

Your analysis should be structured as follows:

**🧠 AI Insight Summary**

Start with a one-sentence executive summary that captures the essence of this workflow's design philosophy.

**🔹 Key Agents & Their Roles:**
List the agents involved with `code formatting`, then explain WHAT each agent does and WHY it's positioned where it is in the pipeline.

**🔹 Workflow Architecture Analysis:**
- Explain the DESIGN PATTERN (sequential, parallel, hub-and-spoke, waterfall, etc.)
- Discuss WHY this architecture was chosen - what problems does it solve?
- Identify any SMART design decisions (e.g., separation of concerns, modularity)
- Point out any POTENTIAL ISSUES with the current design

**🔹 Data Flow & Dependencies:**
- Trace how information flows between agents
- Explain WHY certain agents depend on others
- Identify any BOTTLENECKS or critical path dependencies
- Discuss whether the flow is optimal or could be improved

**🔹 Performance Characteristics:**
- Analyze the workflow's efficiency characteristics
- Discuss latency implications of the sequential/parallel design
- Suggest specific optimizations with estimated impact (e.g., "parallelizing X and Y could reduce latency by ~40%")
- Comment on scalability and throughput potential

**🔹 Memory & State Management:**
- Discuss how state/context flows through the system
- Identify if there's a RAG pattern, memory loops, or stateless processing
- Explain the implications for consistency and reproducibility

**🟢 Conclusion:**
Provide an overall assessment with specific metrics or observations. Be honest about trade-offs and suggest concrete improvements.

IMPORTANT RULES:
- Be SPECIFIC, not generic
- Use technical terminology appropriately
- Provide REASONING for your observations
- Suggest CONCRETE improvements with estimated impacts
- Make it engaging and insightful, not a boring bullet list
- Use markdown formatting with code blocks for agent names
- Keep the total response under 600 words but pack it with insights"""

    response = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,
        max_tokens=1200,
    )

    usage = response.usage
    total_tokens = usage.total_tokens if usage else 0
    summary_text = response.choices[0].message.content
    return summary_text, total_tokens
```
🧩 Analysis chain
Add resilience and make model/limits configurable.
Handle API errors and align max tokens with the 600‑word target. Parameterize model, temperature, and max_tokens via env.
```diff
 def summarize_workflow(log_text: str):
@@
-    client = get_nebius_client()
+    client = get_nebius_client()
+    model = os.getenv("NEBIUS_MODEL", "meta-llama/Meta-Llama-3.1-8B-Instruct")
+    temperature = float(os.getenv("NEBIUS_TEMPERATURE", "0.7"))
+    max_tokens = int(os.getenv("NEBIUS_MAX_TOKENS", "800"))  # ~600 words
@@
-    response = client.chat.completions.create(
-        model="meta-llama/Meta-Llama-3.1-8B-Instruct",
-        messages=[{"role": "user", "content": prompt}],
-        temperature=0.8,
-        max_tokens=1200,
-    )
-
-    usage = response.usage
-    total_tokens = usage.total_tokens if usage else 0
-    summary_text = response.choices[0].message.content
-    return summary_text, total_tokens
+    try:
+        response = client.chat.completions.create(
+            model=model,
+            messages=[{"role": "user", "content": prompt}],
+            temperature=temperature,
+            max_tokens=max_tokens,
+            timeout=60,
+        )
+        usage = getattr(response, "usage", None)
+        total_tokens = usage.total_tokens if usage else 0
+        summary_text = response.choices[0].message.content
+        return summary_text, total_tokens
+    except Exception as e:
+        # Log and degrade gracefully
+        fallback = f"Analysis unavailable: {e}"
+        return fallback, 0
```

🌐 Web query:
Confirm Nebius’ OpenAI‑compatible base URL and supported model id format (e.g., meta-llama/Meta-Llama-3.1-8B-Instruct). Please provide official docs references.
💡 Result:
- Official OpenAI‑compatible base URL: https://api.studio.nebius.com/v1/ (used in Nebius docs/quickstart and API examples). [1]
- Supported model id format: Nebius uses OpenAI‑style model IDs such as
- meta-llama/Meta-Llama-3.1-8B-Instruct
- meta-llama/Meta-Llama-3.1-8B-Instruct-fast
- Qwen/Qwen3-235B-A22B, etc.
(i.e., vendor/model-name/version — examples appear throughout Nebius docs and blog posts). [2][3]
References
- Nebius Quickstart (API examples, base_url + model field). [1]
- Nebius Function‑calling / API examples (shows base_url and model IDs like meta-llama/Meta-Llama-3.1-8B-Instruct-fast). [2]
- Nebius blog / deployment examples (explicit example using base_model "meta-llama/Meta-Llama-3.1-8B-Instruct"). [3]
If you want, I can fetch the exact doc pages (linked) for direct copy/paste.
Proceeding with codebase verification to confirm the review concerns are valid.
---
🏁 Script executed:
```shell
#!/bin/bash
# Check imports and get_nebius_client() implementation
cat advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py | head -20
```

Length of output: 615
🏁 Script executed:

```shell
#!/bin/bash
# Find get_nebius_client implementation
rg -nA 10 'def get_nebius_client' advance_ai_agents/
```

Length of output: 1238
🏁 Script executed:

```shell
#!/bin/bash
# Check for try-except patterns in this file or related files
rg -n 'try:|except' advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py
```

Length of output: 0
🏁 Script executed:

```shell
#!/bin/bash
# Check if os module is already imported
rg -n '^import os|^from os' advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py
```

Length of output: 11
🏁 Script executed:

```shell
#!/bin/bash
# Check if environment variables are used elsewhere in this file
rg -n 'os\.getenv|environ' advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py
```

Length of output: 183
Add error handling and make model/configuration parameterizable via environment variables.
The function currently lacks resilience and hard-codes model, temperature, and max_tokens. Since os is already imported and get_nebius_client() uses environment variables for API key configuration, extend this pattern to model parameters. Reduce max_tokens from 1200 to align with the stated "under 600 words" target (~800 tokens allows margin). Add a try-except block to handle API failures gracefully.
```diff
 def summarize_workflow(log_text: str):
     """
     Uses Nebius LLM to provide deep, insightful analysis of agent workflow.
     Returns tuple of (summary_text, token_count)
     """
     client = get_nebius_client()
+    model = os.getenv("NEBIUS_MODEL", "meta-llama/Meta-Llama-3.1-8B-Instruct")
+    temperature = float(os.getenv("NEBIUS_TEMPERATURE", "0.7"))
+    max_tokens = int(os.getenv("NEBIUS_MAX_TOKENS", "800"))
     prompt = f"""You are an expert AI systems analyst specializing in multi-agent workflows and distributed AI architectures.
 ...
-    response = client.chat.completions.create(
-        model="meta-llama/Meta-Llama-3.1-8B-Instruct",
-        messages=[{"role": "user", "content": prompt}],
-        temperature=0.8,
-        max_tokens=1200,
-    )
-
-    usage = response.usage
-    total_tokens = usage.total_tokens if usage else 0
-    summary_text = response.choices[0].message.content
-    return summary_text, total_tokens
+    try:
+        response = client.chat.completions.create(
+            model=model,
+            messages=[{"role": "user", "content": prompt}],
+            temperature=temperature,
+            max_tokens=max_tokens,
+            timeout=60,
+        )
+        usage = getattr(response, "usage", None)
+        total_tokens = usage.total_tokens if usage else 0
+        summary_text = response.choices[0].message.content
+        return summary_text, total_tokens
+    except Exception as e:
+        fallback = f"Analysis unavailable: {e}"
+        return fallback, 0
```

Nebius' OpenAI-compatible API supports the model ID format meta-llama/Meta-Llama-3.1-8B-Instruct and uses base URL https://api.studio.nebius.com/v1/.
🤖 Prompt for AI Agents
In advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py
around lines 17-85, the summarize_workflow function hardcodes model, temperature
and max_tokens and lacks error handling; make these configurable via environment
variables (e.g., NEBIUS_MODEL default "meta-llama/Meta-Llama-3.1-8B-Instruct",
NEBIUS_TEMPERATURE default 0.8, NEBIUS_MAX_TOKENS default 800) and replace the
literal values with these variables, reduce the default max_tokens to 800 to
match the response target, and wrap the client.chat.completions.create call in a
try/except that catches API/network errors, logs the exception, and returns a
safe fallback (e.g., empty summary and 0 tokens) so the caller can handle
failures gracefully.
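Under that change, a hypothetical `.env` for this tool would carry the new knobs alongside the existing key (variable names other than NEBIUS_API_KEY are the ones proposed above):

```env
NEBIUS_API_KEY=your_api_key_here
NEBIUS_MODEL=meta-llama/Meta-Llama-3.1-8B-Instruct
NEBIUS_TEMPERATURE=0.7
NEBIUS_MAX_TOKENS=800
```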
```python
def parse_logs(log_path=None):
    """Parse logs and extract workflow nodes and edges - generates random data."""
    logs = generate_random_workflow()
    nodes, edges = [], []
    for event in logs.get("events", []):
        agent = event.get("agent", "unknown")
        next_agent = event.get("next_agent", None)
        nodes.append(agent)
        if next_agent:
            edges.append((agent, next_agent))
    return list(set(nodes)), edges, json.dumps(logs, indent=2)
```
parse_logs ignores log_path and loses node order; dedup edges.
Honor the parameter, preserve insertion order for stable layouts, and avoid duplicate edges.
Apply:
```diff
 def parse_logs(log_path=None):
-    """Parse logs and extract workflow nodes and edges - generates random data."""
-    logs = generate_random_workflow()
-    nodes, edges = [], []
+    """Parse logs and extract workflow nodes and edges.
+    If log_path is provided, read JSON from that file; otherwise generate random data."""
+    if log_path:
+        with open(log_path, "r", encoding="utf-8") as f:
+            logs = json.load(f)
+    else:
+        logs = generate_random_workflow()
+    nodes, edges_set = [], set()
     for event in logs.get("events", []):
         agent = event.get("agent", "unknown")
         next_agent = event.get("next_agent", None)
         nodes.append(agent)
         if next_agent:
-            edges.append((agent, next_agent))
-    return list(set(nodes)), edges, json.dumps(logs, indent=2)
+            edges_set.add((agent, next_agent))
+    nodes_unique = list(dict.fromkeys(nodes))  # preserve order
+    edges = list(edges_set)
+    return nodes_unique, edges, json.dumps(logs, indent=2)
```

🤖 Prompt for AI Agents
+ return nodes_unique, edges, json.dumps(logs, indent=2)🤖 Prompt for AI Agents
In advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/parser.py around
lines 27 to 37, parse_logs currently ignores the log_path, de-duplicates nodes
with set (losing insertion order), and can produce duplicate edges; modify it so
that if log_path is provided it loads/parses the logs from that file (fall back
to generate_random_workflow() on missing/invalid path), build nodes as a list
that preserves first-seen order by appending only if not already present, and
collect edges while skipping duplicates by keeping a seen_edges set to avoid
adding the same (agent, next_agent) tuple more than once; keep returning (nodes,
edges, json.dumps(logs, indent=2)).
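A quick demo of the order-preserving dedup the suggestion relies on:

```python
events = ["input_handler", "intent_classifier", "input_handler", "responder"]
print(list(dict.fromkeys(events)))
# ['input_handler', 'intent_classifier', 'responder'] - first-seen order is kept,
# whereas list(set(events)) returns an arbitrary order
```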
```text
streamlit==1.38.0
openai==1.52.0
python-dotenv==1.0.1
networkx==3.3
plotly==5.24.1
pandas==2.2.3
numpy==1.26.4
requests==2.32.3
```
🧩 Analysis chain
Confirm Python runtime vs NumPy pin (possible 3.13 incompat).
NumPy 1.26.x likely doesn’t support Python 3.13. If you intend Python 3.13+, either bump NumPy (and align pandas compat) or constrain supported Python to <=3.12 for this tool. Consider a constraints/lock approach to avoid repo‑wide conflicts.
🌐 Web query:
Does NumPy 1.26.4 support Python 3.13? If not, what NumPy version first supports 3.13, and which pandas versions are compatible with it?
💡 Result:
Short answers:
- No — NumPy 1.26.4 supports Python 3.9–3.12, not 3.13. [1]
- NumPy added official Python 3.13 support in the 2.1.0 series (NumPy 2.1.0+). [2]
- Pandas first published general compatibility with Python 3.13 in pandas 2.2.3; later pandas 2.x releases also provide 3.13 wheels. (So use pandas >= 2.2.3 for Python 3.13.) [3]
Sources:
[1] NumPy 1.26.4 release notes.
[2] NumPy news (2.1.0 announcement: “Support for Python 3.13”).
[3] pandas 2.2.3 “compatible with Python 3.13” release notes.
---
🏁 Script executed:
```shell
#!/bin/bash
# Check for README and Python version constraints
fd -t f -e md -e txt -e yml -e yaml -e toml -e cfg -e ini | head -20
```

Length of output: 996
🏁 Script executed:

```shell
#!/bin/bash
# Look for Python version constraints in setup files or README
rg -i "python" -A2 -B2 --type md --type txt --type toml --type cfg | head -40
```

Length of output: 27
🌐 Web query:
streamlit 1.38.0 Python 3.13 support
💡 Result:
Short answer: No — Streamlit 1.38.0 does not support Python 3.13. PyPI metadata for 1.38.0 lists "Requires: Python >=3.8, !=3.9.7" and classifiers up through Python 3.12 only. [1] The Anaconda package build for 1.38.0 also constrains Python to "<3.13.0a0" (i.e., Python 3.12 max). [2]
Sources:
- PyPI — streamlit 1.38.0 (requires Python >=3.8; classifiers up to 3.12). [1]
- Anaconda package metadata for streamlit‑1.38.0 (python >=3.12,<3.13.0a0). [2]
🌐 Web query:
openai 1.52.0 Python 3.13 support
💡 Result:
Short answer: PyPI for openai 1.52.0 lists "Requires: Python >=3.7.1" (so it should install on Python 3.13), but the release classifiers on that page do not list Python 3.13 explicitly (they stop at 3.12). See the PyPI release metadata. [1]
Source:
- openai 1.52.0 PyPI release page. [1]
---
🏁 Script executed:
```shell
#!/bin/bash
# Check README in the ai_workflow_visualizer tool directory
cat "advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md" 2>/dev/null | rg -i "python" -C2
```

Length of output: 0
🏁 Script executed:

```shell
#!/bin/bash
# Check main README
cat "README.md" 2>/dev/null | rg -i "python" -C2 | head -30
```

Length of output: 1010
🏁 Script executed:

```shell
#!/bin/bash
# Check for pyproject.toml or setup.py in ai_workflow_visualizer
fd -t f \( -name "pyproject.toml" -o -name "setup.py" -o -name "setup.cfg" \) "advance_ai_agents/advance_ai_agents/ai_workflow_visualizer" 2>/dev/null
```

Length of output: 0
Resolve Python 3.13 incompatibility: upgrade NumPy and Streamlit, or constrain tool to Python ≤3.12.
The repository README specifies "Python 3.10 or higher" without an upper bound, but this tool's requirements block Python 3.13:
- NumPy 1.26.4: Supports only Python 3.9–3.12 (not 3.13)
- Streamlit 1.38.0: Supports only up to Python 3.12 (not 3.13)
To support Python 3.13:
- Upgrade NumPy to 2.1.0+ (first version with Python 3.13 support)
- Upgrade Streamlit to a version with Python 3.13 support (check PyPI for latest)
- Pandas 2.2.3 is already compatible with Python 3.13, so no change needed there
Alternatively, document Python ≤3.12 as the constraint for this tool to avoid creating a false expectation that conflicts with the repo's stated Python support.
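If both runtimes must be supported, pip environment markers offer a middle ground (a hypothetical split; verify the exact minimum versions before pinning):

```text
# requirements.txt with PEP 508 environment markers
numpy==1.26.4; python_version < "3.13"
numpy>=2.1.0; python_version >= "3.13"
```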
🤖 Prompt for AI Agents
In advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/requirements.txt
lines 1-8, NumPy (1.26.4) and Streamlit (1.38.0) do not support Python 3.13;
update NumPy to 2.1.0 or newer and bump Streamlit to a release that declares
Python 3.13 support (check PyPI for the minimal compatible version) in this
requirements file, or alternatively add a note / pin in project metadata (e.g.,
README or pyproject) restricting supported Python to ≤3.12 to avoid claiming
compatibility with 3.13.
Arindam200
left a comment
Thanks for the PR.
Please check the PR reviews by CodeRabbit.
Also, try to add an image of the app working in the README.
Actionable comments posted: 0
♻️ Duplicate comments (1)
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md (1)
131-133: Fix the relative LICENSE link path. The README is nested two levels below the repository root, so the LICENSE link must use the relative path `../../LICENSE` rather than `LICENSE`.

Apply this diff:

```diff
-See the [LICENSE](LICENSE) file for details.
+See the [LICENSE](../../LICENSE) file for details.
```
🧹 Nitpick comments (1)
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md (1)
20-20: Add language specifiers to fenced code blocks. Three code blocks are missing language identifiers, which impacts markdown rendering and syntax highlighting:

- Line 20: Project structure block (no language)
- Line 61: Environment variable block (should be `env` or `bash`)
- Line 123: Example commit block (should be `text` or left as plain code)

Apply this diff:

````diff
-```
+```text
 ai_workflow_visualizer/
````

````diff
-```
+```env
 NEBIUS_API_KEY=your_api_key_here
````

````diff
-```
+```text
 feat: implemented intelligent workflow summarization and visualization using Nebius API
````

Also applies to: 61-61, 123-123
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (7)
- advance_ai_agents/advance_ai_agents/assets/IM_1.png is excluded by !**/*.png
- advance_ai_agents/advance_ai_agents/assets/IM_2.png is excluded by !**/*.png
- advance_ai_agents/advance_ai_agents/assets/IM_3.png is excluded by !**/*.png
- assets/DSPy.png is excluded by !**/*.png
- assets/banner_new.png is excluded by !**/*.png
- assets/gibson.svg is excluded by !**/*.svg
- assets/nebius.png is excluded by !**/*.png
📒 Files selected for processing (1)
- advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md
20-20: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
61-61: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
123-123: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (2)
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md (2)
38-41: ✅ Previously flagged issue addressed. The cd path has been correctly updated to include the nested `advance_ai_agents/` directory structure. Good catch on the fix.

71-73: ✅ Previously flagged issue addressed. The startup command has been correctly updated to use `streamlit run app.py` instead of plain Python.
@Arindam200
Actionable comments posted: 0
🧹 Nitpick comments (2)
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md (2)
62-64: Add language specifier to fenced code block. The code block containing the `.env` configuration is missing a language specifier. This helps markdown renderers apply proper syntax highlighting.

Apply this diff:

````diff
-```
+```bash
 NEBIUS_API_KEY=your_api_key_here
````

Alternatively, use `env` if preferred:

````diff
-```
+```env
 NEBIUS_API_KEY=your_api_key_here
````
124-126: Add language specifier to commit message code block. The code block for the example commit message is missing a language specifier. Use a generic text format:

````diff
-```
+```text
 feat: implemented intelligent workflow summarization and visualization using Nebius API
````
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (3)
- advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/assets/IM_1.png is excluded by !**/*.png
- advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/assets/IM_2.png is excluded by !**/*.png
- advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/assets/IM_3.png is excluded by !**/*.png
📒 Files selected for processing (1)
- advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md
62-62: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
124-124: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (3)
advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/README.md (3)
39-42: ✅ Correct cd path applied. The repository path now properly reflects the nested directory structure. The previous correction to `cd awesome-ai-apps/advance_ai_agents/advance_ai_agents/ai_workflow_visualizer` has been successfully applied.

72-74: ✅ Streamlit runner applied. The instruction to run the app using `streamlit run app.py` is now correct. The previous suggestion to use Streamlit's runner instead of plain Python has been successfully implemented.

132-134: ✅ Relative LICENSE link path corrected. The LICENSE link now correctly points to `../../LICENSE`, reflecting the nested directory depth. This resolves the earlier issue where the relative path was incorrect.
Hi @Arindam200 👋 Thanks again for your earlier feedback and suggestions!
🔒 Entelligence AI Vulnerability Scanner

✅ No security vulnerabilities found! Your code passed our comprehensive security analysis.

📊 Files Analyzed: 4 files
Walkthrough

This PR introduces a comprehensive AI Workflow Visualizer tool as a new module within the advance_ai_agents package. The implementation includes a Streamlit-based web application with a cyberpunk-themed UI that visualizes AI agent workflows from JSON logs using interactive network graphs. The tool integrates with the Nebius AI API for automated workflow summarization using the Llama 3.1 8B model, features real-time monitoring capabilities with performance metrics tracking, and uses NetworkX and Plotly for graph visualization. The PR also includes complete documentation, sample data, image assets, and removes several deprecated asset files from the root assets directory.

Changes
Sequence Diagram

This diagram shows the interactions between components:

```mermaid
sequenceDiagram
actor User
participant StreamlitApp as Streamlit App
participant SessionState as Session State
participant Parser as parser.parse_logs()
participant GraphBuilder as graph_builder.build_graph()
participant NebiusClient as nebius_client.summarize_workflow()
User->>StreamlitApp: Launch Application
StreamlitApp->>StreamlitApp: Initialize UI & Metrics
StreamlitApp->>SessionState: Initialize metrics (workflows=0, tokens=0, latency=[])
alt Real-Time Mode Enabled
loop Every refresh_rate seconds
StreamlitApp->>StreamlitApp: Start timing (parse_start)
StreamlitApp->>Parser: parse_logs()
Parser-->>StreamlitApp: Return (nodes, edges, log_text)
StreamlitApp->>StreamlitApp: Calculate parse_time
StreamlitApp->>StreamlitApp: Update UI metrics (agents count, edges count, timestamp)
StreamlitApp->>GraphBuilder: build_graph(nodes, edges)
GraphBuilder-->>StreamlitApp: Return plotly figure
StreamlitApp->>StreamlitApp: Display graph with unique key
StreamlitApp->>StreamlitApp: Start AI timing (ai_start)
StreamlitApp->>NebiusClient: summarize_workflow(log_text)
NebiusClient-->>StreamlitApp: Return (summary, token_count)
StreamlitApp->>StreamlitApp: Calculate ai_latency
StreamlitApp->>StreamlitApp: Calculate total_latency = parse_time + ai_latency
StreamlitApp->>SessionState: update_performance_metrics(total_latency, token_count)
SessionState->>SessionState: Increment total_workflows
SessionState->>SessionState: Add total_tokens
SessionState->>SessionState: Update latency_history (keep last 10)
SessionState->>SessionState: Calculate avg_latency
StreamlitApp->>StreamlitApp: display_performance_metrics()
StreamlitApp->>User: Show AI analysis summary
StreamlitApp->>StreamlitApp: Sleep for refresh_rate seconds
end
else Manual Mode
User->>StreamlitApp: Click "Generate Workflow" button
StreamlitApp->>StreamlitApp: Start timing (parse_start)
StreamlitApp->>Parser: parse_logs()
Parser-->>StreamlitApp: Return (nodes, edges, log_text)
StreamlitApp->>StreamlitApp: Calculate parse_time
StreamlitApp->>StreamlitApp: Update UI metrics
StreamlitApp->>GraphBuilder: build_graph(nodes, edges)
GraphBuilder-->>StreamlitApp: Return plotly figure
StreamlitApp->>User: Display graph
StreamlitApp->>StreamlitApp: Start AI timing (ai_start)
StreamlitApp->>NebiusClient: summarize_workflow(log_text)
NebiusClient-->>StreamlitApp: Return (summary, token_count)
StreamlitApp->>StreamlitApp: Calculate ai_latency
StreamlitApp->>StreamlitApp: Calculate total_latency
StreamlitApp->>SessionState: update_performance_metrics(total_latency, token_count)
SessionState->>SessionState: Update all metrics
StreamlitApp->>StreamlitApp: display_performance_metrics()
StreamlitApp->>User: Show AI analysis & success message
end
```
(The comment below quotes the same real-time `while True:` loop from app.py shown earlier.)
correctness: while True: loop in real-time mode causes infinite loop, freezing Streamlit UI and making the app unresponsive to user input or stopping (Streamlit is not designed for infinite loops in the main thread).
🤖 AI Agent Prompt for Cursor/Windsurf
📋 Copy this prompt to your AI coding assistant (Cursor, Windsurf, etc.) to get help fixing this issue
In advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/app.py, lines 248-298, the use of `while True:` for real-time mode causes the Streamlit UI to freeze and become unresponsive, as Streamlit apps should not run infinite loops in the main thread. Refactor this section to use a background thread or Streamlit's session state/timer pattern to allow the UI to remain interactive and responsive, and provide a way to stop the real-time updates cleanly.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```python
import threading

def run_realtime():
    iteration = 0
    while st.session_state.get('realtime_enabled', True):
        iteration += 1
        try:
            parse_start = time.time()
            nodes, edges, log_text = parse_logs()
            parse_time = time.time() - parse_start
            with agents_count:
                st.markdown(f"<div class='metric-box'><h3>⚡</h3><p>Agents</p><h2>{len(nodes)}</h2></div>", unsafe_allow_html=True)
            with edges_count:
                st.markdown(f"<div class='metric-box'><h3>🔗</h3><p>Connections</p><h2>{len(edges)}</h2></div>", unsafe_allow_html=True)
            with timestamp:
                current_time = datetime.now().strftime("%H:%M:%S")
                st.markdown(f"<div class='metric-box'><h3>⏰</h3><p>Updated</p><h2>{current_time}</h2></div>", unsafe_allow_html=True)
            with graph_placeholder.container():
                fig = build_graph(nodes, edges)
                st.plotly_chart(fig, use_container_width=True, key=f"graph_{iteration}")
            with summary_placeholder.container():
                st.markdown("<div class='section-header'>🧠 AI ANALYSIS (LIVE)</div>", unsafe_allow_html=True)
                with st.spinner("🤖 Nebius AI is analyzing..."):
                    ai_start = time.time()
                    result = summarize_workflow(log_text)
                if isinstance(result, tuple):
                    summary, token_count = result
                else:
                    summary = result
                    token_count = 0
                ai_latency = time.time() - ai_start
                total_latency = parse_time + ai_latency
                update_performance_metrics(total_latency, token_count)
                display_performance_metrics()
                st.markdown(f"""
                <div class='analysis-box'>
                    <p>{summary}</p>
                </div>
                """, unsafe_allow_html=True)
            st.info(f"🔄 Workflow #{iteration} - Next generation in {refresh_rate} seconds... (Parse: {parse_time:.3f}s | AI: {ai_latency:.3f}s)")
        except Exception as e:
            st.error(f"❌ Error: {e}")
            import traceback
            st.code(traceback.format_exc())
        time.sleep(refresh_rate)

if 'realtime_thread' not in st.session_state or not st.session_state['realtime_thread'].is_alive():
    st.session_state['realtime_enabled'] = True
    st.session_state['realtime_thread'] = threading.Thread(target=run_realtime, daemon=True)
    st.session_state['realtime_thread'].start()
```
```python
st.error(f"❌ Error: {e}")
import traceback
st.code(traceback.format_exc())
```
security: st.error(f"❌ Error: {e}") and st.code(traceback.format_exc()) directly display exception messages and stack traces to users, potentially exposing sensitive internal information or secrets if an error occurs.
🤖 AI Agent Prompt for Cursor/Windsurf
📋 Copy this prompt to your AI coding assistant (Cursor, Windsurf, etc.) to get help fixing this issue
In advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/app.py, lines 294-296, the code displays raw exception messages and stack traces to users via Streamlit (`st.error(f"❌ Error: {e}")` and `st.code(traceback.format_exc())`). This can leak sensitive internal information. Replace these lines with a generic error message for users, and (optionally) log the detailed exception securely on the server side instead.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```python
st.error("❌ An unexpected error occurred. Please contact the administrator.")
# Optionally log the exception details securely on the server side for debugging
# import traceback
# log_error(traceback.format_exc())
```
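A slightly fuller sketch of that server-side pattern with the stdlib logging module (`do_work` is a hypothetical stand-in for the generation step):

```python
import logging
import streamlit as st

logger = logging.getLogger("ai_workflow_visualizer")

def do_work():
    raise RuntimeError("boom")  # hypothetical failing step

try:
    do_work()
except Exception:
    logger.exception("Workflow generation failed")  # full traceback goes to server logs only
    st.error("❌ An unexpected error occurred. Please try again later.")
```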
```python
def build_graph(nodes, edges):
    """Generate a glowing, animated Plotly network graph."""
    G = nx.DiGraph()
    G.add_nodes_from(nodes)
    G.add_edges_from(edges)
    pos = nx.spring_layout(G, seed=42, k=1.8)

    # Vibrant dynamic colors
    neon_colors = ['#FF3CAC', '#784BA0', '#2B86C5', '#00F5A0', '#FF9A8B', '#8EC5FC', '#F9F586']

    edge_traces = []
    for i, edge in enumerate(G.edges()):
        x0, y0 = pos[edge[0]]
        x1, y1 = pos[edge[1]]
        mid_x = (x0 + x1) / 2 + random.uniform(-0.05, 0.05)
        mid_y = (y0 + y1) / 2 + random.uniform(-0.05, 0.05)
        edge_traces.append(
            go.Scatter(
                x=[x0, mid_x, x1],
                y=[y0, mid_y, y1],
                mode='lines',
                line=dict(width=4, color=random.choice(neon_colors), shape='spline'),
                hoverinfo='none',
                opacity=0.8
            )
        )
    node_x = [pos[n][0] for n in G.nodes()]
    node_y = [pos[n][1] for n in G.nodes()]

    node_trace = go.Scatter(
        x=node_x,
        y=node_y,
        mode='markers+text',
        text=list(G.nodes()),
        textposition='bottom center',
        textfont=dict(size=16, color='#FFFFFF', family='Poppins, sans-serif'),
        marker=dict(
            size=45,
            color=[random.choice(neon_colors) for _ in G.nodes()],
            line=dict(width=3, color='white'),
            symbol='circle'
        ),
        hoverinfo='text'
    )

    fig = go.Figure(data=edge_traces + [node_trace])
    fig.update_layout(
        title=dict(
            text="💡 Live AI Agent Workflow",
            font=dict(size=30, color='#FFFFFF', family='Poppins, sans-serif'),
            x=0.5
        ),
        showlegend=False,
        hovermode='closest',
        paper_bgcolor='#0a0e27',
        plot_bgcolor='#0a0e27',
        xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
        yaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
        height=650,
        margin=dict(t=80, b=40, l=20, r=20)
    )
    return fig
```
correctness: build_graph does not check for empty nodes or edges, which can cause a crash or an empty plot if called with empty data.
🤖 AI Agent Prompt for Cursor/Windsurf
📋 Copy this prompt to your AI coding assistant (Cursor, Windsurf, etc.) to get help fixing this issue
In advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/graph_builder.py, lines 5-66, the function `build_graph` does not check for empty `nodes` or `edges`, which can cause a crash or an empty plot if called with empty data. Add a check at the start of the function to raise a ValueError if `nodes` is empty, to prevent runtime errors and ensure the function contract is upheld.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```diff
 def build_graph(nodes, edges):
     """Generate a glowing, animated Plotly network graph."""
+    if not nodes:
+        raise ValueError("No nodes provided for graph visualization.")
     G = nx.DiGraph()
     G.add_nodes_from(nodes)
     G.add_edges_from(edges)
     pos = nx.spring_layout(G, seed=42, k=1.8)
@@ (rest of the function unchanged; see the full listing above) @@
```
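A quick sanity check for the suggested guard might look like this; the import path and agent names are illustrative assumptions, not code from this PR:

```python
# Illustrative smoke test; module path and agent names are assumptions.
from graph_builder import build_graph

fig = build_graph(
    ["input_handler", "intent_classifier"],
    [("input_handler", "intent_classifier")],
)
fig.show()  # renders a two-node workflow graph

try:
    build_graph([], [])
except ValueError as err:
    print(f"Empty-input guard triggered: {err}")
```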
```python
def summarize_workflow(log_text: str):
    """
    Uses Nebius LLM to provide deep, insightful analysis of agent workflow.
    Returns tuple of (summary_text, token_count)
    """
    client = get_nebius_client()
    prompt = f"""You are an expert AI systems analyst specializing in multi-agent workflows and distributed AI architectures.

Analyze the following agent workflow log data and provide a DEEP, INSIGHTFUL analysis that goes beyond surface-level observations.

Log Data:
{log_text}

Your analysis should be structured as follows:

**🧠 AI Insight Summary**

Start with a one-sentence executive summary that captures the essence of this workflow's design philosophy.

**🔹 Key Agents & Their Roles:**
List the agents involved with `code formatting`, then explain WHAT each agent does and WHY it's positioned where it is in the pipeline.

**🔹 Workflow Architecture Analysis:**
- Explain the DESIGN PATTERN (sequential, parallel, hub-and-spoke, waterfall, etc.)
- Discuss WHY this architecture was chosen - what problems does it solve?
- Identify any SMART design decisions (e.g., separation of concerns, modularity)
- Point out any POTENTIAL ISSUES with the current design

**🔹 Data Flow & Dependencies:**
- Trace how information flows between agents
- Explain WHY certain agents depend on others
- Identify any BOTTLENECKS or critical path dependencies
- Discuss whether the flow is optimal or could be improved

**🔹 Performance Characteristics:**
- Analyze the workflow's efficiency characteristics
- Discuss latency implications of the sequential/parallel design
- Suggest specific optimizations with estimated impact (e.g., "parallelizing X and Y could reduce latency by ~40%")
- Comment on scalability and throughput potential

**🔹 Memory & State Management:**
- Discuss how state/context flows through the system
- Identify if there's a RAG pattern, memory loops, or stateless processing
- Explain the implications for consistency and reproducibility

**🟢 Conclusion:**
Provide an overall assessment with specific metrics or observations. Be honest about trade-offs and suggest concrete improvements.

IMPORTANT RULES:
- Be SPECIFIC, not generic
- Use technical terminology appropriately
- Provide REASONING for your observations
- Suggest CONCRETE improvements with estimated impacts
- Make it engaging and insightful, not a boring bullet list
- Use markdown formatting with code blocks for agent names
- Keep the total response under 600 words but pack it with insights"""

    response = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,
        max_tokens=1200,
    )

    usage = response.usage
    total_tokens = usage.total_tokens if usage else 0
    summary_text = response.choices[0].message.content
    return summary_text, total_tokens
```
correctness: summarize_workflow does not handle exceptions from Nebius API calls, so any network/API error will crash the application.
🤖 AI Agent Prompt for Cursor/Windsurf
📋 Copy this prompt to your AI coding assistant (Cursor, Windsurf, etc.) to get help fixing this issue
In advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py, lines 17-85, the function `summarize_workflow` does not handle exceptions from Nebius API calls, so any network or API error will crash the application. Please wrap the Nebius API call and response parsing in a try/except block, and return a meaningful error message and token count 0 if an exception occurs.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```diff
 def summarize_workflow(log_text: str):
     """
     Uses Nebius LLM to provide deep, insightful analysis of agent workflow.
     Returns tuple of (summary_text, token_count)
     """
     client = get_nebius_client()
     prompt = f"""You are an expert AI systems analyst specializing in multi-agent workflows and distributed AI architectures.
@@ (prompt body unchanged; see the full function above) @@
-    response = client.chat.completions.create(
-        model="meta-llama/Meta-Llama-3.1-8B-Instruct",
-        messages=[{"role": "user", "content": prompt}],
-        temperature=0.8,
-        max_tokens=1200,
-    )
-    usage = response.usage
-    total_tokens = usage.total_tokens if usage else 0
-    summary_text = response.choices[0].message.content
-    return summary_text, total_tokens
+    try:
+        response = client.chat.completions.create(
+            model="meta-llama/Meta-Llama-3.1-8B-Instruct",
+            messages=[{"role": "user", "content": prompt}],
+            temperature=0.8,
+            max_tokens=1200,
+        )
+        usage = response.usage
+        total_tokens = usage.total_tokens if usage else 0
+        summary_text = response.choices[0].message.content
+        return summary_text, total_tokens
+    except Exception as e:
+        return f"Nebius API error: {e}", 0
```
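If `get_nebius_client()` wraps the OpenAI-compatible `openai` SDK, as the Nebius integration suggests, a narrower guard avoids silently swallowing genuine programming bugs. This is a sketch assuming openai v1.x, slotting into the `try` block above where `client` and `prompt` are already defined:

```python
from openai import OpenAIError  # base class for the SDK's network/API errors in v1.x

try:
    response = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,
        max_tokens=1200,
    )
except OpenAIError as e:
    # Only SDK-level failures (network, auth, rate limits, HTTP errors)
    # are reported gracefully; unrelated bugs still raise normally.
    return f"Nebius API error: {e}", 0
```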
Quoted context: the same `summarize_workflow` function shown in full above.
performance: summarize_workflow always creates a new Nebius client on every call, causing repeated authentication and connection overhead for frequent summarizations.
🤖 AI Agent Prompt for Cursor/Windsurf
📋 Copy this prompt to your AI coding assistant (Cursor, Windsurf, etc.) to get help fixing this issue
In advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/nebius_client.py, lines 17-85, the summarize_workflow function creates a new Nebius client on every call, causing repeated authentication and connection overhead. Refactor so that the Nebius client is cached and reused across calls (e.g., using functools.lru_cache or a module-level singleton). Update summarize_workflow to use the cached client.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
Suggested refactor (the `...` placeholders come from the original suggestion and stand for the unchanged bodies of `get_nebius_client` and `summarize_workflow`):

```python
from functools import lru_cache

def get_nebius_client():
    ...

@lru_cache(maxsize=1)
def get_cached_nebius_client():
    return get_nebius_client()

def summarize_workflow(log_text: str):
    ...
    client = get_cached_nebius_client()
    ...
    response = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,
        max_tokens=1200,
    )
    ...
```
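Since the app is Streamlit-based, `st.cache_resource` achieves the same reuse without module-level state; here is a minimal sketch, assuming `get_nebius_client` is importable from this PR's `nebius_client` module:

```python
import streamlit as st

from nebius_client import get_nebius_client  # import path assumed from this PR's layout

@st.cache_resource
def cached_nebius_client():
    # Constructed once per server process and reused across Streamlit reruns,
    # avoiding repeated authentication and connection setup.
    return get_nebius_client()

client = cached_nebius_client()
```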
```python
for event in logs.get("events", []):
    agent = event.get("agent", "unknown")
    next_agent = event.get("next_agent", None)
    nodes.append(agent)
```
correctness: nodes list may contain duplicate agent names, causing redundant nodes in the workflow graph and incorrect visualization.
🤖 AI Agent Prompt for Cursor/Windsurf
📋 Copy this prompt to your AI coding assistant (Cursor, Windsurf, etc.) to get help fixing this issue
In advance_ai_agents/advance_ai_agents/ai_workflow_visualizer/parser.py, lines 34-34, the code appends agent names to the `nodes` list without checking for duplicates, which can result in redundant nodes and incorrect workflow visualization. Please update the code so that each agent is only added once to the `nodes` list.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```diff
-    nodes.append(agent)
+    if agent not in nodes:
+        nodes.append(agent)
```
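For large logs, the `agent not in nodes` list scan costs O(n) per event. Below is a sketch of an order-preserving alternative with O(1) membership tests; the function name and the edge-appending line are assumptions about the surrounding parser, not code from this PR:

```python
def collect_nodes_and_edges(logs: dict):
    # Sketch only: mirrors the parse loop above with set-based deduplication.
    seen = set()
    nodes, edges = [], []
    for event in logs.get("events", []):
        agent = event.get("agent", "unknown")
        next_agent = event.get("next_agent", None)
        if agent not in seen:        # O(1) membership test
            seen.add(agent)
            nodes.append(agent)      # first-seen order preserved
        if next_agent:
            edges.append((agent, next_agent))
    return nodes, edges
```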
✨ Feature: AI Workflow Visualizer
This pull request adds a Python-based AI Workflow Visualizer — a tool that intelligently summarizes and visualizes AI agent workflows in real time using Nebius API, Streamlit, and Plotly.
🧩 What’s New
- `requirements.txt` for easy environment setup.

💡 Why This Feature
Understanding how AI agents interact and pass information between prompts, memory, and responses can be challenging.
This visualizer helps developers see and reason about those interactions at a glance.
🛠️ Implementation Overview
🧠 Outcome
A clean, interactive dashboard that makes understanding and debugging AI agent workflows much easier — all in Python, without needing any external visualization tools.
✅ Checklist
- `requirements.txt`

📁 Directory

`advance_ai_agents/ai_workflow_visualizer`

Summary by CodeRabbit

- New Features
- Documentation
EntelligenceAI PR Summary
This PR adds a complete AI Workflow Visualizer module with Streamlit UI, Nebius AI integration, and interactive graph visualization capabilities, while cleaning up deprecated root assets.