
Conversation

@3rd-Son (Contributor) commented Dec 22, 2025

πŸ”— Linked Issue

Closes #

βœ… Type of Change

  • ✨ New Project/Feature
  • 🐞 Bug Fix
  • πŸ“š Documentation Update
  • πŸ”¨ Refactor or Other

πŸ“ Summary

πŸ“– README Checklist

  • I have created a README.md file for my project.
  • My README.md follows the official .github/README_TEMPLATE.md.
  • I have included clear installation and usage instructions in my README.md.
  • I have added a GIF or screenshot to the assets folder and included it in my README.md.

βœ”οΈ Contributor Checklist

  • I have read the CONTRIBUTING.md document.
  • My code follows the project's coding standards.
  • I have placed my project in the correct directory (e.g., advance_ai_agents, rag_apps).
  • I have included a requirements.txt or pyproject.toml for dependencies.
  • I have added a .env.example file if environment variables are needed and ensured no secrets are committed.
  • My pull request is focused on a single project or change.

πŸ’¬ Additional Comments


EntelligenceAI PR Summary

This PR migrates the YouTube Trend Analysis Agent from Nebius/Agno to MiniMax using OpenAI-compatible clients.

  • Replaced all NEBIUS_API_KEY references with OPENAI_API_KEY across app.py and core.py
  • Added configurable OPENAI_BASE_URL support defaulting to MiniMax's API endpoint
  • Refactored LLM interactions from Agno's Agent.run() to direct OpenAI chat-completions API calls with reasoning_split enabled
  • Changed default model from 'moonshotai/Kimi-K2-Instruct' to 'MiniMax-M2.1' in both app.py and core.py
  • Added .env.example template file with OPENAI_API_KEY, EXA_API_KEY, and MEMORI_API_KEY placeholders
  • Updated README.md with MiniMax-focused documentation, prerequisites, and detailed Streamlit UI usage instructions
  • Removed Nebius branding and logo from the Streamlit interface

@entelligence-ai-pr-reviews

Walkthrough

This PR migrates the YouTube Trend Analysis Agent from Nebius/Agno SDK to MiniMax using OpenAI-compatible clients. The changes replace all Nebius-specific API references with generic OpenAI client implementations, update environment variable names from NEBIUS_API_KEY to OPENAI_API_KEY, and switch the default model from 'moonshotai/Kimi-K2-Instruct' to 'MiniMax-M2.1'. The refactoring maintains existing functionality while making the codebase provider-agnostic. Documentation is updated to reflect the new MiniMax integration, including a new environment template file and comprehensive setup instructions. The Streamlit UI is modified to support configurable base URLs for MiniMax's endpoint.
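For orientation, here is a minimal sketch of the provider-agnostic setup the walkthrough describes, assuming the environment-variable names and default model listed in this PR (it is not the PR's exact code):

```python
import os
from openai import OpenAI

# MiniMax exposes an OpenAI-compatible endpoint, so the standard OpenAI SDK
# can be pointed at it via a configurable base URL.
base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
api_key = os.getenv("OPENAI_API_KEY", "")  # MiniMax API key

client = OpenAI(base_url=base_url, api_key=api_key)

completion = client.chat.completions.create(
    model=os.getenv("YOUTUBE_TREND_MODEL", "MiniMax-M2.1"),
    messages=[
        {"role": "system", "content": "You are a YouTube strategy assistant."},
        {"role": "user", "content": "Suggest three video ideas for a home-coffee channel."},
    ],
    # MiniMax-specific extension passed through the SDK's extra_body hook;
    # other OpenAI-compatible providers may ignore or reject it.
    extra_body={"reasoning_split": True},
)
print(completion.choices[0].message.content)
```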

Changes

| File(s) | Summary |
| --- | --- |
| memory_agents/youtube_trend_agent/.env.example | Added an environment configuration template with placeholders for OPENAI_API_KEY (MiniMax), EXA_API_KEY, and MEMORI_API_KEY. |
| memory_agents/youtube_trend_agent/README.md | Rebranded documentation to emphasize Memori v3 and MiniMax integration; added a Prerequisites section (Python 3.11+, uv, MiniMax account); expanded the "Using the App" guide with Streamlit UI instructions; updated environment variable documentation to reference .env.example and sidebar configuration. |
| memory_agents/youtube_trend_agent/app.py | Migrated from the Agno SDK/Nebius to an OpenAI-compatible client for MiniMax; replaced NEBIUS_API_KEY with OPENAI_API_KEY; added a configurable base URL field; removed the Nebius logo; refactored LLM interaction from Agent.run() to the direct chat completions API with reasoning_split enabled; changed the default model to 'MiniMax-M2.1'. |
| memory_agents/youtube_trend_agent/core.py | Refactored Memori initialization to use a generic OpenAI-compatible client; changed environment variables from NEBIUS_API_KEY to OPENAI_API_KEY and added OPENAI_BASE_URL support (defaulting to the MiniMax endpoint); updated comments to reflect the provider-agnostic approach; changed the default model in ingest_channel_into_memori to 'MiniMax-M2.1'. |

Sequence Diagram

This diagram shows the interactions between components:

```mermaid
sequenceDiagram
    participant App as Application
    participant MiniMax as MiniMax API
    participant Exa as Exa API
    participant Memori as Memori API

    Note over App: Configuration update:<br/>API keys added for<br/>external services

    App->>MiniMax: Authenticate with OPENAI_API_KEY
    MiniMax-->>App: Authentication response

    App->>Exa: Authenticate with EXA_API_KEY
    Exa-->>App: Authentication response

    App->>Memori: Authenticate with MEMORI_API_KEY
    Memori-->>App: Authentication response
```


Comment on lines +1 to +3
```env
OPENAI_API_KEY=Your Minimax api key here
EXA_API_KEY=Your exaAI api key here
MEMORI_API_KEY=Your Memori API key here
```
(no newline at end of file)


Correctness: Free-text values like `Your Minimax api key here` read as if real keys belong in `.env.example`, which invites accidental commits of sensitive information. Prefer conventional placeholders such as `OPENAI_API_KEY=your_openai_api_key_here` to prevent this risk.

πŸ“ Committable Code Suggestion

‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change
```diff
-OPENAI_API_KEY=Your Minimax api key here
-EXA_API_KEY=Your exaAI api key here
-MEMORI_API_KEY=Your Memori API key here
+OPENAI_API_KEY=your_openai_api_key_here
+EXA_API_KEY=your_exaai_api_key_here
+MEMORI_API_KEY=your_memori_api_key_here
```

Comment on lines +1 to 39
## YouTube Trend Analysis Agent with Memori & MiniMax

YouTube channel analysis agent powered by **Memori**, **Agno (Nebius)**, **Exa**, and **yt-dlp**.
Paste your YouTube channel URL, ingest recent videos into Memori, then chat with an agent that surfaces trends and concrete new video ideas grounded in your own content.
An AI-powered **YouTube Trend Coach** that uses **Memori v3** as long‑term memory and **MiniMax (OpenAI‑compatible)** for reasoning.

- **Scrapes your channel** with `yt-dlp` and stores video metadata in Memori.
- Uses **MiniMax** to analyze your channel history plus **Exa** web trends.
- Provides a **Streamlit chat UI** to ask for trends and concrete new video ideas grounded in your own content.

---

### Features

- **Direct YouTube scraping**: Uses `yt-dlp` to scrape your channel or playlist from YouTube and collect titles, tags, dates, views, and descriptions.
- **Memori memory store**: Stores each video as a Memori memory (via OpenAI) for fast semantic search and reuse across chats.
- **Web trend context with Exa**: Calls Exa to pull recent articles and topics for your niche and blends them with your own channel history.
- **Streamlit UI**: Sidebar for API keys + channel URL and a chat area for asking about trends and ideas.
- **Direct YouTube scraping**
- Uses `yt-dlp` to scrape a channel or playlist URL (titles, tags, dates, views, descriptions).
- Stores each video as a Memori document for later semantic search.

- **Memori memory store**
- Uses `Memori` + a MiniMax/OpenAI‑compatible client to persist β€œmemories” of your videos.
- Ingestion happens via `ingest_channel_into_memori` in `core.py`, which calls `client.chat.completions.create(...)` so Memori can automatically capture documents.

- **Web trend context with Exa (optional)**
- If `EXA_API_KEY` is set, fetches web articles and topics for your niche via `Exa`.
- Blends Exa trends with your channel history when generating ideas.

- **Streamlit UI**
- Sidebar for API keys, MiniMax base URL, and channel URL.
- Main area provides a chat interface for asking about trends and ideas.

---

### Prerequisites

- Python 3.11+
- [`uv`](https://github.com/astral-sh/uv) (recommended) or `pip`
- MiniMax account + API key (used via the OpenAI SDK)
- Optional: Exa and Memori API keys

---


Correctness: Clarify if both MiniMax and OpenAI accounts are needed or if they are interchangeable, as the 'Memori memory store' mentions both but prerequisites list only MiniMax.

Comment on lines 54 to 65

This will create a virtual environment (if needed) and install all dependencies declared in `pyproject.toml`.

3. **Environment variables** (set in your shell or a local `.env` file in this folder):
3. **Environment variables**

- `NEBIUS_API_KEY` – required (used both for Memori ingestion and the Agno-powered advisor).
- `EXA_API_KEY` – optional but recommended (for external trend context via Exa).
- `MEMORI_API_KEY` – optional, for Memori Advanced Augmentation / higher quotas.
- `SQLITE_DB_PATH` – optional, defaults to `./memori.sqlite` if unset.
You can either:

- Set these in your `.env`, (see .env.example) **or**
- Enter them in the Streamlit **sidebar** (the app writes them into `os.environ` for the current process).

---


Correctness: Re-add the list of environment variables with their descriptions to ensure users understand each variable's purpose.

πŸ“ Committable Code Suggestion


Suggested change
This will create a virtual environment (if needed) and install all dependencies declared in `pyproject.toml`.
3. **Environment variables** (set in your shell or a local `.env` file in this folder):
3. **Environment variables**
- `NEBIUS_API_KEY` – required (used both for Memori ingestion and the Agno-powered advisor).
- `EXA_API_KEY` – optional but recommended (for external trend context via Exa).
- `MEMORI_API_KEY` – optional, for Memori Advanced Augmentation / higher quotas.
- `SQLITE_DB_PATH` – optional, defaults to `./memori.sqlite` if unset.
You can either:
- Set these in your `.env`, (see .env.example) **or**
- Enter them in the Streamlit **sidebar** (the app writes them into `os.environ` for the current process).
---
3. **Environment variables**
You can either:
- Set these in your `.env`, (see .env.example) **or**
- Enter them in the Streamlit **sidebar** (the app writes them into `os.environ` for the current process).
The following environment variables are used:
- `NEBIUS_API_KEY` – required (used both for Memori ingestion and the Agno-powered advisor).
- `EXA_API_KEY` – optional but recommended (for external trend context via Exa).
- `MEMORI_API_KEY` – optional, for Memori Advanced Augmentation / higher quotas).
- `SQLITE_DB_PATH` – optional, defaults to `./memori.sqlite` if unset.
---

Comment on lines 71 to +98
uv run streamlit run app.py
```

---

### Using the App

In the **sidebar**:

1. Enter your **Nebius**, optional **Exa**, and optional **Memori** API keys.
2. Paste your **YouTube channel (or playlist) URL**.
3. Click **β€œIngest channel into Memori”** to scrape and store recent videos.
1. Enter your **MiniMax API Key** and (optionally) **MiniMax Base URL**.
2. Optionally enter **Exa** and **Memori** API keys.
3. Paste your **YouTube channel (or playlist) URL**.
4. Click **β€œSave Settings”** to store the keys for this session.
5. Click **β€œIngest channel into Memori”** to scrape and store recent videos.

Then, in the main chat:

- Ask things like:
- β€œSuggest 5 new video ideas that build on my existing content and current trends.”
- β€œWhat trends am I missing in my current uploads?”
- β€œWhich topics seem to perform best on my channel?”

Then use the main chat box to ask things like:
The agent will:

- β€œSuggest 5 new video ideas that build on my existing content and current trends.”
- β€œWhat trends am I missing in my current uploads?”
- Pull context from **Memori** (your stored video history),
- Use **MiniMax** (`MiniMax-M2.1` by default, configurable),
- Optionally incorporate **Exa** web trends,
- And respond with specific, actionable ideas and analysis.


Correctness: Clarify the order and optionality of API keys in the sidebar to match the updated instructions, ensuring consistency to avoid user confusion.

Comment on lines 68 to 84
with st.sidebar:
st.subheader("πŸ”‘ API Keys & Channel")

# Nebius logo above the Nebius API key field
try:
st.image("assets/Nebius_Logo.png", width=120)
except Exception:
# Non-fatal if the logo is missing
pass

nebius_api_key_input = st.text_input(
"Nebius API Key",
value=os.getenv("NEBIUS_API_KEY", ""),
minimax_api_key_input = st.text_input(
"MiniMax API Key",
value=os.getenv("OPENAI_API_KEY", ""),
type="password",
help="Your Nebius API key (used for both Memori and Agno).",
help="Your MiniMax API key (used via the OpenAI-compatible SDK).",
)

minimax_base_url_input = st.text_input(
"MiniMax Base URL",
value=os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1"),
help="Base URL for MiniMax's OpenAI-compatible API.",
)

exa_api_key_input = st.text_input(


Correctness: The environment variable OPENAI_API_KEY is used for the MiniMax API key, which may cause confusion or errors if the variable is expected to be specific to OpenAI. Consider using a distinct environment variable name for clarity and to avoid potential conflicts.
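One way to address this, sketched below under the assumption that a provider-specific variable is introduced (MINIMAX_API_KEY and MINIMAX_BASE_URL are hypothetical names, not part of this PR): read the specific variable first and fall back to the generic one so the OpenAI-compatible SDK keeps working.

```python
import os

# Hypothetical provider-specific variables with a fallback to the generic
# names used in this PR; only the OPENAI_* names are read by the current code.
api_key = os.getenv("MINIMAX_API_KEY") or os.getenv("OPENAI_API_KEY", "")
base_url = (
    os.getenv("MINIMAX_BASE_URL")
    or os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
)
```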

Comment on lines 115 to 122
st.markdown("---")

if st.button("Ingest channel into Memori"):
if not os.getenv("NEBIUS_API_KEY"):
st.warning("NEBIUS_API_KEY is required before ingestion.")
if not os.getenv("OPENAI_API_KEY"):
st.warning("OPENAI_API_KEY (MiniMax) is required before ingestion.")
elif not channel_url_input.strip():
st.warning("Please enter a YouTube channel or playlist URL.")
else:


Correctness: Update all related documentation and environment setups to reflect the change from NEBIUS_API_KEY to OPENAI_API_KEY.

Comment on lines 139 to 161
)

# Get keys for main app logic
nebius_key = os.getenv("NEBIUS_API_KEY", "")
if not nebius_key:
api_key = os.getenv("OPENAI_API_KEY", "")
base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
if not api_key:
st.warning(
"⚠️ Please enter your Nebius API key in the sidebar to start chatting!"
"⚠️ Please enter your MiniMax API key in the sidebar to start chatting!"
)
st.stop()

# Initialize Nebius model for the advisor (once)
if "nebius_model" not in st.session_state:
# Initialize MiniMax/OpenAI client for the advisor (once)
if "openai_client" not in st.session_state:
try:
st.session_state.nebius_model = Nebius(
id=os.getenv(
"YOUTUBE_TREND_MODEL",
"moonshotai/Kimi-K2-Instruct",
),
api_key=nebius_key,
st.session_state.openai_client = OpenAI(
base_url=base_url,
api_key=api_key,
)
except Exception as e:
st.error(f"Failed to initialize Nebius model: {e}")
st.error(f"Failed to initialize MiniMax client: {e}")
st.stop()

# Display chat history


Correctness: Add error handling for missing 'OPENAI_API_KEY' and 'OPENAI_BASE_URL' to prevent runtime issues.
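A possible shape for that validation, sketched with the Streamlit calls already used in app.py (variable names assumed from this PR, not the PR's exact code):

```python
import os
import streamlit as st

api_key = os.getenv("OPENAI_API_KEY", "")
base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")

# Fail fast with a clear message instead of letting the client raise later.
if not api_key:
    st.error("OPENAI_API_KEY (MiniMax) is not set. Add it in the sidebar or in .env.")
    st.stop()
if not base_url:
    st.warning("OPENAI_BASE_URL is empty; falling back to https://api.minimax.io/v1.")
    base_url = "https://api.minimax.io/v1"
```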

Comment on lines 250 to 279
{exa_trends}
"""

advisor = Agent(
name="YouTube Trend Advisor",
model=st.session_state.nebius_model,
markdown=True,
client = st.session_state.openai_client
completion = client.chat.completions.create(
model=os.getenv(
"YOUTUBE_TREND_MODEL",
"MiniMax-M2.1",
),
messages=[
{
"role": "system",
"content": (
"You are a YouTube strategy assistant that analyzes a creator's "
"channel and suggests specific, actionable video ideas."
),
},
{
"role": "user",
"content": full_prompt,
},
],
extra_body={"reasoning_split": True},
)

result = advisor.run(full_prompt)
response_text = (
str(result.content)
if hasattr(result, "content")
else str(result)
)
message = completion.choices[0].message
response_text = getattr(message, "content", "") or str(message)

st.session_state.messages.append(
{"role": "assistant", "content": response_text}


Correctness: The current implementation does not handle potential exceptions from client.chat.completions.create. This could lead to unhandled exceptions if the API call fails. Consider wrapping the call in a try-except block to handle errors gracefully and provide user feedback.

πŸ“ Committable Code Suggestion


Suggested change
{exa_trends}
"""
advisor = Agent(
name="YouTube Trend Advisor",
model=st.session_state.nebius_model,
markdown=True,
client = st.session_state.openai_client
completion = client.chat.completions.create(
model=os.getenv(
"YOUTUBE_TREND_MODEL",
"MiniMax-M2.1",
),
messages=[
{
"role": "system",
"content": (
"You are a YouTube strategy assistant that analyzes a creator's "
"channel and suggests specific, actionable video ideas."
),
},
{
"role": "user",
"content": full_prompt,
},
],
extra_body={"reasoning_split": True},
)
result = advisor.run(full_prompt)
response_text = (
str(result.content)
if hasattr(result, "content")
else str(result)
)
message = completion.choices[0].message
response_text = getattr(message, "content", "") or str(message)
st.session_state.messages.append(
{"role": "assistant", "content": response_text}
External web trends for this niche (may be partial):
{exa_trends}
"""
try:
client = st.session_state.openai_client
completion = client.chat.completions.create(
model=os.getenv(
"YOUTUBE_TREND_MODEL",
"MiniMax-M2.1",
),
messages=[
{
"role": "system",
"content": (
"You are a YouTube strategy assistant that analyzes a creator's "
"channel and suggests specific, actionable video ideas."
),
},
{
"role": "user",
"content": full_prompt,
},
],
extra_body={"reasoning_split": True},
)
message = completion.choices[0].message
response_text = getattr(message, "content", "") or str(message)
st.session_state.messages.append(
{"role": "assistant", "content": response_text}
)
except Exception as e:
st.error(f"Failed to create completion: {e}")
return

Comment on lines 2 to 8
Core logic for the YouTube Trend Analysis Agent.
This module contains:
- Memori + Nebius initialization helpers.
- Memori initialization helpers (using an OpenAI-compatible client, e.g. MiniMax).
- YouTube scraping utilities.
- Exa-based trend fetching.
- Channel ingestion into Memori.


Correctness: The updated module description removes 'Nebius' from the initialization helpers, but the function init_memori_with_nebius still references Nebius. Ensure the documentation accurately reflects the current implementation to avoid confusion.
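If the function is renamed to match the new wording, a backwards-compatible alias keeps existing imports working; a sketch (the new name is hypothetical, not part of this PR):

```python
def init_memori_with_openai_client():
    """Initialize Memori v3 with an OpenAI-compatible client (e.g. MiniMax)."""
    ...  # existing body of init_memori_with_nebius goes here

# Backwards-compatible alias so callers importing the old name keep working.
init_memori_with_nebius = init_memori_with_openai_client
```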

Comment on lines 43 to 62
Initialize Memori v3 + Nebius client (via the OpenAI SDK).
This is used so Memori can automatically persist "memories" when we send
documents through the registered Nebius-backed client. Agno + Nebius power all
YouTube analysis and idea generation.
documents through the registered OpenAI-compatible client.
NOTE:
- To use MiniMax, set:
OPENAI_BASE_URL = "https://api.minimax.io/v1"
OPENAI_API_KEY = "<your-minimax-api-key>"
"""
nebius_key = os.getenv("NEBIUS_API_KEY", "")
if not nebius_key:
# MiniMax (or other OpenAI-compatible) configuration via standard OpenAI env vars
base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
api_key = os.getenv("OPENAI_API_KEY", "")

if not api_key:
st.warning(
"NEBIUS_API_KEY is not set – Memori v3 ingestion will not be active."
"OPENAI_API_KEY is not set – Memori v3 ingestion will not be active."
)
return None


Correctness: Add a warning if 'OPENAI_BASE_URL' is not set to inform users of the default 'https://api.minimax.io/v1' being used.

πŸ“ Committable Code Suggestion


Suggested change
Initialize Memori v3 + Nebius client (via the OpenAI SDK).
This is used so Memori can automatically persist "memories" when we send
documents through the registered Nebius-backed client. Agno + Nebius power all
YouTube analysis and idea generation.
documents through the registered OpenAI-compatible client.
NOTE:
- To use MiniMax, set:
OPENAI_BASE_URL = "https://api.minimax.io/v1"
OPENAI_API_KEY = "<your-minimax-api-key>"
"""
nebius_key = os.getenv("NEBIUS_API_KEY", "")
if not nebius_key:
# MiniMax (or other OpenAI-compatible) configuration via standard OpenAI env vars
base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
api_key = os.getenv("OPENAI_API_KEY", "")
if not api_key:
st.warning(
"NEBIUS_API_KEY is not set – Memori v3 ingestion will not be active."
"OPENAI_API_KEY is not set – Memori v3 ingestion will not be active."
)
return None
"""
Initialize Memori v3 + Nebius client (via the OpenAI SDK).
This is used so Memori can automatically persist "memories" when we send
documents through the registered OpenAI-compatible client.
NOTE:
- To use MiniMax, set:
OPENAI_BASE_URL = "https://api.minimax.io/v1"
OPENAI_API_KEY = "<your-minimax-api-key>"
"""
# MiniMax (or other OpenAI-compatible) configuration via standard OpenAI env vars
base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
api_key = os.getenv("OPENAI_API_KEY", "")
if not api_key:
st.warning(
"OPENAI_API_KEY is not set – Memori v3 ingestion will not be active."
)
return None
if not base_url:
st.warning(
"OPENAI_BASE_URL is not set – defaulting to https://api.minimax.io/v1."
)

Comment on lines 304 to 310
_ = client.chat.completions.create(
model=os.getenv(
"YOUTUBE_TREND_INGEST_MODEL",
"moonshotai/Kimi-K2-Instruct",
"MiniMax-M2.1",
),
messages=[
{


Correctness: Verify that 'MiniMax-M2.1' is a valid model identifier for the configured MiniMax endpoint, and that downstream ingestion logic does not depend on the previous model's output format.


Copilot AI left a comment


Pull request overview

This PR migrates the YouTube Trend Analysis Agent from using Nebius/Agno to MiniMax via OpenAI-compatible clients. The migration replaces the Agno framework's Agent abstraction with direct OpenAI SDK chat completion calls, updates all API key references from NEBIUS_API_KEY to OPENAI_API_KEY, and adds support for configurable base URLs to work with MiniMax's endpoints.

Key changes:

  • Replaced Agno's Agent.run() pattern with direct OpenAI client chat completion API calls using MiniMax endpoints
  • Migrated all environment variables and UI elements from NEBIUS_API_KEY to OPENAI_API_KEY/OPENAI_BASE_URL
  • Updated default model from 'moonshotai/Kimi-K2-Instruct' to 'MiniMax-M2.1' throughout the codebase

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 7 comments.

| File | Description |
| --- | --- |
| memory_agents/youtube_trend_agent/core.py | Updated Memori initialization to use MiniMax via the OPENAI_API_KEY/OPENAI_BASE_URL environment variables; changed the default ingestion model to MiniMax-M2.1. |
| memory_agents/youtube_trend_agent/app.py | Replaced the Agno Agent implementation with an OpenAI client; updated the sidebar UI to accept MiniMax credentials; refactored the chat completion logic to use direct API calls with the reasoning_split parameter. |
| memory_agents/youtube_trend_agent/README.md | Comprehensive documentation update reflecting the MiniMax migration, including updated prerequisites, setup instructions, and usage examples. |
| memory_agents/youtube_trend_agent/.env.example | New template file with OPENAI_API_KEY, EXA_API_KEY, and MEMORI_API_KEY placeholders for the MiniMax-based configuration. |


@@ -0,0 +1,3 @@
OPENAI_API_KEY=Your Minimax api key here

Copilot AI Dec 22, 2025


The spelling of "MiniMax" is inconsistent with the actual branding. According to MiniMax's official documentation, the company name is "MiniMax" (capital M twice), but in this file it's written as "Minimax" (only the first M capitalized). This should be corrected to "MiniMax" for consistency with the branding used elsewhere in the PR.

Suggested change
```diff
-OPENAI_API_KEY=Your Minimax api key here
+OPENAI_API_KEY=Your MiniMax api key here
```

@@ -0,0 +1,3 @@
OPENAI_API_KEY=Your Minimax api key here
EXA_API_KEY=Your exaAI api key here

Copilot AI Dec 22, 2025


The spelling of "ExaAI" is inconsistent. The company name is "Exa" (as shown throughout the codebase in variable names like EXA_API_KEY and references in the README). This should be corrected to "Exa" or "Exa AI" (with a space) for consistency.

Suggested change
```diff
-EXA_API_KEY=Your exaAI api key here
+EXA_API_KEY=Your Exa API key here
```

completion = client.chat.completions.create(
model=os.getenv(
"YOUTUBE_TREND_MODEL",
"MiniMax-M2.1",

Copilot AI Dec 22, 2025


The default model name "MiniMax-M2.1" may not be a valid model identifier for the MiniMax API. Based on common naming conventions for OpenAI-compatible APIs, model names typically use lowercase with hyphens (e.g., "minimax-m2.1" or similar). Please verify this is the correct model identifier according to MiniMax's API documentation, as using an incorrect model name will cause API calls to fail.

Suggested change
```diff
-"MiniMax-M2.1",
+"minimax-m2.1",
```

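One way to settle this is to query the provider's model list at startup; a minimal sketch, assuming the MiniMax endpoint implements the OpenAI-compatible /v1/models route and using the env-var names from this PR:

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1"),
    api_key=os.getenv("OPENAI_API_KEY", ""),
)

# Compare the configured default against what the endpoint actually serves.
wanted = os.getenv("YOUTUBE_TREND_MODEL", "MiniMax-M2.1")
available = {m.id for m in client.models.list()}
if wanted not in available:
    print(f"Model {wanted!r} not listed by the endpoint; available ids: {sorted(available)}")
```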
model=os.getenv(
"YOUTUBE_TREND_INGEST_MODEL",
"moonshotai/Kimi-K2-Instruct",
"MiniMax-M2.1",

Copilot AI Dec 22, 2025


The default model name "MiniMax-M2.1" may not be a valid model identifier for the MiniMax API. Based on common naming conventions for OpenAI-compatible APIs, model names typically use lowercase with hyphens (e.g., "minimax-m2.1" or similar). Please verify this is the correct model identifier according to MiniMax's API documentation, as using an incorrect model name will cause API calls to fail.

Suggested change
```diff
-"MiniMax-M2.1",
+"minimax-m2.1",
```

"role": "user",
"content": full_prompt,
},
],

Copilot AI Dec 22, 2025


The parameter extra_body={"reasoning_split": True} appears to be a MiniMax-specific API feature. However, this is not documented in the code comments or README, and may cause confusion if users switch to a different OpenAI-compatible provider (as the code suggests is possible). Consider adding a comment explaining what this parameter does and that it's MiniMax-specific, or handling it conditionally based on the provider being used.

Suggested change
```diff
 ],
+# MiniMax-specific extension: enables "reasoning_split" output.
+# If you switch to a different OpenAI-compatible provider/model,
+# you may need to remove or adjust this extra_body parameter.
```
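A sketch of the conditional approach mentioned above, keyed off the configured base URL (a heuristic for illustration, not part of the PR):

```python
import os
from openai import OpenAI

base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
client = OpenAI(base_url=base_url, api_key=os.getenv("OPENAI_API_KEY", ""))

# Only send the MiniMax-specific flag when the endpoint is MiniMax, so other
# OpenAI-compatible providers never receive an unknown parameter.
extra_kwargs = {}
if "minimax" in base_url.lower():
    extra_kwargs["extra_body"] = {"reasoning_split": True}

completion = client.chat.completions.create(
    model=os.getenv("YOUTUBE_TREND_MODEL", "MiniMax-M2.1"),
    messages=[{"role": "user", "content": "Hello"}],
    **extra_kwargs,
)
```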

Comment on lines +275 to +276
message = completion.choices[0].message
response_text = getattr(message, "content", "") or str(message)

Copilot AI Dec 22, 2025


The response extraction logic uses getattr(message, "content", "") or str(message), which could produce unhelpful output if the content is an empty string but the message object exists. If MiniMax returns an empty content field for some reason (e.g., during errors or when using reasoning_split), calling str(message) might produce a technical object representation rather than a user-friendly error message. Consider adding explicit error handling or checking the response status before extracting content.

Suggested change
message = completion.choices[0].message
response_text = getattr(message, "content", "") or str(message)
# Safely extract the assistant's message content and handle empty responses.
if not completion or not getattr(completion, "choices", None):
response_text = (
"⚠️ The model did not return any choices. Please try again."
)
else:
message = completion.choices[0].message
content = getattr(message, "content", None)
if isinstance(content, str) and content.strip():
response_text = content
else:
response_text = (
"⚠️ The model returned an empty response. Please try again "
"or rephrase your question."
)


def init_memori_with_nebius() -> Memori | None:
"""
Initialize Memori v3 + Nebius client (via the OpenAI SDK).

Copilot AI Dec 22, 2025


The docstring still references "Nebius client" but this has been replaced with an OpenAI-compatible client (MiniMax). The docstring should be updated to remove the "Nebius" reference for consistency with the migration. Consider updating it to just say "Initialize Memori v3 with an OpenAI-compatible client (e.g., MiniMax)."

Suggested change
```diff
-Initialize Memori v3 + Nebius client (via the OpenAI SDK).
+Initialize Memori v3 with an OpenAI-compatible client (e.g., MiniMax).
```

@Arindam200 merged commit 72536e8 into Arindam200:main on Dec 29, 2025
7 of 10 checks passed
