Replaced openAI with minimax #117
Conversation
Walkthrough
This PR migrates the YouTube Trend Analysis Agent from the Nebius/Agno SDK to MiniMax using OpenAI-compatible clients. The changes replace all Nebius-specific API references with generic OpenAI client implementations, update environment variable names from NEBIUS_API_KEY to OPENAI_API_KEY, and switch the default model from 'moonshotai/Kimi-K2-Instruct' to 'MiniMax-M2.1'. The refactoring maintains existing functionality while making the codebase provider-agnostic. Documentation is updated to reflect the new MiniMax integration, including a new environment template file and comprehensive setup instructions. The Streamlit UI is modified to support configurable base URLs for MiniMax's endpoint.
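For readers skimming the diff, the provider swap reduces to pointing the standard OpenAI SDK at MiniMax's endpoint. A minimal sketch of that pattern, using the environment variable names and default model introduced by this PR (illustrative only, not a verbatim excerpt from the changed files):

```python
import os
from openai import OpenAI  # OpenAI Python SDK v1.x

# Environment variables introduced by this PR (see the .env.example diff below).
api_key = os.getenv("OPENAI_API_KEY", "")                              # MiniMax API key
base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")  # MiniMax endpoint
model = os.getenv("YOUTUBE_TREND_MODEL", "MiniMax-M2.1")              # default model after the migration

# Any OpenAI-compatible provider can be targeted by changing base_url and model.
client = OpenAI(api_key=api_key, base_url=base_url)

completion = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "Suggest one new video idea for a Python tutorials channel."}],
)
print(completion.choices[0].message.content)
```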
Sequence Diagram
This diagram shows the interactions between components:
sequenceDiagram
participant App as Application
participant Minimax as Minimax API
participant Exa as ExaAI API
participant Memori as Memori API
Note over App: Configuration Update:<br/>API keys added for<br/>external services
App->>Minimax: Authenticate with OPENAI_API_KEY
Minimax-->>App: Authentication response
App->>Exa: Authenticate with EXA_API_KEY
Exa-->>App: Authentication response
App->>Memori: Authenticate with MEMORI_API_KEY
Memori-->>App: Authentication response
| OPENAI_API_KEY=Your Minimax api key here | ||
| EXA_API_KEY=Your exaAI api key here | ||
| MEMORI_API_KEY=Your Memori API key here No newline at end of file |
Correctness: Storing API keys directly in the .env.example file can lead to accidental exposure of sensitive information. Consider using placeholders like OPENAI_API_KEY=your_openai_api_key_here to prevent this risk.
Committable Code Suggestion
Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
| OPENAI_API_KEY=Your Minimax api key here | |
| EXA_API_KEY=Your exaAI api key here | |
| MEMORI_API_KEY=Your Memori API key here | |
| OPENAI_API_KEY=your_openai_api_key_here | |
| EXA_API_KEY=your_exaai_api_key_here | |
| MEMORI_API_KEY=your_memori_api_key_here | |
| ## YouTube Trend Analysis Agent with Memori & MiniMax | ||
|
|
||
| YouTube channel analysis agent powered by **Memori**, **Agno (Nebius)**, **Exa**, and **yt-dlp**. | ||
| Paste your YouTube channel URL, ingest recent videos into Memori, then chat with an agent that surfaces trends and concrete new video ideas grounded in your own content. | ||
| An AI-powered **YouTube Trend Coach** that uses **Memori v3** as long-term memory and **MiniMax (OpenAI-compatible)** for reasoning. | ||
|
|
||
| - **Scrapes your channel** with `yt-dlp` and stores video metadata in Memori. | ||
| - Uses **MiniMax** to analyze your channel history plus **Exa** web trends. | ||
| - Provides a **Streamlit chat UI** to ask for trends and concrete new video ideas grounded in your own content. | ||
|
|
||
| --- | ||
|
|
||
| ### Features | ||
|
|
||
| - **Direct YouTube scraping**: Uses `yt-dlp` to scrape your channel or playlist from YouTube and collect titles, tags, dates, views, and descriptions. | ||
| - **Memori memory store**: Stores each video as a Memori memory (via OpenAI) for fast semantic search and reuse across chats. | ||
| - **Web trend context with Exa**: Calls Exa to pull recent articles and topics for your niche and blends them with your own channel history. | ||
| - **Streamlit UI**: Sidebar for API keys + channel URL and a chat area for asking about trends and ideas. | ||
| - **Direct YouTube scraping** | ||
| - Uses `yt-dlp` to scrape a channel or playlist URL (titles, tags, dates, views, descriptions). | ||
| - Stores each video as a Memori document for later semantic search. | ||
|
|
||
| - **Memori memory store** | ||
| - Uses `Memori` + a MiniMax/OpenAI-compatible client to persist "memories" of your videos. | ||
| - Ingestion happens via `ingest_channel_into_memori` in `core.py`, which calls `client.chat.completions.create(...)` so Memori can automatically capture documents. | ||
|
|
||
| - **Web trend context with Exa (optional)** | ||
| - If `EXA_API_KEY` is set, fetches web articles and topics for your niche via `Exa`. | ||
| - Blends Exa trends with your channel history when generating ideas. | ||
|
|
||
| - **Streamlit UI** | ||
| - Sidebar for API keys, MiniMax base URL, and channel URL. | ||
| - Main area provides a chat interface for asking about trends and ideas. | ||
|
|
||
| --- | ||
|
|
||
| ### Prerequisites | ||
|
|
||
| - Python 3.11+ | ||
| - [`uv`](https://github.com/astral-sh/uv) (recommended) or `pip` | ||
| - MiniMax account + API key (used via the OpenAI SDK) | ||
| - Optional: Exa and Memori API keys | ||
|
|
||
| --- | ||
|
|
Correctness: Clarify if both MiniMax and OpenAI accounts are needed or if they are interchangeable, as the 'Memori memory store' mentions both but prerequisites list only MiniMax.
|
|
||
| This will create a virtual environment (if needed) and install all dependencies declared in `pyproject.toml`. | ||
|
|
||
| 3. **Environment variables** (set in your shell or a local `.env` file in this folder): | ||
| 3. **Environment variables** | ||
|
|
||
| - `NEBIUS_API_KEY` – required (used both for Memori ingestion and the Agno-powered advisor). | ||
| - `EXA_API_KEY` – optional but recommended (for external trend context via Exa). | ||
| - `MEMORI_API_KEY` – optional, for Memori Advanced Augmentation / higher quotas. | ||
| - `SQLITE_DB_PATH` – optional, defaults to `./memori.sqlite` if unset. | ||
| You can either: | ||
|
|
||
| - Set these in your `.env`, (see .env.example) **or** | ||
| - Enter them in the Streamlit **sidebar** (the app writes them into `os.environ` for the current process). | ||
|
|
||
| --- | ||
|
|
Correctness: Re-add the list of environment variables with their descriptions to ensure users understand each variable's purpose.
Committable Code Suggestion
Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
| This will create a virtual environment (if needed) and install all dependencies declared in `pyproject.toml`. | |
| 3. **Environment variables** (set in your shell or a local `.env` file in this folder): | |
| 3. **Environment variables** | |
| - `NEBIUS_API_KEY` – required (used both for Memori ingestion and the Agno-powered advisor). | |
| - `EXA_API_KEY` – optional but recommended (for external trend context via Exa). | |
| - `MEMORI_API_KEY` – optional, for Memori Advanced Augmentation / higher quotas. | |
| - `SQLITE_DB_PATH` – optional, defaults to `./memori.sqlite` if unset. | |
| You can either: | |
| - Set these in your `.env`, (see .env.example) **or** | |
| - Enter them in the Streamlit **sidebar** (the app writes them into `os.environ` for the current process). | |
| --- | |
| 3. **Environment variables** | |
| You can either: | |
| - Set these in your `.env`, (see .env.example) **or** | |
| - Enter them in the Streamlit **sidebar** (the app writes them into `os.environ` for the current process). | |
| The following environment variables are used: | |
| - `NEBIUS_API_KEY` – required (used both for Memori ingestion and the Agno-powered advisor). | |
| - `EXA_API_KEY` – optional but recommended (for external trend context via Exa). | |
| - `MEMORI_API_KEY` – optional, for Memori Advanced Augmentation / higher quotas. | |
| - `SQLITE_DB_PATH` – optional, defaults to `./memori.sqlite` if unset. | |
| --- |
| uv run streamlit run app.py | ||
| ``` | ||
|
|
||
| --- | ||
|
|
||
| ### Using the App | ||
|
|
||
| In the **sidebar**: | ||
|
|
||
| 1. Enter your **Nebius**, optional **Exa**, and optional **Memori** API keys. | ||
| 2. Paste your **YouTube channel (or playlist) URL**. | ||
| 3. Click **"Ingest channel into Memori"** to scrape and store recent videos. | ||
| 1. Enter your **MiniMax API Key** and (optionally) **MiniMax Base URL**. | ||
| 2. Optionally enter **Exa** and **Memori** API keys. | ||
| 3. Paste your **YouTube channel (or playlist) URL**. | ||
| 4. Click **"Save Settings"** to store the keys for this session. | ||
| 5. Click **"Ingest channel into Memori"** to scrape and store recent videos. | ||
|
|
||
| Then, in the main chat: | ||
|
|
||
| - Ask things like: | ||
| - "Suggest 5 new video ideas that build on my existing content and current trends." | ||
| - "What trends am I missing in my current uploads?" | ||
| - "Which topics seem to perform best on my channel?" | ||
|
|
||
| Then use the main chat box to ask things like: | ||
| The agent will: | ||
|
|
||
| - "Suggest 5 new video ideas that build on my existing content and current trends." | ||
| - "What trends am I missing in my current uploads?" | ||
| - Pull context from **Memori** (your stored video history), | ||
| - Use **MiniMax** (`MiniMax-M2.1` by default, configurable), | ||
| - Optionally incorporate **Exa** web trends, | ||
| - And respond with specific, actionable ideas and analysis. |
Correctness: Clarify the order and optionality of API keys in the sidebar to match the updated instructions, ensuring consistency to avoid user confusion.
| with st.sidebar: | ||
| st.subheader("π API Keys & Channel") | ||
|
|
||
| # Nebius logo above the Nebius API key field | ||
| try: | ||
| st.image("assets/Nebius_Logo.png", width=120) | ||
| except Exception: | ||
| # Non-fatal if the logo is missing | ||
| pass | ||
|
|
||
| nebius_api_key_input = st.text_input( | ||
| "Nebius API Key", | ||
| value=os.getenv("NEBIUS_API_KEY", ""), | ||
| minimax_api_key_input = st.text_input( | ||
| "MiniMax API Key", | ||
| value=os.getenv("OPENAI_API_KEY", ""), | ||
| type="password", | ||
| help="Your Nebius API key (used for both Memori and Agno).", | ||
| help="Your MiniMax API key (used via the OpenAI-compatible SDK).", | ||
| ) | ||
|
|
||
| minimax_base_url_input = st.text_input( | ||
| "MiniMax Base URL", | ||
| value=os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1"), | ||
| help="Base URL for MiniMax's OpenAI-compatible API.", | ||
| ) | ||
|
|
||
| exa_api_key_input = st.text_input( |
Correctness: The environment variable OPENAI_API_KEY is used for the MiniMax API key, which may cause confusion or errors if the variable is expected to be specific to OpenAI. Consider using a distinct environment variable name for clarity and to avoid potential conflicts.
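One way to address this naming concern, sketched with a hypothetical `MINIMAX_API_KEY` / `MINIMAX_BASE_URL` pair (not part of this PR) that falls back to the variables the PR already uses:

```python
import os

# Hypothetical provider-specific names; the PR itself reads OPENAI_API_KEY / OPENAI_BASE_URL.
# The fallback keeps existing setups working while making the MiniMax usage explicit.
api_key = os.getenv("MINIMAX_API_KEY") or os.getenv("OPENAI_API_KEY", "")
base_url = (
    os.getenv("MINIMAX_BASE_URL")
    or os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
)

if not api_key:
    raise RuntimeError("Set MINIMAX_API_KEY (or OPENAI_API_KEY) before starting the app.")
```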
| st.markdown("---") | ||
|
|
||
| if st.button("Ingest channel into Memori"): | ||
| if not os.getenv("NEBIUS_API_KEY"): | ||
| st.warning("NEBIUS_API_KEY is required before ingestion.") | ||
| if not os.getenv("OPENAI_API_KEY"): | ||
| st.warning("OPENAI_API_KEY (MiniMax) is required before ingestion.") | ||
| elif not channel_url_input.strip(): | ||
| st.warning("Please enter a YouTube channel or playlist URL.") | ||
| else: |
Correctness: Update all related documentation and environment setups to reflect the change from NEBIUS_API_KEY to OPENAI_API_KEY.
| ) | ||
|
|
||
| # Get keys for main app logic | ||
| nebius_key = os.getenv("NEBIUS_API_KEY", "") | ||
| if not nebius_key: | ||
| api_key = os.getenv("OPENAI_API_KEY", "") | ||
| base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1") | ||
| if not api_key: | ||
| st.warning( | ||
| "β οΈ Please enter your Nebius API key in the sidebar to start chatting!" | ||
| "β οΈ Please enter your MiniMax API key in the sidebar to start chatting!" | ||
| ) | ||
| st.stop() | ||
|
|
||
| # Initialize Nebius model for the advisor (once) | ||
| if "nebius_model" not in st.session_state: | ||
| # Initialize MiniMax/OpenAI client for the advisor (once) | ||
| if "openai_client" not in st.session_state: | ||
| try: | ||
| st.session_state.nebius_model = Nebius( | ||
| id=os.getenv( | ||
| "YOUTUBE_TREND_MODEL", | ||
| "moonshotai/Kimi-K2-Instruct", | ||
| ), | ||
| api_key=nebius_key, | ||
| st.session_state.openai_client = OpenAI( | ||
| base_url=base_url, | ||
| api_key=api_key, | ||
| ) | ||
| except Exception as e: | ||
| st.error(f"Failed to initialize Nebius model: {e}") | ||
| st.error(f"Failed to initialize MiniMax client: {e}") | ||
| st.stop() | ||
|
|
||
| # Display chat history |
Correctness: Add error handling for missing 'OPENAI_API_KEY' and 'OPENAI_BASE_URL' to prevent runtime issues.
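A minimal sketch of the guard being requested, reusing the Streamlit calls already present in the diff (illustrative only; not code from this PR):

```python
import os
import streamlit as st

api_key = os.getenv("OPENAI_API_KEY", "")
base_url = os.getenv("OPENAI_BASE_URL", "")

# Stop early when the key is missing instead of failing later inside the OpenAI client.
if not api_key:
    st.warning("⚠️ OPENAI_API_KEY (MiniMax) is not set. Enter it in the sidebar to continue.")
    st.stop()

# Fall back to the documented MiniMax endpoint when no base URL is provided.
if not base_url:
    st.info("OPENAI_BASE_URL is not set; defaulting to https://api.minimax.io/v1.")
    base_url = "https://api.minimax.io/v1"
```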
| {exa_trends} | ||
| """ | ||
|
|
||
| advisor = Agent( | ||
| name="YouTube Trend Advisor", | ||
| model=st.session_state.nebius_model, | ||
| markdown=True, | ||
| client = st.session_state.openai_client | ||
| completion = client.chat.completions.create( | ||
| model=os.getenv( | ||
| "YOUTUBE_TREND_MODEL", | ||
| "MiniMax-M2.1", | ||
| ), | ||
| messages=[ | ||
| { | ||
| "role": "system", | ||
| "content": ( | ||
| "You are a YouTube strategy assistant that analyzes a creator's " | ||
| "channel and suggests specific, actionable video ideas." | ||
| ), | ||
| }, | ||
| { | ||
| "role": "user", | ||
| "content": full_prompt, | ||
| }, | ||
| ], | ||
| extra_body={"reasoning_split": True}, | ||
| ) | ||
|
|
||
| result = advisor.run(full_prompt) | ||
| response_text = ( | ||
| str(result.content) | ||
| if hasattr(result, "content") | ||
| else str(result) | ||
| ) | ||
| message = completion.choices[0].message | ||
| response_text = getattr(message, "content", "") or str(message) | ||
|
|
||
| st.session_state.messages.append( | ||
| {"role": "assistant", "content": response_text} |
Correctness: The current implementation does not handle potential exceptions from client.chat.completions.create. This could lead to unhandled exceptions if the API call fails. Consider wrapping the call in a try-except block to handle errors gracefully and provide user feedback.
Committable Code Suggestion
Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
| {exa_trends} | |
| """ | |
| advisor = Agent( | |
| name="YouTube Trend Advisor", | |
| model=st.session_state.nebius_model, | |
| markdown=True, | |
| client = st.session_state.openai_client | |
| completion = client.chat.completions.create( | |
| model=os.getenv( | |
| "YOUTUBE_TREND_MODEL", | |
| "MiniMax-M2.1", | |
| ), | |
| messages=[ | |
| { | |
| "role": "system", | |
| "content": ( | |
| "You are a YouTube strategy assistant that analyzes a creator's " | |
| "channel and suggests specific, actionable video ideas." | |
| ), | |
| }, | |
| { | |
| "role": "user", | |
| "content": full_prompt, | |
| }, | |
| ], | |
| extra_body={"reasoning_split": True}, | |
| ) | |
| result = advisor.run(full_prompt) | |
| response_text = ( | |
| str(result.content) | |
| if hasattr(result, "content") | |
| else str(result) | |
| ) | |
| message = completion.choices[0].message | |
| response_text = getattr(message, "content", "") or str(message) | |
| st.session_state.messages.append( | |
| {"role": "assistant", "content": response_text} | |
| External web trends for this niche (may be partial): | |
| {exa_trends} | |
| """ | |
| try: | |
| client = st.session_state.openai_client | |
| completion = client.chat.completions.create( | |
| model=os.getenv( | |
| "YOUTUBE_TREND_MODEL", | |
| "MiniMax-M2.1", | |
| ), | |
| messages=[ | |
| { | |
| "role": "system", | |
| "content": ( | |
| "You are a YouTube strategy assistant that analyzes a creator's " | |
| "channel and suggests specific, actionable video ideas." | |
| ), | |
| }, | |
| { | |
| "role": "user", | |
| "content": full_prompt, | |
| }, | |
| ], | |
| extra_body={"reasoning_split": True}, | |
| ) | |
| message = completion.choices[0].message | |
| response_text = getattr(message, "content", "") or str(message) | |
| st.session_state.messages.append( | |
| {"role": "assistant", "content": response_text} | |
| ) | |
| except Exception as e: | |
| st.error(f"Failed to create completion: {e}") | |
| return | |
| Core logic for the YouTube Trend Analysis Agent. | ||
| This module contains: | ||
| - Memori + Nebius initialization helpers. | ||
| - Memori initialization helpers (using an OpenAI-compatible client, e.g. MiniMax). | ||
| - YouTube scraping utilities. | ||
| - Exa-based trend fetching. | ||
| - Channel ingestion into Memori. |
Correctness: The updated module description removes 'Nebius' from the initialization helpers, but the function init_memori_with_nebius still references Nebius. Ensure the documentation accurately reflects the current implementation to avoid confusion.
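One low-risk way to reconcile the name with the new behavior, sketched under the assumption that other code may still import the old name (the rename itself is not part of this PR):

```python
def init_memori_with_openai_client():
    """Initialize Memori v3 with an OpenAI-compatible client (e.g. MiniMax)."""
    ...  # body unchanged from init_memori_with_nebius in core.py


# Keep the old name importable while callers migrate to the new one.
init_memori_with_nebius = init_memori_with_openai_client
```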
| Initialize Memori v3 + Nebius client (via the OpenAI SDK). | ||
| This is used so Memori can automatically persist "memories" when we send | ||
| documents through the registered Nebius-backed client. Agno + Nebius power all | ||
| YouTube analysis and idea generation. | ||
| documents through the registered OpenAI-compatible client. | ||
| NOTE: | ||
| - To use MiniMax, set: | ||
| OPENAI_BASE_URL = "https://api.minimax.io/v1" | ||
| OPENAI_API_KEY = "<your-minimax-api-key>" | ||
| """ | ||
| nebius_key = os.getenv("NEBIUS_API_KEY", "") | ||
| if not nebius_key: | ||
| # MiniMax (or other OpenAI-compatible) configuration via standard OpenAI env vars | ||
| base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1") | ||
| api_key = os.getenv("OPENAI_API_KEY", "") | ||
|
|
||
| if not api_key: | ||
| st.warning( | ||
| "NEBIUS_API_KEY is not set β Memori v3 ingestion will not be active." | ||
| "OPENAI_API_KEY is not set β Memori v3 ingestion will not be active." | ||
| ) | ||
| return None | ||
|
|
Correctness: Add a warning if 'OPENAI_BASE_URL' is not set to inform users of the default 'https://api.minimax.io/v1' being used.
Committable Code Suggestion
Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
| Initialize Memori v3 + Nebius client (via the OpenAI SDK). | |
| This is used so Memori can automatically persist "memories" when we send | |
| documents through the registered Nebius-backed client. Agno + Nebius power all | |
| YouTube analysis and idea generation. | |
| documents through the registered OpenAI-compatible client. | |
| NOTE: | |
| - To use MiniMax, set: | |
| OPENAI_BASE_URL = "https://api.minimax.io/v1" | |
| OPENAI_API_KEY = "<your-minimax-api-key>" | |
| """ | |
| nebius_key = os.getenv("NEBIUS_API_KEY", "") | |
| if not nebius_key: | |
| # MiniMax (or other OpenAI-compatible) configuration via standard OpenAI env vars | |
| base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1") | |
| api_key = os.getenv("OPENAI_API_KEY", "") | |
| if not api_key: | |
| st.warning( | |
| "NEBIUS_API_KEY is not set β Memori v3 ingestion will not be active." | |
| "OPENAI_API_KEY is not set β Memori v3 ingestion will not be active." | |
| ) | |
| return None | |
| """ | |
| Initialize Memori v3 + Nebius client (via the OpenAI SDK). | |
| This is used so Memori can automatically persist "memories" when we send | |
| documents through the registered OpenAI-compatible client. | |
| NOTE: | |
| - To use MiniMax, set: | |
| OPENAI_BASE_URL = "https://api.minimax.io/v1" | |
| OPENAI_API_KEY = "<your-minimax-api-key>" | |
| """ | |
| # MiniMax (or other OpenAI-compatible) configuration via standard OpenAI env vars | |
| base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1") | |
| api_key = os.getenv("OPENAI_API_KEY", "") | |
| if not api_key: | |
| st.warning( | |
| "OPENAI_API_KEY is not set β Memori v3 ingestion will not be active." | |
| ) | |
| return None | |
| if not base_url: | |
| st.warning( | |
| "OPENAI_BASE_URL is not set β defaulting to https://api.minimax.io/v1." | |
| ) | |
| _ = client.chat.completions.create( | ||
| model=os.getenv( | ||
| "YOUTUBE_TREND_INGEST_MODEL", | ||
| "moonshotai/Kimi-K2-Instruct", | ||
| "MiniMax-M2.1", | ||
| ), | ||
| messages=[ | ||
| { |
Correctness: Verify that 'MiniMax-M2.1' is compatible with the existing logic and update any dependencies or expectations related to its output.
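A quick way to check the identifier against the live endpoint, assuming MiniMax's OpenAI-compatible API exposes the standard model-listing route (not verified here):

```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY", ""),
    base_url=os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1"),
)

try:
    # List model IDs offered by the endpoint and check the default used in this PR.
    available = [m.id for m in client.models.list().data]
    print("MiniMax-M2.1 available:", "MiniMax-M2.1" in available)
    print("All models:", available)
except Exception as exc:
    # Some providers do not expose /models; fall back to a cheap test completion instead.
    print(f"Could not list models ({exc}); try a one-off chat completion instead.")
```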
Pull request overview
This PR migrates the YouTube Trend Analysis Agent from using Nebius/Agno to MiniMax via OpenAI-compatible clients. The migration replaces the Agno framework's Agent abstraction with direct OpenAI SDK chat completion calls, updates all API key references from NEBIUS_API_KEY to OPENAI_API_KEY, and adds support for configurable base URLs to work with MiniMax's endpoints.
Key changes:
- Replaced Agno's Agent.run() pattern with direct OpenAI client chat completion API calls using MiniMax endpoints
- Migrated all environment variables and UI elements from NEBIUS_API_KEY to OPENAI_API_KEY/OPENAI_BASE_URL
- Updated default model from 'moonshotai/Kimi-K2-Instruct' to 'MiniMax-M2.1' throughout the codebase
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| memory_agents/youtube_trend_agent/core.py | Updated Memori initialization to use MiniMax via OPENAI_API_KEY/OPENAI_BASE_URL environment variables; changed default ingestion model to MiniMax-M2.1 |
| memory_agents/youtube_trend_agent/app.py | Replaced Agno Agent implementation with OpenAI client; updated sidebar UI to accept MiniMax credentials; refactored chat completion logic to use direct API calls with reasoning_split parameter |
| memory_agents/youtube_trend_agent/README.md | Comprehensive documentation update reflecting MiniMax migration, including updated prerequisites, setup instructions, and usage examples |
| memory_agents/youtube_trend_agent/.env.example | Added new template file with OPENAI_API_KEY, EXA_API_KEY, and MEMORI_API_KEY placeholders for MiniMax-based configuration |
| @@ -0,0 +1,3 @@ | |||
| OPENAI_API_KEY=Your Minimax api key here | |||
Copilot AI (Dec 22, 2025)
The spelling of "MiniMax" is inconsistent with the actual branding. According to MiniMax's official documentation, the company name should be "MiniMax" (capital M twice), but in the comment it's written as "Minimax" (only first M capitalized). This should be corrected to "MiniMax" for consistency with the branding used elsewhere in the PR.
| OPENAI_API_KEY=Your Minimax api key here | |
| OPENAI_API_KEY=Your MiniMax api key here |
| @@ -0,0 +1,3 @@ | |||
| OPENAI_API_KEY=Your Minimax api key here | |||
| EXA_API_KEY=Your exaAI api key here | |||
Copilot AI (Dec 22, 2025)
The spelling of "ExaAI" is inconsistent. The company name is "Exa" (as shown throughout the codebase in variable names like EXA_API_KEY and references in the README). This should be corrected to "Exa" or "Exa AI" (with a space) for consistency.
| EXA_API_KEY=Your exaAI api key here | |
| EXA_API_KEY=Your Exa API key here |
| completion = client.chat.completions.create( | ||
| model=os.getenv( | ||
| "YOUTUBE_TREND_MODEL", | ||
| "MiniMax-M2.1", |
Copilot AI (Dec 22, 2025)
The default model name "MiniMax-M2.1" may not be a valid model identifier for the MiniMax API. Based on common naming conventions for OpenAI-compatible APIs, model names typically use lowercase with hyphens (e.g., "minimax-m2.1" or similar). Please verify this is the correct model identifier according to MiniMax's API documentation, as using an incorrect model name will cause API calls to fail.
| "MiniMax-M2.1", | |
| "minimax-m2.1", |
| model=os.getenv( | ||
| "YOUTUBE_TREND_INGEST_MODEL", | ||
| "moonshotai/Kimi-K2-Instruct", | ||
| "MiniMax-M2.1", |
Copilot AI (Dec 22, 2025)
The default model name "MiniMax-M2.1" may not be a valid model identifier for the MiniMax API. Based on common naming conventions for OpenAI-compatible APIs, model names typically use lowercase with hyphens (e.g., "minimax-m2.1" or similar). Please verify this is the correct model identifier according to MiniMax's API documentation, as using an incorrect model name will cause API calls to fail.
| "MiniMax-M2.1", | |
| "minimax-m2.1", |
| "role": "user", | ||
| "content": full_prompt, | ||
| }, | ||
| ], |
Copilot AI (Dec 22, 2025)
The parameter extra_body={"reasoning_split": True} appears to be a MiniMax-specific API feature. However, this is not documented in the code comments or README, and may cause confusion if users switch to a different OpenAI-compatible provider (as the code suggests is possible). Consider adding a comment explaining what this parameter does and that it's MiniMax-specific, or handling it conditionally based on the provider being used.
| ], | |
| ], | |
| # MiniMax-specific extension: enables "reasoning_split" output. | |
| # If you switch to a different OpenAI-compatible provider/model, | |
| # you may need to remove or adjust this extra_body parameter. |
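A sketch of the conditional handling suggested above; keying the flag off the base URL is an assumption, not something this PR implements:

```python
def build_completion_kwargs(base_url: str, model: str, messages: list[dict]) -> dict:
    """Attach the MiniMax-specific flag only when the MiniMax endpoint is in use."""
    kwargs = {"model": model, "messages": messages}
    if "minimax" in base_url.lower():
        # "reasoning_split" is taken from the diff above and appears to be MiniMax-specific.
        kwargs["extra_body"] = {"reasoning_split": True}
    return kwargs


# Usage inside the existing handler (names as in app.py):
# completion = client.chat.completions.create(**build_completion_kwargs(base_url, model, messages))
```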
| message = completion.choices[0].message | ||
| response_text = getattr(message, "content", "") or str(message) |
Copilot AI (Dec 22, 2025)
The response extraction logic uses getattr(message, "content", "") or str(message), which could produce unhelpful output if the content is an empty string but the message object exists. If MiniMax returns an empty content field for some reason (e.g., during errors or when using reasoning_split), calling str(message) might produce a technical object representation rather than a user-friendly error message. Consider adding explicit error handling or checking the response status before extracting content.
| message = completion.choices[0].message | |
| response_text = getattr(message, "content", "") or str(message) | |
| # Safely extract the assistant's message content and handle empty responses. | |
| if not completion or not getattr(completion, "choices", None): | |
| response_text = ( | |
| "β οΈ The model did not return any choices. Please try again." | |
| ) | |
| else: | |
| message = completion.choices[0].message | |
| content = getattr(message, "content", None) | |
| if isinstance(content, str) and content.strip(): | |
| response_text = content | |
| else: | |
| response_text = ( | |
| "β οΈ The model returned an empty response. Please try again " | |
| "or rephrase your question." | |
| ) |
|
|
||
| def init_memori_with_nebius() -> Memori | None: | ||
| """ | ||
| Initialize Memori v3 + Nebius client (via the OpenAI SDK). |
Copilot AI (Dec 22, 2025)
The docstring still references "Nebius client" but this has been replaced with an OpenAI-compatible client (MiniMax). The docstring should be updated to remove the "Nebius" reference for consistency with the migration. Consider updating it to just say "Initialize Memori v3 with an OpenAI-compatible client (e.g., MiniMax)."
| Initialize Memori v3 + Nebius client (via the OpenAI SDK). | |
| Initialize Memori v3 with an OpenAI-compatible client (e.g., MiniMax). |
Linked Issue
Closes #
Type of Change
Summary
README Checklist
- Created a `README.md` file for my project.
- The `README.md` follows the official `.github/README_TEMPLATE.md`.
- Assets are placed in the `assets` folder and included in my `README.md`.
Contributor Checklist
- Project is placed in the correct category folder (e.g., `advance_ai_agents`, `rag_apps`).
- Added `requirements.txt` or `pyproject.toml` for dependencies.
- Added a `.env.example` file if environment variables are needed and ensured no secrets are committed.
Additional Comments
EntelligenceAI PR Summary
This PR migrates the YouTube Trend Analysis Agent from Nebius/Agno to MiniMax using OpenAI-compatible clients.