3 changes: 3 additions & 0 deletions memory_agents/youtube_trend_agent/.env.example
@@ -0,0 +1,3 @@
OPENAI_API_KEY=Your Minimax api key here

Copilot AI Dec 22, 2025

The spelling of "MiniMax" is inconsistent with the actual branding. According to MiniMax's official documentation, the company name should be "MiniMax" (capital M twice), but in the comment it's written as "Minimax" (only first M capitalized). This should be corrected to "MiniMax" for consistency with the branding used elsewhere in the PR.

Suggested change
OPENAI_API_KEY=Your Minimax api key here
OPENAI_API_KEY=Your MiniMax api key here

EXA_API_KEY=Your exaAI api key here

Copilot AI Dec 22, 2025

The spelling of "ExaAI" is inconsistent. The company name is "Exa" (as shown throughout the codebase in variable names like EXA_API_KEY and references in the README). This should be corrected to "Exa" or "Exa AI" (with a space) for consistency.

Suggested change
EXA_API_KEY=Your exaAI api key here
EXA_API_KEY=Your Exa API key here

MEMORI_API_KEY=Your Memori API key here
Comment on lines +1 to +3

Correctness: Storing API keys directly in the .env.example file can lead to accidental exposure of sensitive information. Consider using placeholders like OPENAI_API_KEY=your_openai_api_key_here to prevent this risk.

πŸ“ Committable Code Suggestion

‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change
OPENAI_API_KEY=Your Minimax api key here
EXA_API_KEY=Your exaAI api key here
MEMORI_API_KEY=Your Memori API key here
OPENAI_API_KEY=your_openai_api_key_here
EXA_API_KEY=your_exaai_api_key_here
MEMORI_API_KEY=your_memori_api_key_here

76 changes: 58 additions & 18 deletions memory_agents/youtube_trend_agent/README.md
@@ -1,14 +1,39 @@
## YouTube Trend Analysis Agent
## YouTube Trend Analysis Agent with Memori & MiniMax

YouTube channel analysis agent powered by **Memori**, **Agno (Nebius)**, **Exa**, and **yt-dlp**.
Paste your YouTube channel URL, ingest recent videos into Memori, then chat with an agent that surfaces trends and concrete new video ideas grounded in your own content.
An AI-powered **YouTube Trend Coach** that uses **Memori v3** as long‑term memory and **MiniMax (OpenAI‑compatible)** for reasoning.

- **Scrapes your channel** with `yt-dlp` and stores video metadata in Memori.
- Uses **MiniMax** to analyze your channel history plus **Exa** web trends.
- Provides a **Streamlit chat UI** to ask for trends and concrete new video ideas grounded in your own content.

---

### Features

- **Direct YouTube scraping**: Uses `yt-dlp` to scrape your channel or playlist from YouTube and collect titles, tags, dates, views, and descriptions.
- **Memori memory store**: Stores each video as a Memori memory (via OpenAI) for fast semantic search and reuse across chats.
- **Web trend context with Exa**: Calls Exa to pull recent articles and topics for your niche and blends them with your own channel history.
- **Streamlit UI**: Sidebar for API keys + channel URL and a chat area for asking about trends and ideas.
- **Direct YouTube scraping**
- Uses `yt-dlp` to scrape a channel or playlist URL (titles, tags, dates, views, descriptions).
- Stores each video as a Memori document for later semantic search.

- **Memori memory store**
- Uses `Memori` + a MiniMax/OpenAI‑compatible client to persist β€œmemories” of your videos.
- Ingestion happens via `ingest_channel_into_memori` in `core.py`, which calls `client.chat.completions.create(...)` so Memori can automatically capture documents (see the sketch after this list).

- **Web trend context with Exa (optional)**
- If `EXA_API_KEY` is set, fetches web articles and topics for your niche via `Exa`.
- Blends Exa trends with your channel history when generating ideas.

- **Streamlit UI**
- Sidebar for API keys, MiniMax base URL, and channel URL.
- Main area provides a chat interface for asking about trends and ideas.
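
The scraping-plus-ingestion flow described above can be pictured with a short sketch. It reuses the registration and attribution calls shown in `core.py` (`Memori(conn=SessionLocal).openai.register(client)` and `mem.attribution(...)`), but everything else (import paths, video fields, prompt wording, channel URL) is illustrative; the real `ingest_channel_into_memori` helper does a fuller metadata pass with error handling.

```python
import os

import yt_dlp
from memori import Memori  # import path assumed to match core.py
from openai import OpenAI
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# OpenAI-compatible client pointed at MiniMax (keys come from .env or the sidebar).
client = OpenAI(
    base_url=os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1"),
    api_key=os.environ["OPENAI_API_KEY"],
)

# Register the client with Memori so completions routed through it are persisted.
engine = create_engine(f"sqlite:///{os.getenv('SQLITE_DB_PATH', './memori.sqlite')}")
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
mem = Memori(conn=SessionLocal).openai.register(client)
mem.attribution(entity_id="youtube-channel", process_id="youtube-trend-agent")

# Flat-extract recent videos from a channel or playlist URL with yt-dlp.
# (Flat extraction only returns basic fields; the real helper also collects
# tags, dates, views, and descriptions.)
ydl_opts = {"extract_flat": True, "quiet": True, "playlistend": 20}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    info = ydl.extract_info(
        "https://www.youtube.com/@your-channel/videos", download=False
    )

for entry in info.get("entries", []):
    # Each completion sent through the registered client becomes a Memori memory.
    client.chat.completions.create(
        model=os.getenv("YOUTUBE_TREND_INGEST_MODEL", "MiniMax-M2.1"),
        messages=[
            {"role": "user", "content": f"Remember this channel video: {entry.get('title')}"},
        ],
    )
```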

---

### Prerequisites

- Python 3.11+
- [`uv`](https://github.com/astral-sh/uv) (recommended) or `pip`
- MiniMax account + API key (used via the OpenAI SDK)
- Optional: Exa and Memori API keys

---

Comment on lines +1 to 39

Correctness: Clarify if both MiniMax and OpenAI accounts are needed or if they are interchangeable, as the 'Memori memory store' mentions both but prerequisites list only MiniMax.

@@ -29,12 +54,12 @@ uv sync

This will create a virtual environment (if needed) and install all dependencies declared in `pyproject.toml`.

3. **Environment variables** (set in your shell or a local `.env` file in this folder):
3. **Environment variables**

- `NEBIUS_API_KEY` – required (used both for Memori ingestion and the Agno-powered advisor).
- `EXA_API_KEY` – optional but recommended (for external trend context via Exa).
- `MEMORI_API_KEY` – optional, for Memori Advanced Augmentation / higher quotas.
- `SQLITE_DB_PATH` – optional, defaults to `./memori.sqlite` if unset.
You can either:

- Set these in your `.env`, (see .env.example) **or**
- Enter them in the Streamlit **sidebar** (the app writes them into `os.environ` for the current process).

---

Comment on lines 54 to 65

Correctness: Re-add the list of environment variables with their descriptions to ensure users understand each variable's purpose.

πŸ“ Committable Code Suggestion

‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change
This will create a virtual environment (if needed) and install all dependencies declared in `pyproject.toml`.
3. **Environment variables** (set in your shell or a local `.env` file in this folder):
3. **Environment variables**
- `NEBIUS_API_KEY` – required (used both for Memori ingestion and the Agno-powered advisor).
- `EXA_API_KEY` – optional but recommended (for external trend context via Exa).
- `MEMORI_API_KEY` – optional, for Memori Advanced Augmentation / higher quotas.
- `SQLITE_DB_PATH` – optional, defaults to `./memori.sqlite` if unset.
You can either:
- Set these in your `.env`, (see .env.example) **or**
- Enter them in the Streamlit **sidebar** (the app writes them into `os.environ` for the current process).
---
3. **Environment variables**
You can either:
- Set these in your `.env`, (see .env.example) **or**
- Enter them in the Streamlit **sidebar** (the app writes them into `os.environ` for the current process).
The following environment variables are used:
- `NEBIUS_API_KEY` – required (used both for Memori ingestion and the Agno-powered advisor).
- `EXA_API_KEY` – optional but recommended (for external trend context via Exa).
- `MEMORI_API_KEY` – optional, for Memori Advanced Augmentation / higher quotas.
- `SQLITE_DB_PATH` – optional, defaults to `./memori.sqlite` if unset.
---

@@ -46,13 +71,28 @@ From the `youtube_trend_agent` directory:
uv run streamlit run app.py
```

---

### Using the App

In the **sidebar**:

1. Enter your **Nebius**, optional **Exa**, and optional **Memori** API keys.
2. Paste your **YouTube channel (or playlist) URL**.
3. Click **β€œIngest channel into Memori”** to scrape and store recent videos.
1. Enter your **MiniMax API Key** and (optionally) **MiniMax Base URL**.
2. Optionally enter **Exa** and **Memori** API keys.
3. Paste your **YouTube channel (or playlist) URL**.
4. Click **β€œSave Settings”** to store the keys for this session.
5. Click **β€œIngest channel into Memori”** to scrape and store recent videos.

Then, in the main chat:

- Ask things like:
- β€œSuggest 5 new video ideas that build on my existing content and current trends.”
- β€œWhat trends am I missing in my current uploads?”
- β€œWhich topics seem to perform best on my channel?”

Then use the main chat box to ask things like:
The agent will:

- β€œSuggest 5 new video ideas that build on my existing content and current trends.”
- β€œWhat trends am I missing in my current uploads?”
- Pull context from **Memori** (your stored video history),
- Use **MiniMax** (`MiniMax-M2.1` by default, configurable),
- Optionally incorporate **Exa** web trends,
- And respond with specific, actionable ideas and analysis.
Comment on lines 71 to +98

Correctness: Clarify the order and optionality of API keys in the sidebar to match the updated instructions, ensuring consistency to avoid user confusion.

92 changes: 51 additions & 41 deletions memory_agents/youtube_trend_agent/app.py
@@ -1,12 +1,12 @@
"""
YouTube Trend Analysis Agent with Memori, Agno (Nebius), and YouTube scraping.
YouTube Trend Analysis Agent with Memori, MiniMax (OpenAI-compatible), and YouTube scraping.

Streamlit app:
- Sidebar: API keys + YouTube channel URL + "Ingest channel into Memori" button.
- Main: Chat interface to ask about trends and get new video ideas.

This app uses:
- Nebius (via both the OpenAI SDK and Agno's Nebius model) for LLM reasoning.
- MiniMax (via the OpenAI SDK) for LLM reasoning.
- yt-dlp to scrape YouTube channel/playlist videos.
- Memori to store and search your channel's video history.
"""
@@ -15,9 +15,8 @@
import os

import streamlit as st
from agno.agent import Agent
from agno.models.nebius import Nebius
from dotenv import load_dotenv
from openai import OpenAI

from core import fetch_exa_trends, ingest_channel_into_memori

@@ -69,18 +68,17 @@ def main():
with st.sidebar:
st.subheader("πŸ”‘ API Keys & Channel")

# Nebius logo above the Nebius API key field
try:
st.image("assets/Nebius_Logo.png", width=120)
except Exception:
# Non-fatal if the logo is missing
pass

nebius_api_key_input = st.text_input(
"Nebius API Key",
value=os.getenv("NEBIUS_API_KEY", ""),
minimax_api_key_input = st.text_input(
"MiniMax API Key",
value=os.getenv("OPENAI_API_KEY", ""),
type="password",
help="Your Nebius API key (used for both Memori and Agno).",
help="Your MiniMax API key (used via the OpenAI-compatible SDK).",
)

minimax_base_url_input = st.text_input(
"MiniMax Base URL",
value=os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1"),
help="Base URL for MiniMax's OpenAI-compatible API.",
)

exa_api_key_input = st.text_input(
Comment on lines 68 to 84

Correctness: The environment variable OPENAI_API_KEY is used for the MiniMax API key, which may cause confusion or errors if the variable is expected to be specific to OpenAI. Consider using a distinct environment variable name for clarity and to avoid potential conflicts.
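
A minimal sketch of that suggestion, assuming hypothetical `MINIMAX_API_KEY` / `MINIMAX_BASE_URL` variables (not defined anywhere in this PR) with a fallback to the current names so existing `.env` files keep working:

```python
import os

# Hypothetical MiniMax-specific variables; fall back to the names the PR uses today.
api_key = os.getenv("MINIMAX_API_KEY") or os.getenv("OPENAI_API_KEY", "")
base_url = (
    os.getenv("MINIMAX_BASE_URL")
    or os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
)
```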

@@ -103,8 +101,10 @@ def main():
)

if st.button("Save Settings"):
if nebius_api_key_input:
os.environ["NEBIUS_API_KEY"] = nebius_api_key_input
if minimax_api_key_input:
os.environ["OPENAI_API_KEY"] = minimax_api_key_input
if minimax_base_url_input:
os.environ["OPENAI_BASE_URL"] = minimax_base_url_input
if exa_api_key_input:
os.environ["EXA_API_KEY"] = exa_api_key_input
if memori_api_key_input:
Expand All @@ -115,8 +115,8 @@ def main():
st.markdown("---")

if st.button("Ingest channel into Memori"):
if not os.getenv("NEBIUS_API_KEY"):
st.warning("NEBIUS_API_KEY is required before ingestion.")
if not os.getenv("OPENAI_API_KEY"):
st.warning("OPENAI_API_KEY (MiniMax) is required before ingestion.")
elif not channel_url_input.strip():
st.warning("Please enter a YouTube channel or playlist URL.")
else:
Comment on lines 115 to 122

Correctness: Update all related documentation and environment setups to reflect the change from NEBIUS_API_KEY to OPENAI_API_KEY.

@@ -139,25 +139,23 @@ def main():
)

# Get keys for main app logic
nebius_key = os.getenv("NEBIUS_API_KEY", "")
if not nebius_key:
api_key = os.getenv("OPENAI_API_KEY", "")
base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
if not api_key:
st.warning(
"⚠️ Please enter your Nebius API key in the sidebar to start chatting!"
"⚠️ Please enter your MiniMax API key in the sidebar to start chatting!"
)
st.stop()

# Initialize Nebius model for the advisor (once)
if "nebius_model" not in st.session_state:
# Initialize MiniMax/OpenAI client for the advisor (once)
if "openai_client" not in st.session_state:
try:
st.session_state.nebius_model = Nebius(
id=os.getenv(
"YOUTUBE_TREND_MODEL",
"moonshotai/Kimi-K2-Instruct",
),
api_key=nebius_key,
st.session_state.openai_client = OpenAI(
base_url=base_url,
api_key=api_key,
)
except Exception as e:
st.error(f"Failed to initialize Nebius model: {e}")
st.error(f"Failed to initialize MiniMax client: {e}")
st.stop()

# Display chat history
Comment on lines 139 to 161

Correctness: Add error handling for missing 'OPENAI_API_KEY' and 'OPENAI_BASE_URL' to prevent runtime issues.
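
A sketch of what that could look like, following the Streamlit guard the diff already uses for the API key (the warning text and fallback behaviour are illustrative, not part of the PR):

```python
import os

import streamlit as st

api_key = os.getenv("OPENAI_API_KEY", "")
base_url = os.getenv("OPENAI_BASE_URL", "")

if not api_key:
    st.warning("⚠️ Please enter your MiniMax API key in the sidebar to start chatting!")
    st.stop()

if not base_url:
    # Fall back to MiniMax's documented endpoint instead of failing at request time.
    st.warning("OPENAI_BASE_URL is not set – defaulting to https://api.minimax.io/v1.")
    base_url = "https://api.minimax.io/v1"
```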

@@ -252,18 +250,30 @@ def main():
{exa_trends}
"""

advisor = Agent(
name="YouTube Trend Advisor",
model=st.session_state.nebius_model,
markdown=True,
client = st.session_state.openai_client
completion = client.chat.completions.create(
model=os.getenv(
"YOUTUBE_TREND_MODEL",
"MiniMax-M2.1",

Copilot AI Dec 22, 2025

The default model name "MiniMax-M2.1" may not be a valid model identifier for the MiniMax API. Based on common naming conventions for OpenAI-compatible APIs, model names typically use lowercase with hyphens (e.g., "minimax-m2.1" or similar). Please verify this is the correct model identifier according to MiniMax's API documentation, as using an incorrect model name will cause API calls to fail.

Suggested change
"MiniMax-M2.1",
"minimax-m2.1",

),
messages=[
{
"role": "system",
"content": (
"You are a YouTube strategy assistant that analyzes a creator's "
"channel and suggests specific, actionable video ideas."
),
},
{
"role": "user",
"content": full_prompt,
},
],

Copilot AI Dec 22, 2025

The parameter extra_body={"reasoning_split": True} appears to be a MiniMax-specific API feature. However, this is not documented in the code comments or README, and may cause confusion if users switch to a different OpenAI-compatible provider (as the code suggests is possible). Consider adding a comment explaining what this parameter does and that it's MiniMax-specific, or handling it conditionally based on the provider being used.

Suggested change
],
],
# MiniMax-specific extension: enables "reasoning_split" output.
# If you switch to a different OpenAI-compatible provider/model,
# you may need to remove or adjust this extra_body parameter.

extra_body={"reasoning_split": True},
)

result = advisor.run(full_prompt)
response_text = (
str(result.content)
if hasattr(result, "content")
else str(result)
)
message = completion.choices[0].message
response_text = getattr(message, "content", "") or str(message)
Comment on lines +275 to +276

Copilot AI Dec 22, 2025

The response extraction logic uses getattr(message, "content", "") or str(message), which could produce unhelpful output if the content is an empty string but the message object exists. If MiniMax returns an empty content field for some reason (e.g., during errors or when using reasoning_split), calling str(message) might produce a technical object representation rather than a user-friendly error message. Consider adding explicit error handling or checking the response status before extracting content.

Suggested change
message = completion.choices[0].message
response_text = getattr(message, "content", "") or str(message)
# Safely extract the assistant's message content and handle empty responses.
if not completion or not getattr(completion, "choices", None):
response_text = (
"⚠️ The model did not return any choices. Please try again."
)
else:
message = completion.choices[0].message
content = getattr(message, "content", None)
if isinstance(content, str) and content.strip():
response_text = content
else:
response_text = (
"⚠️ The model returned an empty response. Please try again "
"or rephrase your question."
)

st.session_state.messages.append(
{"role": "assistant", "content": response_text}
Comment on lines 250 to 279

Correctness: The current implementation does not handle potential exceptions from client.chat.completions.create. This could lead to unhandled exceptions if the API call fails. Consider wrapping the call in a try-except block to handle errors gracefully and provide user feedback.

πŸ“ Committable Code Suggestion

‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change
{exa_trends}
"""
advisor = Agent(
name="YouTube Trend Advisor",
model=st.session_state.nebius_model,
markdown=True,
client = st.session_state.openai_client
completion = client.chat.completions.create(
model=os.getenv(
"YOUTUBE_TREND_MODEL",
"MiniMax-M2.1",
),
messages=[
{
"role": "system",
"content": (
"You are a YouTube strategy assistant that analyzes a creator's "
"channel and suggests specific, actionable video ideas."
),
},
{
"role": "user",
"content": full_prompt,
},
],
extra_body={"reasoning_split": True},
)
result = advisor.run(full_prompt)
response_text = (
str(result.content)
if hasattr(result, "content")
else str(result)
)
message = completion.choices[0].message
response_text = getattr(message, "content", "") or str(message)
st.session_state.messages.append(
{"role": "assistant", "content": response_text}
External web trends for this niche (may be partial):
{exa_trends}
"""
try:
client = st.session_state.openai_client
completion = client.chat.completions.create(
model=os.getenv(
"YOUTUBE_TREND_MODEL",
"MiniMax-M2.1",
),
messages=[
{
"role": "system",
"content": (
"You are a YouTube strategy assistant that analyzes a creator's "
"channel and suggests specific, actionable video ideas."
),
},
{
"role": "user",
"content": full_prompt,
},
],
extra_body={"reasoning_split": True},
)
message = completion.choices[0].message
response_text = getattr(message, "content", "") or str(message)
st.session_state.messages.append(
{"role": "assistant", "content": response_text}
)
except Exception as e:
st.error(f"Failed to create completion: {e}")
return

27 changes: 17 additions & 10 deletions memory_agents/youtube_trend_agent/core.py
@@ -2,7 +2,7 @@
Core logic for the YouTube Trend Analysis Agent.

This module contains:
- Memori + Nebius initialization helpers.
- Memori initialization helpers (using an OpenAI-compatible client, e.g. MiniMax).
- YouTube scraping utilities.
- Exa-based trend fetching.
- Channel ingestion into Memori.
Comment on lines 2 to 8

Correctness: The updated module description removes 'Nebius' from the initialization helpers, but the function init_memori_with_nebius still references Nebius. Ensure the documentation accurately reflects the current implementation to avoid confusion.

@@ -43,13 +43,20 @@ def init_memori_with_nebius() -> Memori | None:
Initialize Memori v3 + Nebius client (via the OpenAI SDK).

Copilot AI Dec 22, 2025

The docstring still references "Nebius client" but this has been replaced with an OpenAI-compatible client (MiniMax). The docstring should be updated to remove the "Nebius" reference for consistency with the migration. Consider updating it to just say "Initialize Memori v3 with an OpenAI-compatible client (e.g., MiniMax)."

Suggested change
Initialize Memori v3 + Nebius client (via the OpenAI SDK).
Initialize Memori v3 with an OpenAI-compatible client (e.g., MiniMax).

This is used so Memori can automatically persist "memories" when we send
documents through the registered Nebius-backed client. Agno + Nebius power all
YouTube analysis and idea generation.
documents through the registered OpenAI-compatible client.

NOTE:
- To use MiniMax, set:
OPENAI_BASE_URL = "https://api.minimax.io/v1"
OPENAI_API_KEY = "<your-minimax-api-key>"
"""
nebius_key = os.getenv("NEBIUS_API_KEY", "")
if not nebius_key:
# MiniMax (or other OpenAI-compatible) configuration via standard OpenAI env vars
base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
api_key = os.getenv("OPENAI_API_KEY", "")

if not api_key:
st.warning(
"NEBIUS_API_KEY is not set – Memori v3 ingestion will not be active."
"OPENAI_API_KEY is not set – Memori v3 ingestion will not be active."
)
return None

Comment on lines 43 to 62

Correctness: Add a warning if 'OPENAI_BASE_URL' is not set to inform users of the default 'https://api.minimax.io/v1' being used.

πŸ“ Committable Code Suggestion

‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change
Initialize Memori v3 + Nebius client (via the OpenAI SDK).
This is used so Memori can automatically persist "memories" when we send
documents through the registered Nebius-backed client. Agno + Nebius power all
YouTube analysis and idea generation.
documents through the registered OpenAI-compatible client.
NOTE:
- To use MiniMax, set:
OPENAI_BASE_URL = "https://api.minimax.io/v1"
OPENAI_API_KEY = "<your-minimax-api-key>"
"""
nebius_key = os.getenv("NEBIUS_API_KEY", "")
if not nebius_key:
# MiniMax (or other OpenAI-compatible) configuration via standard OpenAI env vars
base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
api_key = os.getenv("OPENAI_API_KEY", "")
if not api_key:
st.warning(
"NEBIUS_API_KEY is not set – Memori v3 ingestion will not be active."
"OPENAI_API_KEY is not set – Memori v3 ingestion will not be active."
)
return None
"""
Initialize Memori v3 + Nebius client (via the OpenAI SDK).
This is used so Memori can automatically persist "memories" when we send
documents through the registered OpenAI-compatible client.
NOTE:
- To use MiniMax, set:
OPENAI_BASE_URL = "https://api.minimax.io/v1"
OPENAI_API_KEY = "<your-minimax-api-key>"
"""
# MiniMax (or other OpenAI-compatible) configuration via standard OpenAI env vars
base_url = os.getenv("OPENAI_BASE_URL", "https://api.minimax.io/v1")
api_key = os.getenv("OPENAI_API_KEY", "")
if not api_key:
st.warning(
"OPENAI_API_KEY is not set – Memori v3 ingestion will not be active."
)
return None
if not base_url:
st.warning(
"OPENAI_BASE_URL is not set – defaulting to https://api.minimax.io/v1."
)

@@ -69,10 +76,10 @@ def init_memori_with_nebius() -> Memori | None:
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

client = OpenAI(
base_url="https://api.studio.nebius.com/v1/",
api_key=nebius_key,
base_url=base_url,
api_key=api_key,
)
# Use the OpenAI-compatible registration API; the client itself points to Nebius.
# Use the OpenAI-compatible registration API; the client itself points to MiniMax (or any compatible provider).
mem = Memori(conn=SessionLocal).openai.register(client)
# Attribution so Memori can attach memories to this process/entity.
mem.attribution(entity_id="youtube-channel", process_id="youtube-trend-agent")
@@ -297,7 +304,7 @@ def ingest_channel_into_memori(channel_url: str) -> int:
_ = client.chat.completions.create(
model=os.getenv(
"YOUTUBE_TREND_INGEST_MODEL",
"moonshotai/Kimi-K2-Instruct",
"MiniMax-M2.1",

Copilot AI Dec 22, 2025

The default model name "MiniMax-M2.1" may not be a valid model identifier for the MiniMax API. Based on common naming conventions for OpenAI-compatible APIs, model names typically use lowercase with hyphens (e.g., "minimax-m2.1" or similar). Please verify this is the correct model identifier according to MiniMax's API documentation, as using an incorrect model name will cause API calls to fail.

Suggested change
"MiniMax-M2.1",
"minimax-m2.1",

),
messages=[
{
Comment on lines 304 to 310

Correctness: Verify that 'MiniMax-M2.1' is compatible with the existing logic and update any dependencies or expectations related to its output.
