
Feature: Native Venice AI Provider Integration #5820

@SavannahOz

Is your feature request related to a problem? Please describe.

ElizaOS currently lacks support for privacy-focused, uncensored AI providers. Users who value data privacy, open models, and freedom from content filtering have no first-class option within the framework.

This limits ElizaOS's reach among privacy-conscious developers, decentralized AI advocates, and users who reject corporate-controlled AI platforms.

Describe the solution you'd like

Add native Venice AI provider integration to ElizaOS.

Venice AI is a privacy-first, uncensored AI platform that uses open-source models and keeps user data local. It offers an OpenAI-compatible API at https://api.venice.ai/v1.
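
Because the API is OpenAI-compatible, existing OpenAI SDKs should work against it by overriding the base URL. A minimal sketch using the official `openai` npm client (the model name below is only a placeholder, not a confirmed Venice model ID):

```ts
import OpenAI from "openai";

// Point the standard OpenAI client at Venice's OpenAI-compatible endpoint.
// The base URL and VENICE_API_KEY variable come from this proposal; the
// model name is a placeholder.
const venice = new OpenAI({
  baseURL: "https://api.venice.ai/v1",
  apiKey: process.env.VENICE_API_KEY,
});

const completion = await venice.chat.completions.create({
  model: "llama-3.3-70b", // placeholder model id
  messages: [{ role: "user", content: "Hello from ElizaOS" }],
});

console.log(completion.choices[0].message.content);
```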

Implementation should include:

1. VeniceModelProvider built into core or available as a default provider:

   ```ts
   // NOTE: ModelProvider and GenerateTextParams are assumed to come from
   // ElizaOS core; the exact import path may differ.
   class VeniceModelProvider implements ModelProvider {
     async generateText(params: GenerateTextParams): Promise<string> {
       const response = await fetch("https://api.venice.ai/v1/chat/completions", {
         method: "POST",
         headers: {
           "Authorization": `Bearer ${process.env.VENICE_API_KEY}`,
           "Content-Type": "application/json"
         },
         body: JSON.stringify({
           model: params.model,
           messages: params.messages,
           temperature: params.temperature
         })
       });

       if (!response.ok) {
         throw new Error(`Venice AI request failed: ${response.status} ${response.statusText}`);
       }

       const data = await response.json();
       return data.choices[0].message.content;
     }
   }
   ```

2. Auto-detect Venice when MODEL_PROVIDER=venice is set in .env. No plugin install needed; just set the following (a provider-selection sketch follows this list):

   ```
   MODEL_PROVIDER=venice
   VENICE_API_KEY=your_key_here
   ```

3. Update the CLI to support Venice at agent creation (a hypothetical .env scaffolding helper is sketched after this list):

   ```bash
   elizaos create --agent --name=Vena --model-provider=venice
   ```

   → Auto-configures .env and the provider

4. Add documentation:
   - /docs/providers/venice.md
   - Example .env config
   - Privacy-first setup guide
   - VVV staking + VCU usage notes

5. Optional: publish @elizaos/plugin-venice (a hypothetical plugin entry point is sketched after this list):
   - Pre-built provider
   - Twitter/X + Venice posting flows
   - Dashboard toggle for censorship mode
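
For item 2, runtime selection could be as simple as branching on MODEL_PROVIDER at startup. The sketch below is illustrative only: selectModelProvider and OpenAIModelProvider are hypothetical names, not existing ElizaOS APIs, and the params shape follows the GenerateTextParams usage from item 1.

```ts
// Illustrative sketch only: selectModelProvider and OpenAIModelProvider are
// hypothetical names, not part of the current ElizaOS codebase.
function selectModelProvider(): ModelProvider {
  const providerName = (process.env.MODEL_PROVIDER ?? "openai").toLowerCase();

  switch (providerName) {
    case "venice":
      if (!process.env.VENICE_API_KEY) {
        throw new Error("MODEL_PROVIDER=venice requires VENICE_API_KEY to be set");
      }
      return new VeniceModelProvider();
    default:
      return new OpenAIModelProvider(); // existing default, shown as a placeholder
  }
}

// Usage: the runtime asks the selected provider for text generation.
const provider = selectModelProvider();
const reply = await provider.generateText({
  model: "llama-3.3-70b", // placeholder model id
  messages: [{ role: "user", content: "Hello" }],
  temperature: 0.7,
});
```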
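
For item 3, the create command's --model-provider=venice handling could boil down to writing the two variables above into the generated project's .env. This is a hypothetical helper, not an existing function in the elizaos CLI:

```ts
import { appendFileSync } from "node:fs";

// Hypothetical scaffolding helper: appends Venice settings to the generated
// project's .env file. Not part of the current elizaos CLI.
function configureVeniceEnv(apiKey: string, envPath = ".env"): void {
  appendFileSync(envPath, `MODEL_PROVIDER=venice\nVENICE_API_KEY=${apiKey}\n`);
}

// e.g. invoked when `elizaos create ... --model-provider=venice` runs
configureVeniceEnv(process.env.VENICE_API_KEY ?? "your_key_here");
```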
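
For the optional plugin route (item 5), the package entry point might simply export the provider plus a small descriptor. The VenicePlugin interface below is defined locally for illustration and is not the real @elizaos/core plugin type:

```ts
// Hypothetical descriptor for @elizaos/plugin-venice. VenicePlugin is a
// local illustration type, not the actual ElizaOS plugin interface.
interface VenicePlugin {
  name: string;
  description: string;
  createModelProvider: () => ModelProvider;
}

export const venicePlugin: VenicePlugin = {
  name: "plugin-venice",
  description: "Venice AI model provider for privacy-first, uncensored agents",
  createModelProvider: () => new VeniceModelProvider(),
};

export default venicePlugin;
```
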
Describe alternatives you've considered

- Using OpenAI / Anthropic → rejected due to data logging and censorship
- Running local LLMs (e.g. Llama) → too resource-heavy for most users
- Forking ElizaOS → not sustainable for long-term updates

Additional context

Venice AI aligns perfectly with ElizaOS’s open, agent-driven vision:

- ✅ No data stored on servers
- ✅ Uncensored responses
- ✅ Open-source model access
- ✅ VVV token staking instead of pay-per-call

This integration would empower users to run truly private, autonomous agents — the way AI should be.

Let’s make ElizaOS the home for free AI agents — not just another wrapper for corporate APIs.

I'd be happy to test the integration once it's implemented.
