The easiest way to create and manage AI-powered social media personas that can authentically engage with followers.
P.S. This guide has been adapted for Windows users; it requires virtualization, which you can enable with the following steps:

- To enable virtualization on your computer, access your BIOS settings by pressing the designated key during startup (usually F2, F10, or Del), navigate to the "Advanced" or "CPU Configuration" section, find the option labeled "Intel Virtualization Technology" (or similar, depending on your CPU), and set it to "Enabled" before saving and exiting the BIOS.
- Boot into Windows, select Start, enter "Windows features", and select "Turn Windows features on or off" from the list of results. In the Windows Features window that opens, find "Virtual Machine Platform", select it, select OK, and restart your PC.
- Use the search bar to find "CMD", right-click it, and open it as administrator. Then type:

```
wsl --install -d Ubuntu-22.04
```

This installs Ubuntu 22.04 on your computer for use with the Windows Subsystem for Linux.
- Set up your Ubuntu username and password (note that the password is not displayed as you type, but your input is being registered, so don't forget what you enter).
- Run the following commands to install Node.js 20 from the NodeSource Debian repository:

```
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
NODE_MAJOR=20
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list
sudo apt-get update
sudo apt-get install -y nodejs
```

- Install Yarn into your Ubuntu runtime, for example via npm:

```
sudo npm install -g yarn
```

and confirm the Yarn version with:

```
yarn -v
```

Congratulations, you now have the prerequisites for running the Oracle Framework! Proceed below to understand more:
Oracle is a TypeScript framework that lets you quickly bootstrap social media personas powered by large language models. Each persona has its own personality, posting style, and interaction patterns.
- Clone and install dependencies:

```
git clone https://github.com/0xFrostyFlakes/oracle-framework/
cd oracle-framework
yarn install
```

- Set up the environment (you only need to run this command the first time):

```
cp .env.example .env
```

- Configure your `.env` file with:
- LLM provider credentials (we recommend OpenRouter); the following is just an example of how it should look:

```
LLM_PROVIDER_URL=https://openrouter.ai/api/v1
LLM_PROVIDER_API_KEY=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
```
- Twitter account credentials (optional)
- Telegram bot token (optional)
- Discord bot token (optional)
In Windows, you can find the `.env` file in the folder `\\wsl$\Ubuntu\home\USERNAME\oracle-framework` (where USERNAME is replaced by your own username in your WSL installation).
You will need your own LLM provider API key to replace the example key above. You can get one at https://openrouter.ai/settings/keys; simply sign up and create a key.
- Create your agent:
  - Modify `src/characters/characters.json` with your agent's personality.

Advanced usage: you can have more than one agent in the file and run more than one agent at a time, but you will need to change the environment variables accordingly.
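For illustration, a characters.json holding two agents might be sketched like this (the entries below are abbreviated, and "agentTwo" is a hypothetical name; each entry follows the character schema described later in this document):

```
[
  {
    "username": "carolainetrades",
    "agentName": "Carolaine"
    ... rest of the first agent's fields ...
  },
  {
    "username": "agentTwo",
    "agentName": "Agent Two"
    ... rest of the second agent's fields ...
  }
]
```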
- Run:

```
# Talk to your agent on the command line; the default username is "carolainetrades"
npm run dev -- cli <username>
# or with yarn
yarn dev cli <username>

# Run on Twitter
# Generate Twitter authentication
npm run dev -- generateCookies <username>
# or with yarn
yarn dev generateCookies <username>

# Start the agent's actions on Twitter
npm run dev -- autoResponder <username>   # Reply to timeline
npm run dev -- topicPost <username>       # Post new topics
npm run dev -- replyToMentions <username> # Handle mentions

# Start the agent on Telegram
npm run dev -- telegram <username>

# Start the agent on Discord
npm run dev -- discord <username>
```

```
# Build the project
npm run build
# or
yarn build

# Format code
npm run format
# or
yarn format
```

- AI-Powered Interactions: Uses LLMs to generate human-like responses and posts
- Personality System: Define your agent's character, tone, and behavior
- Multi-Modal: Can generate and handle text, images, and voice content
- Platform Support: Supports Twitter, Telegram, Discord and more integrations coming soon
- Engagement Tools: Auto-posting, replies, mention handling
Characters are defined in a single JSON file, src/characters/characters.json. Each character entry contains the AI agent's personality, behavior patterns, and platform-specific settings.
```
[
  {
    "username": "your agent's username -- this is the same as the Twitter username",
    "agentName": "your agent's name",
    "bio": [
      "your agent's bio",
      "this is an array of strings",
      "containing your agent's description"
    ],
    "lore": [
      "your agent's lore",
      "this is an array of strings",
      "containing your agent's backstory"
    ],
    "postDirections": [
      "your agent's post directions",
      "this is an array of strings",
      "containing your agent's posting style"
    ],
    "topics": [
      "your agent's topics",
      "this is an array of strings",
      "containing your agent's favorite topics"
    ],
    "adjectives": [
      "adjectives used to create posts"
    ],
    "telegramBotUsername": "your agent's name on Telegram",
    "discordBotUsername": "your agent's name on Discord",
    "postingBehavior": {
      // how long to wait before replying (ms)
      "replyInterval": 2700000,
      // how long to wait before posting a new topic (ms)
      "topicInterval": 10800000,
      // whether to remove periods from the message
      "removePeriods": true,
      // list of rules for chat mode
      "chatModeRules": [
        "if the message says: a you say b",
        "if the message says: good night you reply gn"
      ],
      // model to use for chat mode
      "chatModeModel": "meta-llama/llama-3.3-70b-instruct",
      // whether to generate an image prompt
      "generateImagePrompt": true,
      // chance to post an image when generating a new post on Twitter
      "imagePromptChance": 0.33,
      // chance to post a sticker on Telegram
      "stickerChance": 0.2,
      // list of stickers to use on Telegram
      "stickerFiles": [
        "CAACAgEAAyEFAASMuWLFAAIDkWeDQ_kOhEWzEl0oTiAOokps_P24AAKzBAAC6XRQRu807DcersvfNgQ",
        "CAACAgIAAyEFAASMuWLFAAIDlWeDRJqI8gtcgFW0yBVlSMCfA6KsAAKHMwACYPoYSCgCth58j8ruNgQ"
      ]
    },
    // currently only used on Twitter
    // the provider to use for image generation and the model to use for the prompt
    // ms2 is the only one that has a milady and cheesworld chance
    "imageGenerationBehavior": {
      "provider": "ms2",
      "imageGenerationPromptModel": "meta-llama/llama-3.3-70b-instruct",
      "ms2": {
        "miladyChance": 0.2,
        "cheesworldChance": 0.2
      }
    },
    // the provider to use for audio generation
    "audioGenerationBehavior": {
      "provider": "kokoro",
      "kokoro": {
        "voice": "af",
        "speed": 1.0
      }
    },
    // the main LLM to use for content generation
    "model": "anthropic/claude-3.5-sonnet",
    // the fallback LLM to use if the prompt is banned
    "fallbackModel": "meta-llama/llama-3.3-70b-instruct",
    // the temperature to use
    "temperature": 0.75
  }
]
```

(The comments above are for explanation only; strict JSON does not allow comments, so leave them out of your actual characters.json.)

- agentName: Display name shown on social platforms
- username: Twitter handle without '@'
- bio: The agent's bio
- lore: Background stories that shape the character's history
- postDirections: Guidelines for how the agent should post
- topics: Subjects the agent is knowledgeable about
- adjectives: Character traits that define the personality
- postingBehavior: Technical settings for posting frequency and style
- model: Primary LLM to use for generation
- temperature: "Creativity" level - 0.0-1.0, higher = more creative. An excellent primer on the temperature setting can be found here.
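For a rough intuition of what the temperature setting does, here is a minimal sketch (not framework code) of temperature-scaled softmax, the mechanism most LLM samplers use under the hood:

```typescript
// Illustration of LLM sampling temperature (not part of the Oracle Framework):
// logits are divided by the temperature before softmax, so lower temperatures
// sharpen the distribution (more deterministic) and higher ones flatten it
// (more "creative").
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const scaled = logits.map(z => z / temperature);
  const maxZ = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map(z => Math.exp(z - maxZ));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

const logits = [2.0, 1.0, 0.0];
console.log(softmaxWithTemperature(logits, 0.25)); // sharp: top token dominates
console.log(softmaxWithTemperature(logits, 1.0));  // moderate
console.log(softmaxWithTemperature(logits, 2.0));  // flat: choices more even
```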
For a complete example, check out src/characters/characters.json. We have a sample character called Carolaine. You can see her in action at @carolainetrades.
Required variables in .env:

```
LLM_PROVIDER_URL=
LLM_API_KEY=

# Twitter configuration (if using Twitter)
AGENT_TWITTER_PASSWORD=

# Telegram configuration (if using Telegram)
AGENT_TELEGRAM_API_KEY=

# Discord configuration (if using Discord)
AGENT_DISCORD_API_KEY=

# MS2 configuration (if using MS2 for image generation)
AGENT_MS2_API_KEY=
```
We highly recommend using OpenRouter as your LLM provider. It offers a wide range of models and is OpenAI-compatible.
- OpenRouter (Recommended): Provides access to multiple models
- RedPill: Alternative provider with compatible API
Important note for OpenAI:
If you are using OpenAI as your LLM provider, you will not be able to use Claude 3.5 Sonnet as your primary model, or Llama as a fallback or chat-mode model, so please configure your character file accordingly.
Set LLM_PROVIDER_URL=https://api.openai.com/v1; all the models used in the character file will then have to be OpenAI models (you can find the list of models here).
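For illustration, an OpenAI setup might look like the following (the model names are examples only; consult OpenAI's current model list). In `.env`:

```
LLM_PROVIDER_URL=https://api.openai.com/v1
```

and in `src/characters/characters.json`:

```
"model": "gpt-4o",
"fallbackModel": "gpt-4o-mini",
```

with `postingBehavior.chatModeModel` changed to an OpenAI model as well.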
Generally speaking, we have found that the best all-purpose model for creative writing is anthropic/claude-3.5-sonnet. The reason we fall back to meta-llama/llama-3.3-70b-instruct is that Claude is heavily moderated, and some use cases (like Carolaine) require the agent to speak about topics that the LLM's moderators will not allow. meta-llama/llama-3.3-70b-instruct is also the main model we recommend for chat mode, since we don't currently test for banned prompts in chat mode; this keeps the experience snappy and makes it feel like you are talking to a real person.
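The primary/fallback pattern described above can be sketched roughly as follows (the function and error names here are hypothetical; the framework's actual implementation may differ):

```typescript
// Hypothetical sketch of a primary/fallback model strategy: try the heavily
// moderated primary model first, and retry with a more permissive fallback
// model when the prompt is refused. Not actual Oracle Framework code.
type Generate = (model: string, prompt: string) => string;

class ModerationRefusalError extends Error {}

function generateWithFallback(
  generate: Generate,
  primaryModel: string,
  fallbackModel: string,
  prompt: string,
): string {
  try {
    return generate(primaryModel, prompt);
  } catch (err) {
    if (err instanceof ModerationRefusalError) {
      // The primary model refused the prompt; retry with the fallback model.
      return generate(fallbackModel, prompt);
    }
    throw err; // unrelated failures still surface
  }
}

// Usage with a stubbed generator that refuses on the primary model:
const stub: Generate = (model, prompt) => {
  if (model === "anthropic/claude-3.5-sonnet") throw new ModerationRefusalError("refused");
  return `${model}: ${prompt}`;
};
console.log(
  generateWithFallback(stub, "anthropic/claude-3.5-sonnet", "meta-llama/llama-3.3-70b-instruct", "gm"),
);
```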
- generateCookies: Create Twitter authentication cookies
- autoResponder: Start the timeline response system
- topicPost: Begin posting original content
- replyToMentions: Handle mentions and replies
telegram: Start the Telegram bot
Important:
- Telegram requires a bot token, which you can get from @BotFather in Telegram.
discord: Start the Discord bot
Important:
- Discord requires an API key, which you can get from the Discord Developer Portal. You will also need to enable several permissions and use the Application ID as the agent's name in the config file (src/characters/characters.json).
- Test your agent's personality thoroughly before deployment. The best way to do this is to use CLI mode as you don't need to deploy anything or connect to any external services.
- Monitor early interactions to ensure appropriate responses
- Adjust posting frequencies based on engagement
- Regularly update the agent's knowledge and interests
Current TODO:
- Improve reply quality on Twitter
- Add scraping of targeted content
- Develop reply prioritization system
For issues and feature requests, please use the GitHub issue tracker.