LLMCA (Language Model Cellular Automata) is an experimental project that combines cellular automata with large language models (LLMs). It simulates a cognitive space where each cell evolves based on rules defined and interpreted by an LLM, considering the state of its neighbors.
- Cognitive Units with Memory: Each cell acts as a cognitive unit with configurable temporal memory, storing its past states with timestamps for historical awareness.
- LLM-Driven Evolution: Cells determine their next state by querying an LLM, providing their memory history and their neighbors' current states. The LLM responds with a new state and, optionally, an updated rule, using a structured JSON schema (`CognitiveUnitPair`).
- Von Neumann Neighborhood: Cells interact with their immediate neighbors in a 2D grid using the Von Neumann neighborhood (north, south, east, and west).
- Distributed Computation: Supports distributing computations across multiple LLM API instances, executing tasks in parallel for improved performance.
- Entity Management System: Built-in `LifeManager` for managing multiple simulation entities with persistence and lifecycle management.
- Flexible API Configuration: Supports multiple LLM resolvers via TOML configuration (`resolvers.toml`) or environment variables, allowing heterogeneous API backends.
- JSON Schema Integration: Uses `schemars` for automatic schema generation, ensuring type-safe communication between the simulation and LLM APIs (see the sketch after this list).
- Visualization: Renders the simulation in real time using Macroquad, representing cell states with colors derived from the hexadecimal strings returned by the LLM.
- Persistence: Saves the simulation state to disk (in the `.life` directory), allowing resumption from previous steps.
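As a rough illustration of how such a (rule, state) pair could be modeled with `schemars`, consider the sketch below. The struct name `CognitiveUnitPair` comes from the project, but the field types and derives shown here are assumptions for illustration, not the project's actual definition:

```rust
use schemars::{schema_for, JsonSchema};
use serde::{Deserialize, Serialize};

/// Illustrative (rule, state) pair exchanged with the LLM.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
struct CognitiveUnitPair {
    /// Natural-language rule the cell follows (may be empty).
    rule: String,
    /// Current state, e.g. a hex color string such as "#ff0000".
    state: String,
}

fn main() {
    // schemars derives a JSON Schema that can be embedded in the LLM prompt,
    // so replies can be validated and deserialized in a type-safe way.
    let schema = schema_for!(CognitiveUnitPair);
    println!("{}", serde_json::to_string_pretty(&schema).unwrap());
}
```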
- Rust & Cargo: Ensure you have Rust and Cargo installed.
- LLM API Access: Requires access to an LLM API compatible with the OpenAI API format (e.g., OpenAI, Ollama). Set up necessary environment variables (see Usage).
- Macroquad: For visualization.
- Clone: `git clone https://github.com/pinsky-three/llmca.git`
- Build: `cd llmca && cargo build`
- API Configuration: Configure LLM resolvers using one of two methods:

  Option A: TOML Configuration (Recommended)

  Create a `resolvers.toml` file in the project root:

  ```toml
  [[resolvers]]
  api_url = "http://localhost:11434/v1"
  model_name = "phi3"
  api_key = "ollama"

  [[resolvers]]
  api_url = "http://localhost:11435/v1"
  model_name = "llama2"
  api_key = "ollama"
  ```

  Option B: Environment Variables

  Create a `.env` file in the project root:

  ```env
  OPENAI_API_URL="http://your_api_url:port/v1"  # Comma-separated for multiple APIs
  OPENAI_MODEL_NAME="your_model_name"           # Comma-separated for multiple models
  OPENAI_API_KEY="your_api_key"                 # Comma-separated for multiple keys
  ```

  If using multiple APIs, ensure the numbers of URLs, model names, and API keys match (see the sketch after these steps).
- Run: `cargo run -p minimal-ui`
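To make the matching-counts constraint concrete, the comma-separated variables can be zipped into per-resolver (URL, model, key) triples. The helper below is only an illustrative sketch, not the project's actual loading code:

```rust
use std::env;

/// Split a comma-separated environment variable into trimmed, non-empty entries.
fn split_env(key: &str) -> Vec<String> {
    env::var(key)
        .unwrap_or_default()
        .split(',')
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect()
}

fn main() {
    let urls = split_env("OPENAI_API_URL");
    let models = split_env("OPENAI_MODEL_NAME");
    let keys = split_env("OPENAI_API_KEY");
    // Each resolver needs exactly one URL, model name, and API key,
    // so the counts must match.
    assert_eq!(urls.len(), models.len());
    assert_eq!(models.len(), keys.len());
    for ((url, model), _key) in urls.iter().zip(&models).zip(&keys) {
        println!("resolver: {model} @ {url}");
    }
}
```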
The LLM receives a JSON input representing a cell's memory (previous states) and its neighbors' current states. It is instructed to return a JSON object containing the next state and, optionally, a new rule, following the `CognitiveUnitPair` schema.
Example LLM System Prompt:

```text
You're a LLM Cognitive Unit and your unique task is to respond with your next (rule, state)
based on your current rule and the states of your neighbors in json format.
Always respond with a plain json compliant with `CognitiveUnitPair` schema.
The user passes your memory and the neighborhood states as a list of 'messages' in json format.
Don't put the json in a code block, don't add explanations, just return the json ready to be parsed.
Only if your rule is empty, you may propose a new rule and return it with the response.
If you think the rule is wrong, you may propose a new rule and return it with the response.
Example of valid response: `{"rule": "rule_1", "state": "state_1"}`
```
Example LLM Input (Simplified):

```json
[
  "self memory",
  {"rule": "be red if neighbors are green", "state": "#ff0000"},
  {"rule": "be red if neighbors are green", "state": "#ff0000"},
  "neighbors",
  {"rule": "...", "state": "#00ff00"},
  {"rule": "...", "state": "#00ff00"}
]
```

Example LLM Output:

```json
{"rule": "be red if neighbors are green", "state": "#ff0000"}
```

The visualization then interprets the state (e.g., `#ff0000`) as a color. The simulation maintains a temporal memory (of configurable size) of past states for each cognitive unit, allowing the LLM to consider historical patterns when determining the next state.
Contributions are welcome! Fork the project and submit pull requests.
This project is licensed under the MIT License. For more details, see the LICENSE file.