A Streamlit app for collecting, analyzing, and gaining insights into Open Interpreter prompts and their overall efficacy at executing local code.
Note: This project is a Work-In-Progress (WIP). Features and functionalities are continuously being developed and improved.
```shell
git clone https://github.com/actual-username-or-org/oi-prompt-insights.git
cd oi-prompt-insights
pip install -r requirements.txt
streamlit run oi_prompt_insights.py
```
This tool helps users track, analyze, and improve prompts for Open Interpreter. It offers an intuitive interface for recording prompt performance and visualizing analytics.
Minor variations in prompt wording can significantly impact token usage and code execution. For instance, two similar prompts led to vastly different outcomes:
- Prompt A: Simple execution, efficient token use
- Prompt B: Complex local code execution, 100x more tokens used
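To make the comparison concrete, each prompt attempt can be captured as a record holding its token count and outcome. The sketch below is illustrative only; the field names and values are hypothetical, not the app's actual schema:

```python
from dataclasses import dataclass

@dataclass
class PromptRecord:
    # Hypothetical schema for illustration; not the app's actual data model.
    prompt_text: str
    tokens_used: int
    executed_locally: bool
    satisfied: bool

# Two similar prompts with vastly different token costs (made-up numbers)
prompt_a = PromptRecord("List files in this folder", tokens_used=150,
                        executed_locally=True, satisfied=True)
prompt_b = PromptRecord("Enumerate every file, then summarize each one",
                        tokens_used=15000, executed_locally=True, satisfied=False)

# The token-usage ratio surfaces the ~100x difference described above
print(prompt_b.tokens_used / prompt_a.tokens_used)  # → 100.0
```

Storing attempts in this shape makes it straightforward to aggregate token usage and satisfaction rates per prompt variation.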
This tool aims to:
- Track prompt variations and outcomes
- Analyze prompt efficiency
- Identify effective patterns
- Develop best practices
- Enhance Open Interpreter user experience
- Record and categorize prompts
- Visualize prompt effectiveness
- Track satisfaction rates and execution times
- Compare token usage across prompts
- Interactive analytics dashboard
- Data persistence across sessions
- Efficiency Prompt Index in the leaderboard: ranks prompts by their efficiency
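One plausible way an Efficiency Prompt Index could be computed is to reward successful execution and user satisfaction while penalizing token usage and execution time. The function below is a sketch under those assumptions, not the app's actual formula:

```python
def efficiency_index(tokens_used: int, execution_time_s: float,
                     success: bool, satisfied: bool) -> float:
    """Hypothetical efficiency score: higher is better.

    Successful, satisfying prompts score well; heavy token use and long
    execution times drag the score down. Illustrative only.
    """
    if tokens_used <= 0 or execution_time_s < 0:
        raise ValueError("tokens_used must be positive, execution_time_s non-negative")
    outcome = (1.0 if success else 0.0) + (1.0 if satisfied else 0.0)
    cost = tokens_used * (1.0 + execution_time_s)
    return 1000.0 * outcome / cost

# A lean prompt outranks a token-hungry one with the same outcome
lean = efficiency_index(tokens_used=150, execution_time_s=2.0,
                        success=True, satisfied=True)
heavy = efficiency_index(tokens_used=15000, execution_time_s=2.0,
                         success=True, satisfied=True)
print(lean > heavy)  # → True
```

Normalizing by both tokens and time keeps the leaderboard from favoring prompts that execute quickly only because they burn more tokens up front.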
After launching the app:
- Use the sidebar to navigate
- Record new prompts
- View analytics
- Explore insights
- Compare prompt variations
We welcome contributions from the community! Please see the CONTRIBUTING.md file for guidelines on how to contribute to this project.