Releases: hegelai/prompttools
prompttools 0.0.45 - Introducing Observability features!
Launch of PromptTools Observability (Private Beta)
We're excited to announce the addition of observability features to our hosted platform. It allows your team to monitor and evaluate your production usage of LLMs with just a one-line code change:
```python
import prompttools.logger
```
The new features are integrated with our open-source library as well as the PromptTools playground. Our goal is to enable you to deploy LLM applications reliably and quickly, and to observe any issues in real time.
If you are interested in trying out the platform, please reach out to us.
We remain committed to expanding this open-source library, and we look forward to building more development tools that enable you to iterate faster with AI models. Please have a look at our open issues to see what features are coming.
Major Feature Updates
OpenAI API Updates
- We have updated various experiments and examples to use OpenAI's latest features and Python API
- Make sure you are using `openai` version 1.0+
Moderation API
- We have integrated with OpenAI's moderation API as an eval function
- This allows you to check whether your experiments' responses (from any LLM) violate content moderation policies (such as violence or harassment).
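Conceptually, a moderation eval is a thin wrapper around OpenAI's moderation endpoint. The sketch below is illustrative rather than the library's exact built-in; the `response` column name and the 0/1 scoring convention are assumptions.

```python
# Minimal sketch of a moderation-style eval function (illustrative, not the
# exact prompttools built-in). It scores one response row by calling OpenAI's
# moderation endpoint and returns 0.0 if the content is flagged.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def moderation_eval(row: dict, **kwargs) -> float:
    result = client.moderations.create(input=row["response"])  # assumed column name
    return 0.0 if result.results[0].flagged else 1.0
```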
Hosted APIs
- Production logging API
- Contact us if you would like to get started with our hosted observability features!
Community
If you have suggestions on the API or use cases you'd like covered, please open a GitHub issue. We'd love to hear your thoughts and feedback. As always, we welcome new contributors to our repo, and we have a few good first issues to get you started.
Full Changelog: v0.0.41...v0.0.45
prompttools 0.0.41 - Hosted Playground Launch
Launch of PromptTools Playground (Private Beta)
We're excited to announce the private beta of PromptTools Playground! It is a hosted platform integrated with our open-source library. It persists your experiments with version control and provides collaboration features suited for teams.
If you are interested in trying out the platform, please reach out to us. We remain committed to expanding this open-source library and look forward to building more development tools that enable you to iterate faster with AI models.
Major Feature Updates
New Harnesses
- `ChatPromptTemplateExperimentationHarness`
- `ModelComparisonHarness`
Experimental APIs
- `run_one` and `run_partial` for `OpenAIChatExperiment`
- You no longer have to re-run the entire experiment! You can now partially execute just the parameter combinations you care about.
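A rough usage sketch follows; the `run_partial` call below is an assumption based on this description rather than a verified signature, so check the example notebooks for the exact API.

```python
# Illustrative sketch only: the run_partial arguments are assumptions based
# on the release notes, not a verified signature.
from prompttools.experiment import OpenAIChatExperiment

messages = [[{"role": "user", "content": "Who was the first U.S. president?"}]]
experiment = OpenAIChatExperiment(
    model=["gpt-3.5-turbo"],
    messages=messages,
    temperature=[0.0, 1.0],
)
experiment.run()  # full run over every argument combination

# Later, execute only a new combination instead of re-running everything:
experiment.run_partial(temperature=1.5)  # hypothetical argument
```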
Hosted APIs
- Save, load, and share your experiments through our hosted playground with `save_experiment` and `load_experiment`
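Saving and loading might look roughly like the following; the name argument and the classmethod-style loader are assumptions rather than the confirmed hosted API.

```python
# Hypothetical usage sketch: the name argument and the classmethod-style
# loader are assumptions, not the verified hosted API.
from prompttools.experiment import OpenAIChatExperiment

experiment = OpenAIChatExperiment(
    model=["gpt-3.5-turbo"],
    messages=[[{"role": "user", "content": "Hello!"}]],
)
experiment.run()
experiment.save_experiment("my-chat-experiment")  # persist to the hosted playground

loaded = OpenAIChatExperiment.load_experiment("my-chat-experiment")
```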
Community
If you have suggestions on the API or use cases you'd like covered, please open a GitHub issue. We'd love to hear your thoughts and feedback. As always, we welcome new contributors to our repo, and we have a few good first issues to get you started.
prompttools 0.0.35
Major Feature Updates
New APIs
- Google Vertex AI
- Azure OpenAI Service
- Replicate
- Stable Diffusion
- Pinecone
- Qdrant
- Retrieval-Augmented Generation (RAG)
Utility Functions
- `chunk_text`
- `autoeval_with_documents`
- `structural_similarity`
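These utilities support the RAG workflow above; for example, `chunk_text` splits a document into pieces that fit a retrieval context. A rough sketch of how it might be called (the parameter name is an assumption):

```python
# Illustrative sketch: the exact signature of chunk_text is not shown in the
# release notes, so the max_chunk_length parameter name is an assumption.
from prompttools.utils import chunk_text

document = "Retrieval-augmented generation pairs a retriever with an LLM. " * 20
chunks = chunk_text(document, max_chunk_length=200)
print(f"{len(chunks)} chunks; first: {chunks[0]!r}")
```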
Community
Shout out to @HashemAlsaket, @bweber-rebellion, @imalsky, @kacperlukawski for actively participating and contributing new features!
If you have suggestions on the API or use cases you'd like covered, please open a GitHub issue. We'd love to hear your thoughts and feedback. As always, we welcome new contributors to our repo, and we have a few good first issues to get you started.
If you are interested in a hosted version of `prompttools` with more features for your team, please reach out.
prompttools 0.0.33
Major Feature Updates
New APIs
- `OpenAIChatExperiment` can now call functions (see the sketch after this list)
- LangChain Sequential Chain
- LangChain Router Chain
- LanceDB
- Initial support for benchmarking (with HellaSwag)
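For function calling, the sketch below uses OpenAI's standard function-spec format, but how `OpenAIChatExperiment` receives the specs (the list-of-lists `functions` argument) is an assumption; see the updated example notebooks for the exact usage.

```python
# Illustrative sketch of function calling with OpenAIChatExperiment; the way
# the experiment accepts function specs below is an assumption, not verified
# against the library.
from prompttools.experiment import OpenAIChatExperiment

weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"},
        },
        "required": ["location"],
    },
}

messages = [[{"role": "user", "content": "What is the weather in Boston?"}]]
experiment = OpenAIChatExperiment(
    model=["gpt-3.5-turbo-0613"],
    messages=messages,
    functions=[[weather_function]],  # assumed: list-of-lists, like other arguments
)
experiment.run()
```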
Other Improvements
There are also many fixes and improvements we made to different experiments. Notably, we refactored how `evaluate` works. In this version, the evaluation function passed into `experiment.evaluate()` should handle a row of data plus other optional keyword arguments. Please see our updated example notebooks as references.
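For instance, a minimal eval under the new convention might look like this; the `response` column and the metric name are illustrative, not the library's exact built-ins.

```python
# Minimal sketch of the new convention: evaluate() takes a metric name plus a
# function that scores one row of results. The column name is an assumption.
def response_length(row, **kwargs) -> int:
    return len(row["response"])

# Given an already-run experiment (e.g., from any example above):
experiment.evaluate("response_length", response_length)
```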
Playground
The playground now supports shareable links. You can use the `Share` button to create a link and share your experiment setup with your teammates.
Community
Shout out to @HashemAlsaket, @AyushExel, @pramitbhatia25, and @mmmaia for actively participating and contributing new features!
If you have suggestions on the API or use cases you'd like covered, please open a GitHub issue. We'd love to hear your thoughts and feedback. As always, we welcome new contributors to our repo, and we have a few good first issues to get you started.
prompttools 0.0.22
Major features added recently:
New APIs:
- Anthropic Claude
- Google PaLM
- Chroma
- Weaviate
- MindsDB
Playground
If you would like to execute your experiments in a Streamlit UI rather than in a notebook, you can do so with:
```bash
pip install prompttools
git clone https://github.com/hegelai/prompttools.git
cd prompttools && streamlit run prompttools/playground/playground.py
```
Community
Shout out to @HashemAlsaket for actively participating and contributing new features!
If you have suggestions on the API or use cases you'd like covered, please open a GitHub issue. We'd love to hear your thoughts and feedback. As always, we welcome new contributors to our repo, and we have a few good first issues to get you started.