All notable changes to this project will be documented in this file.
- New Environment Variables for OpenAI proxies: OPENAI_BASE_URL (LiteLLM support) (weaviate/Verba#56); see the configuration sketch at the end of this file
- Docker also installs HuggingFace stack (weaviate/Verba#84)
- Fix Docker Default Vectorizer (weaviate/Verba#50)
- Fix requirements.txt spelling error
- PDFReader powered by PyPDF2
- TokenChunker powered by tiktoken (a usage sketch for both appears at the end of this file)
- Ruff Linting (set as pre-commit)
- Markdown Formatting for chat messages (weaviate/Verba#48)
- Added missing dependencies
- Fixed restart bug
- Fixed MiniLM CUDA to_device bug (weaviate/Verba#41)
- Fixed Config Issues (weaviate/Verba#51)
- Fixed Weaviate Embedded Headers for Cohere
- Refactored into a modular architecture (an interface sketch appears at the end of this file)
- Add ability to import data through the frontend, CLI, and script
- Add Readers (SimpleReader, PathReader, GithubReader, PDFReader)
- Add Chunkers (WordChunker, SentenceChunker)
- Add Embedders (ADAEmbedder, SentenceTransformer, Cohere)
- Add Generators (GPT3, GPT4, Llama, Cohere)
- Status Page
- Reset functionality
- Streaming Token Generation
- Lazy Document Loading
- Add Copy and Cached Tag
- Improved Semantic Cache
- Added Llama 2 and Cohere support
- Added new OpenAI models
- Improved Documentation
- Added technical docs and contribution guidelines
- Error handling for data ingestion (handling chunk size)
- Schema handling on startup
- Removed SimpleEngine and AdvancedEngine logic
- OpenAI API documentation example dataset
- First version of Verba released! (many more to come :)
- Verba favicon
- Add static files to package
- Fixed Weaviate Embedded not shutting down
- Prepare Verba for first release
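The OPENAI_BASE_URL variable lets Verba reach OpenAI-compatible proxies such as a LiteLLM server. Below is a minimal sketch of how such a variable is typically consumed with the pre-1.0 `openai` Python client; the fallback URL and the `gpt-3.5-turbo` model name are illustrative assumptions, not Verba's exact wiring.

```python
import os
import openai  # pre-1.0 client, assumed here for illustration

# Point the client at an OpenAI-compatible proxy (e.g. a LiteLLM server)
# when OPENAI_BASE_URL is set; otherwise keep the official endpoint.
openai.api_key = os.environ["OPENAI_API_KEY"]
openai.api_base = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name for the example
    messages=[{"role": "user", "content": "Hello from behind a proxy"}],
)
print(response["choices"][0]["message"]["content"])
```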
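The PDFReader and TokenChunker entries build on PyPDF2 and tiktoken. The sketch below shows the general idea, extracting a PDF's text and splitting it into overlapping token windows; the chunk size, overlap, and helper names are assumptions for illustration, not Verba's implementation.

```python
from PyPDF2 import PdfReader
import tiktoken


def read_pdf(path: str) -> str:
    """Extract plain text from every page of a PDF."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def chunk_by_tokens(text: str, chunk_size: int = 250, overlap: int = 50) -> list[str]:
    """Split text into overlapping windows of chunk_size tokens."""
    encoding = tiktoken.get_encoding("cl100k_base")
    tokens = encoding.encode(text)
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(tokens), step):
        window = tokens[start : start + chunk_size]
        chunks.append(encoding.decode(window))
    return chunks


chunks = chunk_by_tokens(read_pdf("example.pdf"))
print(f"{len(chunks)} chunks created")
```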
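The modular refactor organizes ingestion and retrieval around interchangeable Readers, Chunkers, Embedders, and Generators. The abstract interfaces below are a hypothetical illustration of that component split; the class and method names are assumptions and do not mirror Verba's actual code.

```python
from abc import ABC, abstractmethod


class Reader(ABC):
    """Loads raw documents from a source (files, GitHub repos, PDFs, ...)."""

    @abstractmethod
    def load(self, source: str) -> list[str]: ...


class Chunker(ABC):
    """Splits documents into smaller chunks (by words, sentences, or tokens)."""

    @abstractmethod
    def chunk(self, documents: list[str]) -> list[str]: ...


class Embedder(ABC):
    """Embeds chunks and stores them in the vector database (ADA, SentenceTransformers, Cohere)."""

    @abstractmethod
    def embed(self, chunks: list[str]) -> None: ...


class Generator(ABC):
    """Generates an answer from retrieved context (GPT-3/4, Llama, Cohere)."""

    @abstractmethod
    def generate(self, query: str, context: list[str]) -> str: ...
```

A pipeline then composes one implementation of each stage: a reader, chunker, and embedder at import time, and a generator over retrieved context at query time.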