V0.1 (#112)
v0.1 consists of 7 large language models and 2 inference libraries to run Rubra models. v0.0 is no longer supported, as the majority of that work has moved to https://github.com/gptscript-ai/gptscript

---------

Co-authored-by: Yingbei <[email protected]>
sanjay920 and tybalex committed Jul 1, 2024
1 parent abf872e commit dfd9a66
Showing 124 changed files with 1,707 additions and 17,037 deletions.
16 changes: 0 additions & 16 deletions .gitignore
@@ -228,22 +228,6 @@ zipalign*
**/node_modules
**/docs/yarn.lock

#Tauri
tauri/src-tauri/target
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

node_modules
dist
dist-ssr
*.local
**/.xwin-cache

# Editor directories and files
.vscode/*
!.vscode/extensions.json
9 changes: 9 additions & 0 deletions .gitmodules
@@ -0,0 +1,9 @@
[submodule "tools.cpp"]
path = tools.cpp
url = https://github.com/rubra-ai/tools.cpp.git
[submodule "vllm"]
path = vllm
url = https://github.com/rubra-ai/vllm.git
[submodule "rubra-tools"]
path = rubra-tools
url = https://github.com/rubra-ai/rubra-tools.git
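With tools.cpp, vllm, and rubra-tools now tracked as git submodules, a fresh checkout presumably needs them fetched before the extended inference servers can be built. A minimal sketch using standard git commands; the repository URL is inferred from this project's links and is an assumption, not something stated in the diff:

```bash
# Clone the main repo and pull in the submodules declared in .gitmodules
git clone --recursive https://github.com/rubra-ai/rubra.git
cd rubra

# Or, in an existing checkout, initialize and update them after pulling this commit
git submodule update --init --recursive
```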
2 changes: 1 addition & 1 deletion Makefile
@@ -67,4 +67,4 @@ validate-docs:
echo "Encountered dirty repo!"; \
git diff; \
exit 1 \
;fi
;fi
67 changes: 25 additions & 42 deletions README.md
@@ -1,59 +1,42 @@
-# Rubra
-
-Rubra is an open-source ChatGPT. It's designed for users who want:
-
-- **Multi-Model Support:** Rubra integrates with a variety of LLMs, including a local model optimized for Rubra, as well as models from OpenAI and Anthropic. More providers will be added in the future.
-- **Assistant Tools:** Create powerful assistants using tools for web search, knowledge retrieval, and more, all designed to augment your LLMs with the information they need to be truly helpful.
-- **OpenAI API Compatibility:** Use Rubra's OpenAI-compatible Assistants API, allowing you to use OpenAI's Python and JavaScript libraries to create and manage Assistants.
-- **Self-Hosting:** Keep your data private and secure by running Rubra on your own hardware.
-
-## Getting Started
+<p align="left">
+  <a href="README_CN.md">中文</a>&nbsp | &nbspEnglish&nbsp </a>
+</p>
+<br><br>

-### Prerequisites
+# Rubra

-- M-series Mac or Linux with GPU
-- On MacOS you need to have Xcode Command Line Tools installed: `xcode-select --install`
-- At least 16 GB RAM
-- At least 10 GB of available disk space
-- Docker and Docker Compose (>= v2.17.0) installed
+#### Rubra is a collection of open-weight, tool-calling LLMs.

-### Installation
+Rubra enhances the top open-weight large language models with tool-calling capability. The ability to call user-defined external tools in a deterministic manner while reasoning and chatting makes Rubra models ideal for agentic use cases.

-Rubra offers a simple one-command installation:
+All models are enhanced from the top open-source LLMs with further post-training and methods that effectively teach instruct-tuned models new skills while mitigating catastrophic forgetting. For easy use, we extend popular inferencing projects, allowing you to run Rubra models easily.

-```bash
-curl -sfL https://get.rubra.ai | sh -s -- start
-```
+## Enhanced Models

-After installation, access the Rubra UI at `http://localhost:8501` and start exploring the capabilities of your new ChatGPT-like assistant.
+| Enhanced Model | Context Length | Size | Parent Model Publisher |
+|-----------------------------------------------------------------------|----------------|------|------------------------|
+| [rubra-ai/Meta-Llama-3-8B-Instruct](https://huggingface.co/rubra-ai/Meta-Llama-3-8B-Instruct) | 8,000 | 8B | Meta |
+| [rubra-ai/Meta-Llama-3-70B-Instruct](https://huggingface.co/rubra-ai/Meta-Llama-3-70B-Instruct) | 8,000 | 70B | Meta |
+| [rubra-ai/gemma-1.1-2b-it](https://huggingface.co/rubra-ai/gemma-1.1-2b-it) | 8,192 | 2B | Google |
+| [rubra-ai/Mistral-7B-Instruct-v0.3](https://huggingface.co/rubra-ai/Mistral-7B-Instruct-v0.3) | 32,000 | 7B | Mistral |
+| [rubra-ai/Mistral-7B-Instruct-v0.2](https://huggingface.co/rubra-ai/Mistral-7B-Instruct-v0.2) | 32,000 | 7B | Mistral |
+| [rubra-ai/Phi-3-vision-128k-instruct](https://huggingface.co/rubra-ai/Phi-3-vision-128k-instruct)| 128,000 | 3B | Microsoft |
+| [rubra-ai/Qwen2-7B-Instruct](https://huggingface.co/rubra-ai/Qwen2-7B-Instruct) | 131,072 | 7B | Qwen |

-## Usage
+## Demo

-Here's a quick example of how to create an assistant using Rubra's API, compatible with OpenAI's libraries:
+Try out the models immediately without downloading anything in Our [Huggingface Spaces](https://huggingface.co/spaces/sanjay920/rubra-v0.1-dev)! It's free and requires no login.

-```python
-from openai import OpenAI
+## Run Rubra Models Locally

-client = OpenAI(
-    base_url="http://localhost:8000", # Rubra backend
-    api_key=""
-)
+We extend the following inferencing tools to run Rubra models in an OpenAI-compatible tool-calling format for local use:

-assistant = client.beta.assistants.create(
-    instructions="You are a customer support chatbot. Use your knowledge base to best respond to customer queries.",
-    model="rubra_local",
-    tools=[{"type": "retrieval"}],
-    file_ids=[client.files.create(file=open("knowledge.txt", "rb"), purpose='assistants').id]
-)
-```
+- [llama.cpp](https://github.com/ggerganov/llama.cpp)
+- [vllm](https://github.com/vllm-project/vllm)

## Contributing

-We welcome contributions from the developer community! Whether it's adding new features, fixing bugs, or improving documentation, your help is invaluable. Check out our [contributing guidelines](CONTRIBUTING.md) for more information on how to get involved.

-## Support

-If you encounter any issues or have questions, please file an issue on GitHub. For more detailed guidance and discussions, join our community on [Discord](https://discord.gg/swvAH2DXZH) or [Slack](https://slack.acorn.io) or start a [Github discussion](https://github.com/rubra-ai/rubra/discussions).
+Contributions to Rubra are welcome! We'd love to improve tool-calling capability in the models based on your feedback. Please open an issue if your tool doesn't work.

---

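The README changes above describe extended llama.cpp and vllm builds that serve Rubra models behind an OpenAI-compatible, tool-calling API. As a rough sketch of that workflow, not something taken from this commit: upstream vLLM exposes such a server via `python -m vllm.entrypoints.openai.api_server`, and the Rubra fork is presumably launched in a similar way (exact flags may differ in the fork):

```bash
# Hypothetical: serve one of the enhanced models from the table above on port 8000
python -m vllm.entrypoints.openai.api_server \
  --model rubra-ai/Meta-Llama-3-8B-Instruct \
  --port 8000
```

A client can then send a standard OpenAI-style chat request that includes a user-defined tool; the host, port, model name, and tool below are illustrative assumptions:

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "rubra-ai/Meta-Llama-3-8B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the weather like in Boston right now?"}
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_current_weather",
          "description": "Get the current weather for a city",
          "parameters": {
            "type": "object",
            "properties": {
              "city": {"type": "string", "description": "City name, e.g. Boston"}
            },
            "required": ["city"]
          }
        }
      }
    ]
  }'
```

If the model decides to call the tool, the response's `choices[0].message.tool_calls` entry should carry the function name and JSON-encoded arguments; the application runs the tool and returns the result as a `tool` role message so the model can finish its answer.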
52 changes: 52 additions & 0 deletions README_CN.md
@@ -0,0 +1,52 @@
<p align="left">
中文</a>&nbsp | &nbsp<a href="README.md">English</a>&nbsp</a>
</p>
<br><br>


# Rubra

#### Rubra is a family of open-weight LLMs focused on tool calling.

Rubra enhances the most popular open-weight large language models (LLMs) with tool-calling capability. Being able to call user-defined external tools in a reliable way while conversing with users makes Rubra models well suited to agent-related scenarios.

All models are based on popular instruct-tuned models and further fine-tuned to effectively teach or strengthen tool-calling ability while minimizing the loss of base capabilities and knowledge. For ease of use, we have extended popular local LLM deployment projects so that you can run Rubra models easily.

## The Rubra Model Family

| Model | Max Context Length | Size | Parent Model Publisher |
|---------------------------------------------------------------|----------------|------|----------------------|
| [rubra-ai/Meta-Llama-3-8B-Instruct](https://huggingface.co/rubra-ai/Meta-Llama-3-8B-Instruct) | 8,000 | 8B | Meta |
| [rubra-ai/Meta-Llama-3-70B-Instruct](https://huggingface.co/rubra-ai/Meta-Llama-3-70B-Instruct) | 8,000 | 70B | Meta |
| [rubra-ai/gemma-1.1-2b-it](https://huggingface.co/rubra-ai/gemma-1.1-2b-it) | 8,192 | 2B | Google |
| [rubra-ai/Mistral-7B-Instruct-v0.3](https://huggingface.co/rubra-ai/Mistral-7B-Instruct-v0.3) | 32,000 | 7B | Mistral |
| [rubra-ai/Mistral-7B-Instruct-v0.2](https://huggingface.co/rubra-ai/Mistral-7B-Instruct-v0.2) | 32,000 | 7B | Mistral |
| [rubra-ai/Phi-3-vision-128k-instruct](https://huggingface.co/rubra-ai/Phi-3-vision-128k-instruct)| 128,000 | 3B | Microsoft |
| [rubra-ai/Qwen2-7B-Instruct](https://huggingface.co/rubra-ai/Qwen2-7B-Instruct) | 131,072 | 7B | Qwen |

## Demo

You can try the models above for free on our [Huggingface Spaces](https://huggingface.co/spaces/sanjay920/rubra-v0.1-dev); no login required!

## Run Rubra Models Locally

We have extended the following deployment tools to run Rubra models locally with an OpenAI-style API:

- [llama.cpp](https://github.com/ggerganov/llama.cpp)
- [vllm](https://github.com/vllm-project/vllm)

## Contributing

You are welcome to take part in the further development of Rubra! We hope to improve the models' tool-calling capability based on your feedback. If your tool call does not work or produces an error, please open an issue describing the problem you encountered.

---

## License

Copyright (c) 2024 Acorn Labs, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at:

<http://www.apache.org/licenses/LICENSE-2.0>

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" basis, without warranties or conditions of any kind, either express or implied. See the License for the specific language governing permissions and limitations.
