chore: trunk upgrade and language fixes
ryanfoxtyler committed Jan 26, 2025
1 parent cd6418c commit 7994a30
Showing 4 changed files with 18 additions and 12 deletions.
8 changes: 4 additions & 4 deletions .trunk/trunk.yaml
@@ -18,11 +18,11 @@ runtimes:

 lint:
   enabled:
-    - renovate@39.109.0
+    - renovate@39.128.0
     - [email protected]
-    - [email protected].3
-    - [email protected].6
-    - [email protected].353
+    - [email protected].4
+    - [email protected].7
+    - [email protected].357
     - git-diff-check
     - [email protected]
     - [email protected]
2 changes: 1 addition & 1 deletion modus/app-manifest.mdx
@@ -106,7 +106,7 @@ Each connection has a `type` property, which controls how it's used and which
additional properties are available. The following table lists the available
connection types:

-| Type | Purpose | Functions Namespaces |
+| Type | Purpose | Function Classes |
| :----------- | :------------------------------- | :-------------------------- |
| `http` | Connect to an HTTP(S) web server | `http`, `graphql`, `models` |
| `postgresql` | Connect to a PostgreSQL database | `postgresql` |
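For context, the connection types in the table above are declared in the Modus app manifest. A minimal sketch of an `http` connection entry follows, assuming the manifest's JSON form; the connection name `example-api` and the base URL are illustrative placeholders, not values from this commit:

```json
{
  "connections": {
    "example-api": {
      "type": "http",
      "baseUrl": "https://api.example.com/"
    }
  }
}
```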
13 changes: 7 additions & 6 deletions modus/deepseek-model.mdx
@@ -6,17 +6,18 @@ mode: "wide"
---

 DeepSeek is an AI lab that has developed and released a series of open source
-LLMs that are notable for both their performance and cost-efficiency. By using a
-Mixture-of-Experts (MoE) system that utilizes only 37 billion of the models' 671
-billion parameters for any task, the DeepSeek-R1 model is able to achieve best
-in class performance at a fraction of cost of inference on other comparable
-models. In this guide we review how to leverage the DeepSeek models using Modus.
+large language models (LLM) that are notable for both their performance and
+cost-efficiency. By using a Mixture-of-Experts (MoE) system that utilizes only
+37 billion of the models' 671 billion parameters for any task, the DeepSeek-R1
+model is able to achieve best in class performance at a fraction of cost of
+inference on other comparable models. In this guide we review how to leverage
+the DeepSeek models using Modus.

## Options for using DeepSeek with Modus

There are two options for invoking DeepSeek models in your Modus app:

-1. [Use the distilled DeepSeek model hosted by Hypermode](#using-the-distilled-deepseek-model-hosted-by-Hypermode)
+1. [Use the distilled DeepSeek model hosted by Hypermode](#using-the-distilled-deepseek-model-hosted-by-hypermode)
Hypermode hosts and makes available the distilled DeepSeek model which can be
used by Modus apps developed locally and deployed to Hypermode
2. [Use the DeepSeek API with your Modus app](#using-the-deepseek-api-with-modus)
7 changes: 6 additions & 1 deletion styles/config/vocabularies/general/accept.txt
@@ -36,6 +36,7 @@ inferencing
LLM
[Mm]odus
namespace
+namespaces
nnClassify
npm
NQuads
@@ -56,4 +57,8 @@ upsert
URL|url
urql
UUID
-[Dd]eserialize
+[Dd]eserialize
+upsertBatch
+computeDistance
+getNamespaces
+timeLog
