diff --git a/.github/workflows/jekyll-gh-pages.yml b/.github/workflows/jekyll-gh-pages.yml
new file mode 100644
index 00000000..559bddf5
--- /dev/null
+++ b/.github/workflows/jekyll-gh-pages.yml
@@ -0,0 +1,51 @@
+# Sample workflow for building and deploying a Jekyll site to GitHub Pages
+name: Deploy Jekyll with GitHub Pages dependencies preinstalled
+
+on:
+  # Runs on pushes targeting the default branch
+  push:
+    branches: ["main"]
+
+  # Allows you to run this workflow manually from the Actions tab
+  workflow_dispatch:
+
+# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
+permissions:
+  contents: read
+  pages: write
+  id-token: write
+
+# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
+# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
+concurrency:
+  group: "pages"
+  cancel-in-progress: false
+
+jobs:
+  # Build job
+  build:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v3
+      - name: Setup Pages
+        uses: actions/configure-pages@v3
+      - name: Build with Jekyll
+        uses: actions/jekyll-build-pages@v1
+        with:
+          source: ./
+          destination: ./_site
+      - name: Upload artifact
+        uses: actions/upload-pages-artifact@v2
+
+  # Deployment job
+  deploy:
+    environment:
+      name: github-pages
+      url: ${{ steps.deployment.outputs.page_url }}
+    runs-on: ubuntu-latest
+    needs: build
+    steps:
+      - name: Deploy to GitHub Pages
+        id: deployment
+        uses: actions/deploy-pages@v2
diff --git a/README.md b/README.md
index b3f0aadd..886039ff 100644
--- a/README.md
+++ b/README.md
@@ -97,14 +97,14 @@ QQ Group for communication: 30920262
 * `--tokenizer`: Tokenizer path
 * `--port`: Running port
 * `--quant`: Specify the number of quantization layers
-* `--adepter`: Adapter (GPU and backend) selection options
+* `--adapter`: Adapter (GPU and backend) selection options
 
 ### Example
 
 The server listens on port 3000, loads the full-layer quantized (32 > 24) 0.4B model, and selects adapter 0 (to get the specific adapter number, you can first run without this parameter, and the program will enter the adapter selection page).
 
 ```bash
-$ cargo run --release -- --model assets/models/RWKV-4-World-0.4B-v1-20230529-ctx4096.st --port 3000 --quant 32 --adepter 0
+$ cargo run --release -- --model assets/models/RWKV-4-World-0.4B-v1-20230529-ctx4096.st --port 3000 --quant 32 --adapter 0
 ```
 
 ## 📙Currently Available APIs
diff --git a/README_jp.md b/README_jp.md
index fa76f9d6..f9d2976e 100644
--- a/README_jp.md
+++ b/README_jp.md
@@ -95,14 +95,14 @@ OpenAIのChatGPT APIインターフェースと互換性があります。
 * `--tokenizer`: トークナイザーのパス
 * `--port`: 実行ポート
 * `--quant`: 量子化レイヤーの数を指定
-* `--adepter`: アダプター(GPUおよびバックエンド)の選択オプション
+* `--adapter`: アダプター(GPUおよびバックエンド)の選択オプション
 
 ### 例
 
 サーバーはポート3000でリッスンし、全レイヤー量子化(32 > 24)の0.4Bモデルをロードし、アダプター0を選択します(特定のアダプター番号を取得するには、最初にこのパラメーターを追加せず、プログラムがアダプター選択ページに入るまで待ちます)。
 
 ```bash
-$ cargo run --release -- --model assets/models/RWKV-4-World-0.4B-v1-20230529-ctx4096.st --port 3000 --quant 32 --adepter 0
+$ cargo run --release -- --model assets/models/RWKV-4-World-0.4B-v1-20230529-ctx4096.st --port 3000 --quant 32 --adapter 0
 ```
 
 ## 📙現在利用可能なAPI
diff --git a/README_zh.md b/README_zh.md
index 42164881..c025b8f5 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -104,13 +104,13 @@
 - `--tokenizer`: 词表路径
 - `--port`: 运行端口
 - `--quant`: 指定量化层数
-- `--adepter`: 适配器(GPU和后端)选择项
+- `--adapter`: 适配器(GPU和后端)选择项
 
 ### 示例
 
 服务器监听3000端口,加载全部层量化(32 > 24)的0.4B模型,选择0号适配器(要查看具体适配器编号可以先不加该参数,程序会先进入选择页面)。
 
 ```bash
-$ cargo run --release -- --model assets/models/RWKV-4-World-0.4B-v1-20230529-ctx4096.st --port 3000 --quant 32 --adepter 0
+$ cargo run --release -- --model assets/models/RWKV-4-World-0.4B-v1-20230529-ctx4096.st --port 3000 --quant 32 --adapter 0
 ```
 
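With the `--adapter` spelling restored, the README example can be smoke-tested locally. This is a sketch only: the model path, port, and flags come from the README example above, while the `/v1/chat/completions` endpoint and request body are assumptions based on the README's stated OpenAI ChatGPT API compatibility, not confirmed from this diff.

```shell
# Launch the server in the background with the corrected --adapter flag
# (model path, port, and quantization values taken from the README example).
cargo run --release -- \
  --model assets/models/RWKV-4-World-0.4B-v1-20230529-ctx4096.st \
  --port 3000 --quant 32 --adapter 0 &

# Hypothetical probe: an OpenAI-style chat completion request, assuming the
# server exposes a compatible endpoint as the README suggests.
curl -s http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

Running once without `--adapter` first, as the README notes, prints the adapter selection page from which the correct index can be read off.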