[pull] main from mlc-ai:main #17

Open · wants to merge 5 commits into base `main`
8 changes: 3 additions & 5 deletions .github/workflows/build-site.yaml
@@ -16,6 +16,7 @@ jobs:
- name: Configuring build Environment
run: |
sudo apt-get update
python -m pip install -U pip

- name: Setup Ruby
uses: ruby/setup-ruby@v1
@@ -24,13 +25,10 @@ jobs:

- name: Installing dependencies
run: |
python -m pip install -r docs/requirements.txt
gem install jekyll jekyll-remote-theme jekyll-sass-converter

- name: Build site
run: |
cd site && jekyll b && cd ..

- name: Push to gh-pages branch
- name: Build and deploy site
if: github.ref == 'refs/heads/main'
run: |
git remote set-url origin https://x-access-token:${{ secrets.MLC_GITHUB_TOKEN }}@github.com/$GITHUB_REPOSITORY
17 changes: 16 additions & 1 deletion README.md
@@ -10,7 +10,7 @@
**High-Performance In-Browser LLM Inference Engine.**


[Get Started](#get-started) | [Blogpost](https://blog.mlc.ai/2024/06/13/webllm-a-high-performance-in-browser-llm-inference-engine) | [Examples](examples) | [Documentation](https://mlc.ai/mlc-llm/docs/deploy/webllm.html)
[Documentation](https://webllm.mlc.ai/docs/) | [Blogpost](https://blog.mlc.ai/2024/06/13/webllm-a-high-performance-in-browser-llm-inference-engine) | [Paper](https://arxiv.org/abs/2412.15803) | [Examples](examples)

</div>

@@ -455,6 +455,21 @@ This project is initiated by members from CMU Catalyst, UW SAMPL, SJTU, OctoML,

This project is only possible thanks to the shoulders of the open-source ecosystems that we stand on. We want to thank the Apache TVM community and the developers of the TVM Unity effort. Open-source ML community members made these models publicly available, and the PyTorch and Hugging Face communities make them accessible. We would like to thank the teams behind Vicuna, SentencePiece, LLaMA, and Alpaca, as well as the WebAssembly, Emscripten, and WebGPU communities. Finally, thanks to the Dawn and WebGPU developers.

## Citation
If you find this project useful, please cite:

```bibtex
@misc{ruan2024webllmhighperformanceinbrowserllm,
title={WebLLM: A High-Performance In-Browser LLM Inference Engine},
author={Charlie F. Ruan and Yucheng Qin and Xun Zhou and Ruihang Lai and Hongyi Jin and Yixin Dong and Bohan Hou and Meng-Shiun Yu and Yiyan Zhai and Sudeep Agarwal and Hangrui Cao and Siyuan Feng and Tianqi Chen},
year={2024},
eprint={2412.15803},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2412.15803},
}
```

## Contributors

<a href="https://github.com/mlc-ai/web-llm/graphs/contributors">
20 changes: 20 additions & 0 deletions docs/Makefile
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= python -m sphinx
SOURCEDIR = .
BUILDDIR = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
30 changes: 30 additions & 0 deletions docs/README.md
@@ -0,0 +1,30 @@
# WebLLM Documentation

The documentation is built with [Sphinx](https://www.sphinx-doc.org/en/master/).

## Dependencies

Run the following command in this directory to install dependencies first:

```bash
pip3 install -r requirements.txt
```

## Build the Documentation

Then you can build the documentation by running:

```bash
make html
```

## View the Documentation

Run the following command to start a simple HTTP server:

```bash
cd _build/html
python3 -m http.server
```

Then you can view the documentation in your browser at `http://localhost:8000` (customize the port by appending it to the python command above, e.g. `python3 -m http.server 8080`).
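For scripted workflows, the same thing can be done programmatically with the standard library. The `serve_docs` helper below is a hypothetical sketch (not part of WebLLM), assuming it is run from the `docs/` directory after `make html` has produced `_build/html`:

```python
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

def serve_docs(directory="_build/html", port=8000):
    """Return an HTTP server rooted at `directory` on `port` (not yet running)."""
    # Bind the handler to the docs build directory instead of the process cwd.
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    return ThreadingHTTPServer(("127.0.0.1", port), handler)

if __name__ == "__main__":
    server = serve_docs(port=8080)
    print(f"Serving docs at http://127.0.0.1:{server.server_address[1]}/")
    server.serve_forever()  # blocks; stop with Ctrl+C
```

This is equivalent to `python3 -m http.server` run from `_build/html`, but gives you a handle on the server object for use in scripts or tests.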
87 changes: 87 additions & 0 deletions docs/_static/img/mlc-logo-with-text-landscape.svg
102 changes: 102 additions & 0 deletions docs/conf.py
@@ -0,0 +1,102 @@
# -*- coding: utf-8 -*-
import os
import sys

import tlcpack_sphinx_addon

# -- General configuration ------------------------------------------------

sys.path.insert(0, os.path.abspath("../python"))
sys.path.insert(0, os.path.abspath("../"))
autodoc_mock_imports = ["torch"]

# General information about the project.
project = "web-llm"
author = "WebLLM Contributors"
copyright = "2023, %s" % author

# Version information.

version = "0.2.77"
release = "0.2.77"

extensions = [
    "sphinx_tabs.tabs",
    "sphinx_toolbox.collapse",
    "sphinxcontrib.httpdomain",
    "sphinx.ext.autodoc",
    "sphinx.ext.napoleon",
    "sphinx_reredirects",
]

redirects = {"get_started/try_out": "../index.html#getting-started"}

source_suffix = [".rst"]

language = "en"

exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"

# A list of ignored prefixes for module index sorting.
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False

# -- Options for HTML output ----------------------------------------------

# The theme is set by the make target
import sphinx_rtd_theme

html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]

templates_path = []

html_static_path = []

footer_copyright = "© 2023 MLC LLM"
footer_note = " "

html_logo = "_static/img/mlc-logo-with-text-landscape.svg"

html_theme_options = {
    "logo_only": True,
}

header_links = [
    ("Home", "https://webllm.mlc.ai/"),
    ("GitHub", "https://github.com/mlc-ai/web-llm"),
    ("Discord", "https://discord.gg/9Xpy2HGBuD"),
]

header_dropdown = {
    "name": "Other Resources",
    "items": [
        ("WebLLM Chat", "https://chat.webllm.ai/"),
        ("MLC Course", "https://mlc.ai/"),
        ("MLC Blog", "https://blog.mlc.ai/"),
        ("MLC LLM", "https://llm.mlc.ai/"),
    ],
}

html_context = {
    "footer_copyright": footer_copyright,
    "footer_note": footer_note,
    "header_links": header_links,
    "header_dropdown": header_dropdown,
    "display_github": True,
    "github_user": "mlc-ai",
    "github_repo": "web-llm",
    "github_version": "main/docs/",
    "theme_vcs_pageview_mode": "edit",
    # "header_logo": "/path/to/logo",
    # "header_logo_link": "",
    # "version_selecter": "",
}


# add additional overrides
templates_path += [tlcpack_sphinx_addon.get_templates_path()]
html_static_path += [tlcpack_sphinx_addon.get_static_path()]
6 changes: 6 additions & 0 deletions docs/developer/add_models.rst
@@ -0,0 +1,6 @@
Adding Models
=============

WebLLM allows you to compile custom language models with `MLC LLM <https://llm.mlc.ai/>`_ and then serve the compiled model through WebLLM.

For instructions on how to compile and add custom models to WebLLM, see the `MLC LLM documentation <https://llm.mlc.ai/docs/deploy/webllm.html>`_.
35 changes: 35 additions & 0 deletions docs/developer/building_from_source.rst
@@ -0,0 +1,35 @@
Building From Source
====================

Clone the Repository
---------------------
.. code-block:: bash

   git clone https://github.com/mlc-ai/web-llm.git
   cd web-llm

Install Dependencies
---------------------
.. code-block:: bash

   npm install

Build the Project
-----------------
.. code-block:: bash

   npm run build

Test Changes
------------

To test your changes, you can reuse an existing example or create a new one for the functionality you added.

Then, to test the effect of your code change in an example, change ``"@mlc-ai/web-llm": "^0.2.xx"`` to ``"@mlc-ai/web-llm": ../...`` inside ``examples/<example>/package.json`` so that it references your local code.

.. code-block:: bash

   cd examples/<example>
   # Modify the package.json
   npm install
   npm start
35 changes: 35 additions & 0 deletions docs/index.rst
@@ -0,0 +1,35 @@
👋 Welcome to WebLLM
====================

`GitHub <https://github.com/mlc-ai/web-llm>`_ | `WebLLM Chat <https://chat.webllm.ai/>`_ | `NPM <https://www.npmjs.com/package/@mlc-ai/web-llm>`_ | `Discord <https://discord.gg/9Xpy2HGBuD>`_

WebLLM is a high-performance in-browser language model inference engine that brings large language models (LLMs) to web browsers with hardware acceleration. With WebGPU support, it allows developers to build AI-powered applications directly within the browser environment, removing the need for server-side processing and ensuring privacy.

It provides a specialized runtime for the web backend of MLCEngine, leverages
`WebGPU <https://www.w3.org/TR/webgpu/>`_ for local acceleration, offers an OpenAI-compatible API,
and provides built-in support for web workers to separate heavy computation from the UI flow.

Key Features
------------
- 🌐 In-Browser Inference: Run LLMs directly in the browser
- 🚀 WebGPU Acceleration: Leverage hardware acceleration for optimal performance
- 🔄 OpenAI API Compatibility: Seamless integration with standard AI workflows
- 📦 Multiple Model Support: Works with Llama, Phi, Gemma, Mistral, and more

Start exploring WebLLM by `chatting with WebLLM Chat <https://chat.webllm.ai/>`_, and build web apps with high-performance local LLM inference using the guides and tutorials below.

.. toctree::
:maxdepth: 2
:caption: User Guide

user/get_started.rst
user/basic_usage.rst
user/advanced_usage.rst
user/api_reference.rst

.. toctree::
:maxdepth: 2
:caption: Developer Guide

developer/building_from_source.rst
developer/add_models.rst
35 changes: 35 additions & 0 deletions docs/make.bat
@@ -0,0 +1,35 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.https://www.sphinx-doc.org/
	exit /b 1
)

if "%1" == "" goto help

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd
8 changes: 8 additions & 0 deletions docs/requirements.txt
@@ -0,0 +1,8 @@
sphinx-tabs==3.4.1
sphinx-rtd-theme
sphinx==5.2.3
sphinx-toolbox==3.4.0
tlcpack-sphinx-addon==0.2.2
sphinxcontrib_httpdomain==1.8.1
sphinxcontrib-napoleon==0.7
sphinx-reredirects==0.1.2