Releases · xorbitsai/inference
v0.8.3
What's new in 0.8.3 (2024-02-02)
These are the changes in inference v0.8.3.
New features
- FEAT: add whisper.small and belle distilwhisper model, fix parameter in rerank by @zhanghx0905 in #944
- FEAT: Support jina-embeddings-v2-base-zh by @aresnow1 in #948
- FEAT: Support Yi VL by @codingl2k1 in #946
- FEAT: Support more embedding and rerank models by @aresnow1 in #959
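For the embedding and rerank additions in #948 and #959 above, a minimal usage sketch (not taken from this release; the local endpoint address and the response shape are assumptions) might look like this:
```python
# Hedged sketch (not from the release notes): trying the embedding support
# from #948/#959 via the Xinference Python client. The endpoint address and
# the exact response shape are assumptions.
from xinference.client import Client

client = Client("http://127.0.0.1:9997")  # assumes a locally running Xinference

# Launch the jina-embeddings-v2-base-zh model added in #948.
model_uid = client.launch_model(
    model_name="jina-embeddings-v2-base-zh",
    model_type="embedding",
)

model = client.get_model(model_uid)
result = model.create_embedding("你好，世界")  # embed a short Chinese sentence
# Assuming an OpenAI-style embedding response.
print(len(result["data"][0]["embedding"]))
```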
Enhancements
- ENH: Record gpu mem status in workers by @ChengjieLi28 in #941
- ENH: Allow chat max_tokens is None by @codingl2k1 in #960
- ENH: chatglm ggml format supports system_prompt by @ChengjieLi28 in #962
Bug fixes
- BUG: Fix roles in chat UI by @aresnow1 in #949
- BUG: Fix heartbeat by @codingl2k1 in #957
- BUG: Fix model's content length by @aresnow1 in #955
Documentation
- DOC: Update readme by @aresnow1 in #938
- DOC: Add image model doc by @codingl2k1 in #947
- DOC: Add audio model doc by @codingl2k1 in #954
- DOC: Reorganize model-related docs by @onesuper in #961
New Contributors
- @zhanghx0905 made their first contribution in #944
Full Changelog: v0.8.2...v0.8.3
v0.8.2
What's new in 0.8.2 (2024-01-26)
These are the changes in inference v0.8.2.
New features
- FEAT: Support events by @codingl2k1 in #916
- FEAT: Support audio model by @codingl2k1 in #929
- FEAT: Support orion series models by @aresnow1 in #933
- FEAT: Support Mixtral-8x7B-Instruct-v0.1-AWQ by @aresnow1 in #936
Enhancements
- ENH: Launch model by version by @ChengjieLi28 in #896
- ENH: Move multimodal to LLM by @codingl2k1 in #917
- ENH: InternLM2 chat template by @aresnow1 in #919
- ENH: Support use_fp16 for rerank model by @aresnow1 in #927 (usage sketch after this list)
- ENH: Record instance count and version count when listing model registrations in detail by @ChengjieLi28 in #920
- BLD: Resolve conflicts during installation by @aresnow1 in #924
- REF: Move auth code to service for better scalability by @ChengjieLi28 in #925
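A hedged sketch of how the use_fp16 option from #927 might be used. The model name, endpoint, and the assumption that extra launch kwargs are forwarded to the rerank model are illustrative, not taken from the release notes:
```python
# Sketch only: passing the use_fp16 option from #927 when launching a rerank
# model. Whether extra launch kwargs are forwarded to the rerank model this
# way is an assumption, as are the model name and endpoint.
from xinference.client import Client

client = Client("http://127.0.0.1:9997")

model_uid = client.launch_model(
    model_name="bge-reranker-base",
    model_type="rerank",
    use_fp16=True,  # assumed to be forwarded to the rerank model
)

model = client.get_model(model_uid)
result = model.rerank(
    documents=[
        "Xinference serves LLM, embedding and rerank models.",
        "A rerank model scores documents against a query.",
    ],
    query="What does a rerank model do?",
)
print(result)
```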
Documentation
- DOC: Update readme by @aresnow1 in #914
- DOC: Display contributors in readme by @onesuper in #915
- DOC: Merge multimodal to LLM by @codingl2k1 in #923
- DOC: Model usage guide by @onesuper in #926
- DOC: Audio doc by @codingl2k1 in #937
Full Changelog: v0.8.1...v0.8.2
v0.8.1
What's new in 0.8.1 (2024-01-19)
These are the changes in inference v0.8.1.
New features
- FEAT: Auto recover limit by @codingl2k1 in #893
- FEAT: Prometheus metrics exporter by @codingl2k1 in #906
- FEAT: Add internlm2-chat support by @aresnow1 in #913
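One possible way to try the newly supported internlm2-chat (#913) is Xinference's OpenAI-compatible endpoint; the base URL, API key handling, and model UID below are assumptions for illustration:
```python
# Illustrative only: chatting with the newly supported internlm2-chat (#913)
# through Xinference's OpenAI-compatible endpoint. Base URL, API key handling
# and the model UID are assumptions.
import openai

client = openai.OpenAI(
    base_url="http://127.0.0.1:9997/v1",
    api_key="not-used",  # assumed: ignored when auth is not enabled
)

response = client.chat.completions.create(
    model="internlm2-chat",  # assumed UID of the launched model
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
)
print(response.choices[0].message.content)
```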
Enhancements
- ENH: Launch model asynchronously by @ChengjieLi28 in #879
- ENH: qwen vl modelscope by @codingl2k1 in #902
- ENH: Add "tools" in model ability by @aresnow1 in #904
- ENH: Add quantization support for qwen chat by @aresnow1 in #910
Bug fixes
- BUG: Fix prompt template of chatglm3-32k by @aresnow1 in #889
- BUG: Invalid volume in docker compose yml by @ChengjieLi28 in #890
- BUG: Revert #883 by @aresnow1 in #903
- BUG: Fix chatglm backend by @codingl2k1 in #898
- BUG: Fix tool calls on custom model by @codingl2k1 in #899
- BUG: Fix is_valid_model_name by @aresnow1 in #907
Documentation
- DOC: Update the documentation about use of docker by @aresnow1 in #901
- DOC: Add FAQ in troubleshooting.rst by @sisuad in #911
New Contributors
- @sisuad made their first contribution in #911
Full Changelog: v0.8.0...v0.8.1
v0.8.0
What's new in 0.8.0 (2024-01-11)
These are the changes in inference v0.8.0.
New features
- FEAT: qwen 1.8b gptq by @codingl2k1 in #869
- FEAT: docker compose support by @Minamiyama in #868
- FEAT: Simple OAuth2 system by @ChengjieLi28 in #793
- FEAT: Chat vl web UI by @codingl2k1 in #882
- FEAT: Yi chat gptq by @codingl2k1 in #876
Enhancements
- ENH: Stream use xoscar generator by @codingl2k1 in #859
- ENH: UI supports registering custom gptq models by @ChengjieLi28 in #875
- ENH: Make the size param of *_to_image more compatible by @liunux4odoo in #881 (usage sketch after this list)
- BLD: Update package-lock.json by @aresnow1 in #886
- REF: Add model_hub property in EmbeddingModelSpec by @aresnow1 in #877
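A sketch of a text_to_image call, whose size parameter #881 made more tolerant. The model name, size string, and response shape are assumptions, not guaranteed by this release:
```python
# Hedged sketch of a text_to_image call, whose size parameter #881 made more
# tolerant. Model name, size string and response shape are assumptions.
from xinference.client import Client

client = Client("http://127.0.0.1:9997")

model_uid = client.launch_model(model_name="sd-turbo", model_type="image")
model = client.get_model(model_uid)

image_response = model.text_to_image(
    prompt="an origami crane on a wooden desk",
    size="512*512",  # assumed format; #881 aims to accept more spellings
)
# Assuming an OpenAI-style image response with a "data" list.
print(image_response["data"][0])
```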
Bug fixes
- BUG: Fix image model b64_json output by @codingl2k1 in #874
- BUG: fix libcuda.so.1: cannot open shared object file by @superhuahua in #883
- BUG: Fix auto recover kwargs by @codingl2k1 in #885
Documentation
- DOC: docker image translation by @aresnow1 in #865
- DOC: Register model with model_family by @ChengjieLi28 in #863
- DOC: Add OpenAI Client API doc by @codingl2k1 in #864
- DOC: add docker instructions by @onesuper in #878
New Contributors
- @superhuahua made their first contribution in #883
Full Changelog: v0.7.5...v0.8.0
v0.7.5
What's new in 0.7.5 (2024-01-05)
These are the changes in inference v0.7.5.
New features
- FEAT: text2vec by @ChengjieLi28 in #857
Enhancements
- ENH: Offload all response serialization to ModelActor by @codingl2k1 in #837
- ENH: Custom model uses vLLM by @ChengjieLi28 in #861
- BLD: Docker image by @ChengjieLi28 in #855
Bug fixes
- BUG: Fix typing_extension version problem in notebook by @onesuper in #856
- BUG: Fix multimodal cmdline by @codingl2k1 in #850
- BUG: Fix generate of chatglm3 by @aresnow1 in #858
Documentation
- DOC: CUDA Version recommendation by @ChengjieLi28 in #841
- DOC: new doc cover by @onesuper in #843
- DOC: Autogen modelhub info by @onesuper in #845
- DOC: Add multimodal feature in README by @onesuper in #846
- DOC: Chinese doc for user guide by @aresnow1 in #847
- DOC: add notebook for quickstart by @onesuper in #854
- DOC: Add docs about environments by @aresnow1 in #853
- DOC: Add jupyter notebook quick start tutorial by @onesuper in #851
Others
- CHORE: Add docker image with latest tag by @ChengjieLi28 in #862
Full Changelog: v0.7.4.1...v0.7.5
v0.7.4.1
What's new in 0.7.4.1 (2023-12-29)
These are the changes in inference v0.7.4.1.
Documentation
- DOC: Multimodal example by @codingl2k1 in #842
Full Changelog: v0.7.4...v0.7.4.1
v0.7.4
What's new in 0.7.4 (2023-12-29)
These are the changes in inference v0.7.4.
New features
- FEAT: Support sd-turbo by @codingl2k1 in #797
- FEAT: Support Skywork models by @Minamiyama in #809
- FEAT: Support sdxl-turbo by @codingl2k1 in #816
- FEAT: Supports registering rerank models by @ChengjieLi28 in #825
- FEAT: Support Phi-2 by @Bojun-Feng in #828
- FEAT: Support vllm gptq by @codingl2k1 in #832 (launch sketch after this list)
- FEAT: Support qwen vl chat by @codingl2k1 in #829
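A sketch of launching a GPTQ-quantized model, which #832 lets the vLLM backend serve. The size and quantization labels below are guesses for illustration, not values taken from the release notes:
```python
# Sketch of launching a GPTQ-quantized model, which #832 lets the vLLM
# backend serve. The size and quantization labels are guesses for
# illustration, not values taken from the release notes.
from xinference.client import Client

client = Client("http://127.0.0.1:9997")

model_uid = client.launch_model(
    model_name="qwen-chat",
    model_format="gptq",
    model_size_in_billions=7,
    quantization="Int4",  # assumed quantization label
)
print(f"launched: {model_uid}")
```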
Enhancements
- ENH: Custom model can use tool calls by @codingl2k1 in #818
- ENH: Replace uuid with model name for model_uid by @ChengjieLi28 in #831
Bug fixes
- BUG: Error when checking model_uid & model_name in restful_api by @liunux4odoo in #803
- BUG: launch method exception (#807) by @auxpd in #808
- BUG: Model description does not support Chinese from UI registering by @ChengjieLi28 in #812
- BUG: Find correct class for customized model by @sarsmlee in #835
Documentation
- DOC: add function calling in read me by @onesuper in #804
- DOC: Chinese documents for Logging and Models parts by @ChengjieLi28 in #650
- DOC: Remove version limit of sphinx by @qinxuye in #820
- DOC: polish function call description by @onesuper in #821
- DOC: add switcher.json by @qinxuye in #822
- DOC: add document language switcher by @qinxuye in #823
- DOC: use wechat and zhihu for zh doc by @qinxuye in #824
- DOC: Chinese doc for getting started by @aresnow1 in #833
- DOC: simplify entry doc by @onesuper in #826
- DOC: Chinese doc for example part by @ChengjieLi28 in #838
New Contributors
- @liunux4odoo made their first contribution in #803
- @auxpd made their first contribution in #808
- @sarsmlee made their first contribution in #835
Full Changelog: v0.7.3.1...v0.7.4
v0.7.3.1
What's new in 0.7.3.1 (2023-12-22)
These are the changes in inference v0.7.3.1.
Bug fixes
- BUG: Worker fails to start on Windows by @ChengjieLi28 in #800
Full Changelog: v0.7.3...v0.7.3.1
v0.7.3
What's new in 0.7.3 (2023-12-22)
These are the changes in inference v0.7.3.
New features
- FEAT: Support OpenHermes 2.5 by @Bojun-Feng in #776
- FEAT: Support deepseek models by @aresnow1 in #786
- FEAT: Support tool message by @codingl2k1 in #794
- FEAT: Support Mixtral-8x7B-v0.1 models by @Bojun-Feng in #782
- FEAT: Support mistral instruct v0.2 by @aresnow1 in #796
Enhancements
- ENH: Enable streaming on Ctransformer by @Bojun-Feng in #784
- ENH: vLLM backend supports tool calls by @codingl2k1 in #785
- ENH: qwen switches to llama.cpp by @codingl2k1 in #778
- ENH: [UI] register custom embedding model by @ChengjieLi28 in #791
Bug fixes
- BUG: UI crash on search when model_format and model_size have been selected by @Bojun-Feng in #772
- BUG: When changing the XINFERENCE_HOME env, the model files are still stored where they were by @ChengjieLi28 in #777
- BUG: Remove the modelscope import by @aresnow1 in #788
- BUG: When terminating the worker with Ctrl+C, the supervisor does not remove worker information by @ChengjieLi28 in #779
- BUG: Xinference does not release the custom model name when registration fails by @ChengjieLi28 in #790
Documentation
- DOC: Update readme by @aresnow1 in #743
- DOC: Update FunctionCall.ipynb by @codingl2k1 in #773
Full Changelog: v0.7.2...v0.7.3
v0.7.2
What's new in 0.7.2 (2023-12-15)
These are the changes in inference v0.7.2.
New features
- FEAT: Supports qwen-chat 1.8B by @ChengjieLi28 in #757
- FEAT: Support gorilla openfunctions v1 by @codingl2k1 in #760
- FEAT: qwen function call by @codingl2k1 in #763
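A hedged illustration of the qwen function calling added in #763, going through the OpenAI-compatible endpoint. The tool definition is hypothetical and the model UID is an assumption:
```python
# Hedged illustration of the qwen function calling added in #763, via the
# OpenAI-compatible endpoint. The tool definition is hypothetical and the
# model UID is an assumption.
import openai

client = openai.OpenAI(base_url="http://127.0.0.1:9997/v1", api_key="not-used")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, not part of Xinference
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="qwen-chat",  # assumed UID of a launched qwen-chat model
    messages=[{"role": "user", "content": "What is the weather in Hangzhou?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```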
Enhancements
- ENH: Handle tool call failed by @codingl2k1 in #767
Bug fixes
- BUG: [UI] Fix model size selection crash issue by @ChengjieLi28 in #764
Documentation
- DOC: Fix model_uri missing in Custom Models by @ChengjieLi28 in #759
Full Changelog: v0.7.1...v0.7.2