Releases · InternLM/xtuner
XTuner Release V0.1.4
XTuner Release V0.1.3
What's Changed
- [Feature] Add Baichuan2 7B-chat, 13B-base, 13B-chat by @LZHgrla in #103
- [Fix] Use `token_id` instead of `token` for `encode_fn` & Set eval mode before generate by @LZHgrla in #107
- [Feature] Support log processed dataset & Fix doc by @HIT-cwh in #101
- [Fix] move toy data by @HIT-cwh in #108
- bump version to 0.1.3 by @HIT-cwh in #109
Full Changelog: v0.1.2...v0.1.3
XTuner Release V0.1.2
What's Changed
- [Doc] Fix dataset docs by @HIT-cwh in #87
- [Doc] Fix readme by @HIT-cwh in #92
- [Improve] Add ZeRO2-offload configs by @LZHgrla in #94
- [Improve] Redesign convert tools by @LZHgrla in #96
- [Fix] fix generation config by @HIT-cwh in #98
- [Feature] Support Baichuan2 models by @LZHgrla in #102
- bump version to 0.1.2 by @LZHgrla in #100
Full Changelog: v0.1.1...v0.1.2
XTuner Release V0.1.1
What's Changed
- [Doc] Update WeChat image by @LZHgrla in #74
- [Doc] Modify install commands for DeepSpeed integration by @LZHgrla in #75
- Add bot: Create .owners.yml by @del-zhenwu in #81
- [Improve] Add several InternLM-7B full parameters fine-tuning configs by @LZHgrla in #84
- [Feature] Add starcoder example by @HIT-cwh in #83
- [Doc] Add data_prepare.md docs by @LZHgrla in #82
- bump version to 0.1.1 by @HIT-cwh in #85
New Contributors
- @del-zhenwu made their first contribution in #81
Full Changelog: v0.1.0...v0.1.1
XTuner Release V0.1.0
Changelog
v0.1.0 (2023.08.30)
XTuner is released! 🔥🔥🔥
Highlights
- XTuner supports LLM fine-tuning on consumer-grade GPUs. The minimum GPU memory required for 7B LLM fine-tuning is only 8GB.
- XTuner supports various LLMs, datasets, algorithms, and training pipelines.
- Several fine-tuned adapters are released simultaneously, covering a variety of use cases such as the colorist LLM, the plugins-based LLM, and many more. For further details, please visit XTuner on HuggingFace!