
A set of Bash scripts to automate the deployment of GGML/GGUF models [default: RWKV] using KoboldCpp on Android (Termux)


AltaeraAI

GitHub All Releases (download counter badges)

Note: the number of downloads shown is only for the pre-packaged KoboldCpp and also includes downloads resulting from updates containing package upgrades.

Notice: AltaeraAI periodically undergoes heavy changes that can disrupt the installation process. If this disruption occurs, please try the installation a little later.


What is it?

AltaeraAI is a Termux wrapper that packages multiple AI back-ends and front-ends for native use on Android devices, currently including:

What is it about?

AltaeraAI is a free and open-source solution designed to bring the power of AI directly to your smartphone. It enables you to run GGML/GGUF models with ease, even if you're not familiar with programming or command-line tools.

By simplifying the installation process, it sets up Artix Linux (a lightweight operating system) within a special environment called PRoot Distro. This approach ensures you can quickly get everything you need—without dealing with complex setup steps or dependencies.
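
For reference, here is a minimal sketch of the kind of PRoot Distro steps the installer automates, assuming the pre-packaged-backup method described in the changelog below; the archive name and distro alias are illustrative, and the actual steps AltaeraAI performs may differ:

    # Hypothetical sketch only; file and distro names are illustrative,
    # not AltaeraAI's actual ones.
    pkg install proot-distro                    # Termux's PRoot Distro manager
    proot-distro restore altaera-artix.tar.xz   # restore a pre-packaged Artix Linux rootfs
    proot-distro login artix                    # enter the Artix Linux environment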

The tool also streamlines the installation of both AI backends and frontends for easy deployment. Additionally, it automatically configures your bash.bashrc file, adding a simple shortcut—just type ae to access the main menu.
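
As an illustration, the shortcut amounts to something like the following line in Termux's bash.bashrc (a sketch only; the exact script the alias invokes is hypothetical here):

    # Hypothetical example of the "ae" shortcut in $PREFIX/etc/bash.bashrc;
    # the real entry added by AltaeraAI may invoke a different script.
    alias ae='bash ~/altaera/menu.sh'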

AltaeraAI also serves as a platform providing easy instructions and support for AI deployment on Android devices via Termux. You can read more at: altaera.ai

Changelog
* v6.0.3 - fixed the link to RWKV7-Goose-World3-2.9B
* v6.0.2 - updated the pre-packaged KoboldCpp to experimental/v1.88; the experimental branch included a fix for RWKV related to a broken recurrent cache component in upstream llama.cpp
* v6.0.1 - added RWKV7-Goose-World weights to the list of models
* v6.0.0 - updated the pre-packaged KoboldCpp to v1.87.4
* v5.7.9 - updated the pre-packaged KoboldCpp to v1.77
* v5.7.8 - updated the pre-packaged KoboldCpp to v1.76
* v5.7.7 - added RWKV-6-Finch and RWKV-6-World weights to the list of models
* added option to compile experimental KoboldCpp
* removed OpenBLAS, which was replaced by the "llamafile" library - it is now set by default
* v5.7.6 - updated the pre-packaged KoboldCpp to v1.75.2
* added the ability to back up stories to the end-to-end encrypted MEGA cloud service
* v5.7.5 - updated the pre-packaged KoboldCpp to v1.74
* v5.7.4.2 - Hotfix: fixed code mistakes related to the "Manage AI Back-ends" option
* v5.7.4.1 - Hotfix: fixed the issue related to the new version of pre-packaged KoboldCpp not being recognised as an update
* v5.7.4 - updated the pre-packaged KoboldCpp to v1.73.1
* added initial SillyTavern support
* added "Manage AI Back-ends" to the MENU
* v5.7.3 - updated the pre-packaged KoboldCpp to v1.73
* a mechanism has been added so that when updating AltaeraAI, the pre-packaged KoboldCpp will update only if it was previously installed
* added ability to use “maid” as an external front-end for ollama
* various fixes and expanded ollama support
* v5.7.2 – added initial ollama support
* v5.7.1 - added Gemma-2-2B-it and Gemma-2-2B-it-abliterated weights to the list of models
* v5.7 - updated to koboldcpp-1.72
* rewrote the installation script to be more automated and user-friendly; added visual enhancements and fixed related bugs
* v5.6 - updated to koboldcpp-1.71.1
* v5.5.2 - fixed a bug that caused the “Functional Status” check to always inform the user of technical difficulties
* v5.5.1 - added a “Functional Status” check, which notifies the user of ongoing technical difficulties (as determined by the repository owner) that may affect users who have recently installed or upgraded AltaeraAI, due to its rolling-release lifecycle
* v5.5 - updated to koboldcpp-1.70.1
* v5.4.2 - introduced a "very fast" installation method that drastically shortens the process: instead of installing the PRoot Distro environment from scratch, it restores a pre-packaged backup. It is now the default installation method
* v5.4.1 - added an automatic Termux update check, which informs the user of an available update and asks whether Termux was originally installed from GitHub or from F-Droid before proceeding (this is necessary for the update to work). This functionality is enabled by default (as are automatic update & file-integrity checks), but can be disabled under settings
* v5.4 - updated to koboldcpp-1.69.1
* added Gemma-2-9B-it weights to the list of models
* v5.3 - updated to koboldcpp-1.69
* v5.2.3 - fixed a bug in the file-integrity checking mechanism that reported missing files even when none were missing (especially after a fresh installation)
* added an initial fallback for when the PRoot Distro environment fails to install during the installation process
* v5.2.2 - added the ability to enter a custom value for Context & Blas Batch Size, in addition to the fixed sizes
* various visual improvements and fixes to the MENU
* v5.2.1 - introduced "File Integrity checks", which run alongside the update-checking mechanism to detect missing files that could impact AltaeraAI's functionality; if files are missing, the user is asked whether to carry out a repair. This functionality is enabled by default (as are automatic update checks), but can be disabled under settings
* visual fixes and improvements, changes to the MENU
* v5.2 – updated to koboldcpp-1.68
* added the “Horde” option to the MENU, which utilises AI Horde to share your processing power (an AI model) with users worldwide
* minor aesthetic changes and fixes to the MENU
* v5.1.2 - fixed an issue where the pre-packaged KoboldCpp was not downloaded after the switch to an organisational repository
* added a pre-launch check (when starting KoboldCpp) to see if the KoboldCpp directory exists in PRoot Distro; if not, the user will be asked whether to download or compile it
* v5.1.1 - moved the project's main GitHub repository into an organisational one (ThinkThroughLabs). This change brings no new functionality; its sole purpose is to redirect local AltaeraAI update mechanisms to the new address
* v5.1 - updated to koboldcpp-1.67
* added "aef", "aeforce" and "altaeraforce" arguments to the "bash.bashrc" file, which allow the user to launch AltaeraAI without the automatic update checking mechanism, in case there is a start-up problem, i.e., poor network connectivity
* v5.0 - updated to koboldcpp-1.66.1
* visual fixes and improvements to Model MENUS
* v4.9.6 - added Phi-SoSerious-Mini-V1/imatrix weights to the list of models
* v4.9.5 - fixed "KoboldCpp Settings"
* v4.9.4 - added an optional (set by default) black MENU background (Bash display dialog boxes)
* v4.9.3 – added Gemma-2B/7B-it weights (and a reference to their LICENSE file, with a notice) to the list of models
* v4.9.2 - added Yi-1.5-6B-Chat weights to the list of models
* v4.9.1 - added a "Benchmark" mode to the MENU to test AI models (KoboldCpp's --benchmark flag)
* v4.9 – updated to koboldcpp-1.65
* v4.8.5 - fixed a bug which always informed the user about an available update, when launching AltaeraAI in offline mode
* v4.8.4 – added a changelog to the main MENU
* v4.8.3 – added an optional (set by default) “auto-update” mechanism, which automatically
  checks for updates whenever you type in “ae” in order to start AltaeraAI
* added “AltaeraAI Settings” into the MENU
* v4.8.2 – added information about device RAM and free storage in the main
  MENU
* v4.8.1 – added KobbleTinyV2-1.1B (imatrix) weights to the list of models
* v4.8 – updated to koboldcpp-1.64.1
* v4.7.2 – added Tiny-Vicuna and TinyDolphin (imatrix/laser) weights to the
  list of models
* added the ability to enable/disable the experimental Flash Attention
  (--flashattention) flag for compatible models in “KoboldCpp Settings”
* v4.7.1 – if there is no update available for KoboldCpp itself, the
  “check for updates” mechanism will no longer ask you to choose between
  the pre-packaged KoboldCpp and a locally compiled one; instead, it will
  only update shell files, provided an update for them is available
* added the option to force-update shell files only
* v4.7 – updated to koboldcpp-1.64
* v4.6.3 – added KobbleTinyV2-1.1B weights to the list of models
* v4.6.2 – fixes to the reinstallation mechanism
* v4.6.1 – added: KobbleTiny, TinyLlama, Mamba and Phi-3 Mini weights to
  the list of models
* v4.6 – updated to koboldcpp-1.63
* added LLaMA-3 weights to the list of models
* removed OpenBLAS support by default, due to reports of a significant
  slowdown when using this processing method
* fixes to the update mechanism when selecting local compilation
* visual fixes to the MENU
* v4.5 – updated to koboldcpp-1.62.1
* v4.4 – updated to koboldcpp-1.61.2
* v4.3 – updated to koboldcpp-1.60.1
* v4.2 – updated to koboldcpp-1.59.1
* v4.1 – updated to koboldcpp-1.58
* v4.0 – updated to koboldcpp-1.57.1
* v3.9 – updated to koboldcpp-1.56
* “Compatibility Mode” is no longer required, nor utilised, to work with
  old GGML models
* v3.8.1 – added Vicuna weights to the list of models
* v3.8 – updated to koboldcpp-1.55.1
* v3.7.3 – added the ability to choose from installing/updating with the
  pre-packaged KoboldCpp or building one on your own device
* v3.7.2 – fixed ngrok to work with Artix Linux
* v3.7.1 – added Phi-2 weights to the list of models
* v3.7 – updated to koboldcpp-1.54
* v3.6 – updated to koboldcpp-1.53
* small changes to the embedded Kobold Lite (replaced “summary” with
  “memory” for better context following in Chat Mode)
* v3.5.1 – added Mistral weights to the list of models
* v3.5 – updated to koboldcpp-1.52.2
* visual fixes and improvements in the MENU
* v3.4 – updated to koboldcpp-1.52.1
* v3.3 – updated to koboldcpp-1.51.1
* v3.2.7 – cosmetic MENU UI visual enhancements when updating
* v3.2.6 – updated every language available on the list with upstream
  changes and fixes
* v3.2.5 – changes to the MENU
* v3.2.4 – added the “List Installed Models” option, fixed launching flags
  for GGML/bin models
* v3.2.3 – introduced “Compatibility Mode”, which from now on automatically
  deploys outdated GGML models (RWKV-4) with the older koboldcpp-1.49,
  thereby fixing the ‘GGML_ASSERT’ error [the embedded Kobold Lite will
  continue to be updated]. Users who installed AltaeraAI prior to 26 Nov
  2023 need to re-install in order to utilise Compatibility Mode
* v3.2.2 – refactoring, bug fixes, aesthetic changes to the MENU
* v3.2.1 – 🦙🦙 back on the models’ list!
* v3.2 – updated to koboldcpp-1.50.1
* small refactoring
* v3.1.2 – bug fixes
* small refactoring
* v3.1.1 – added the ability to store multiple AI Models at a time and
  choose which one of them to deploy/download/remove/back-up/restore
* changes to the MENU
* v3.1 – updated to koboldcpp-1.49
* v3.0 – updated to koboldcpp-1.48.1 – [reverted to v2.9.3 until glibc-2.38
  package is upstream in Ubuntu-22.04_arm64 repositories due to an OpenBLAS
  dependency requirement]
* switched over from the Ubuntu PRoot Distro to Artix Linux. Users who
  installed AltaeraAI prior to 11 Nov 2023 are requested to re-install in
  order to receive future updates
* v2.9.3 – bug fixes
* v2.9.2 – added “KoboldCpp Settings” into the MENU, minor aesthetic
  changes to it
* v2.9.1 – changes to the MENU
* v2.9 – updated to koboldcpp-1.47.2
* v2.8.1 – minor aesthetic changes to the MENU
* v2.8 – updated to koboldcpp-1.46.1
* performance upgrades
* v2.7 – updated to koboldcpp-1.44.2
* performance upgrades; default AI model changed to
  “RWKV-claude-for-mobile-v4-world”
* v2.6.1 – extensive changes to the MENU; added many more functionalities
  and facilities
* v2.6 – updated to koboldcpp-1.43
* v2.5 – updated to koboldcpp-1.39.1
* minor performance upgrades
* v2.4.1 – minor performance upgrades to the RWKV model
* v2.4 – updated to koboldcpp-1.38
* upgraded the modified version of embedded Kobold Lite UI to contain new
  functionalities
* v2.3 – updated to koboldcpp-1.37.1
* v2.2 – updated to koboldcpp-1.35
* added an auto-detection system for model selection (default)
* v2.1 – added llamacpp weights to the list of models [temporarily
  removed in the newest versions]

Current Models List

Installation

  1. Download and install Termux

  2. Open Termux and paste in:

    curl -fLo install https://github.com/ThinkThroughLabs/AltaeraAI/raw/refs/heads/main/scripts/install && chmod +x install && ./install

⚠️ Deprecated / Unavailable:

curl -fsSL in.altaera.ai | bash

  • Press Enter to begin the installation process. Ensure that your device is connected to a Wi-Fi network, as the installer will download approximately 0.5 GB of data—or more, depending on your configuration. The installation typically takes around 2 minutes, but this may vary based on your device's performance and internet speed.

Launching & Updating

  • Open Termux, type in 'ae' – you will be welcomed with the MENU screen.

The “Start AltaeraAI” button asks you to choose from the currently installed models, then forwards you to a browser with the deployed UI.
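
For example (the force variants come from the v5.1 changelog entry above and skip the automatic update check, which helps on poor connections):

    ae     # launch AltaeraAI with the automatic update check
    aef    # launch without the update check ("aeforce" and "altaeraforce" also work)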

Access Inference on external devices

You can access your AI inference from external devices such as PCs and laptops using Secure Tunnelling [ngrok]; AltaeraAI has this function implemented in its code. You can learn more at: ngrok Secure Tunnels - AltaeraAI
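
Under the hood this amounts to exposing KoboldCpp's local port through an ngrok tunnel; a minimal manual equivalent, assuming KoboldCpp's default port 5001, would be:

    # Sketch of the manual equivalent; AltaeraAI's built-in option wraps this.
    # 5001 is KoboldCpp's default port.
    ngrok http 5001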

Technical Support

License [derived from KoboldCpp]

  • The original GGML library and llama.cpp by ggerganov are licensed under the MIT License
  • ollama is licensed under the MIT License
  • However, Kobold Lite is licensed under the AGPL v3.0 License
  • The other files are also under the AGPL v3.0 License unless otherwise stated

TODO

  • A lot of things ;)