An arXiv scraper built on arXiv.py, the Python wrapper for the arXiv API.
To use the AI/LLM capabilities, you will need to install an external program called "Ollama". This is a piece of middleware that lets you host LLMs locally.
On Linux, run:
curl -fsSL https://ollama.com/install.sh | sh
On macOS, install from https://ollama.com/download/mac and follow the setup instructions in the app.
On Windows, Ollama is available at https://ollama.com/download/windows; however, it is in beta (as of last time I checked), so my preferred method is to install Windows Subsystem for Linux (WSL) via
wsl --install
then open it with
wsl
and follow the Linux directions.
Ollama has a whole bunch of models to pick from. My recommendation is Llama 3.1 8B, which can be downloaded with
ollama pull llama3.1:8b
If you choose another model, make sure you update the config to match.
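The exact config format for this scraper is not shown here, so purely as an illustration, the model setting might look something like the sketch below (the file name config.py and the variable name OLLAMA_MODEL are hypothetical; check the project's actual config for the real names):

```python
# Hypothetical config sketch (config.py) -- the real file and setting
# names in this project may differ.

# The tag here must match a model you have pulled with `ollama pull`.
OLLAMA_MODEL = "llama3.1:8b"
```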
I recommend Llama 3.1 8B because it balances system requirements with performance; it takes about 45 s per article on my laptop. For more intelligent recommendations, try a larger model, e.g. mistral-nemo:12b. If you want faster performance and can tolerate dumber recommendations, consider phi3:mini.
See the available models at https://ollama.com/library
You now have everything needed to run the AI capabilities. You can test them by calling your model straight from the command line:
ollama run llama3.1:8b
You can then talk to it just like ChatGPT.
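Beyond the command line, Ollama also exposes a local HTTP API (on port 11434 by default), which is how a script can call the model programmatically. A minimal sketch, assuming Ollama is running locally and llama3.1:8b has been pulled; the function names here are illustrative, not part of this project:

```python
import json
import urllib.request

# Ollama's default non-streaming text-generation endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> bytes:
    """Encode a non-streaming generate request for Ollama's REST API."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")


def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires a running Ollama server with the model already pulled.
    print(ask("llama3.1:8b", "Summarise this abstract in one sentence."))
```

This is the same request the `ollama run` chat session makes under the hood, so anything that works interactively should work here too.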