Ollama is a tool that allows you to run and manage large language models (LLMs) locally on your computer. It provides a simple way to download, run, and interact with AI models without relying on cloud-based services.
- Run AI models locally: Supports models like LLaMA, Mistral, Gemma, and more.
- Offline functionality: No internet required once models are downloaded.
- Faster responses: Local execution ensures low latency and quick results.
- Custom models: Supports custom models and fine-tuning.
- Cross-platform: Works on Linux, macOS, and Windows (natively or via WSL).
To get started with Ollama, follow these steps:
- Download Ollama: Visit the official Ollama website to download the latest version for your operating system.
- Install Ollama: Follow the installation instructions for your platform.
- Download Models: Use the command-line interface to download and manage AI models.
- Run Models: Start interacting with the models locally.
To download a model (e.g., Llama 3), use the following command:

```bash
ollama pull llama3
```
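Once the download finishes, you can confirm the model is available locally using Ollama's standard `list` subcommand:

```bash
# Show all models downloaded to this machine
ollama list
```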
Step 1: Set up Ollama on Linux.

```bash
sudo snap install ollama
```
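If you prefer the official installer over the snap package, Ollama also provides an install script that works on most Linux distributions (it fetches and runs a script from ollama.com, so review it first if that concerns you):

```bash
# Official install script from the Ollama website
curl -fsSL https://ollama.com/install.sh | sh
```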
Windows users can download the installer directly from https://ollama.com/.
Step 2: Download the deepseek-r1 model. The model weights take some time to download, depending on your connection. Details: https://ollama.com/library/deepseek-r1:1.5b
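You can fetch the model ahead of time with `ollama pull`; the tag below matches the library page linked above:

```bash
# Download the 1.5B-parameter DeepSeek-R1 model without starting a chat session
ollama pull deepseek-r1:1.5b
```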
Step 3: Verify that Ollama is accessible. On Linux, you can check that the installation succeeded and the server is running.
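One way to confirm this is shown below. Note the systemd service name `ollama` assumes you used the official install script; snap installs manage their own service:

```bash
# Print the installed version
ollama --version

# Check the background service (official installer sets up a systemd unit)
systemctl status ollama
```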
Step 4: Run the DeepSeek-R1 1.5B model using the following command. The first run can take a while (up to about 15 minutes, depending on your hardware) because the model has to be downloaded and loaded into memory.

```bash
ollama run deepseek-r1:1.5b
```
Step 5: Enter your prompt. Since everything runs on the local machine, no internet connection is needed, and the model typically starts responding within a few seconds.
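You can also pass the prompt directly on the command line instead of using the interactive session; the prompt text here is just an example:

```bash
# One-shot prompt: prints the response and exits
ollama run deepseek-r1:1.5b "Explain what a local LLM is in one sentence."
```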
- Select the API host (Ollama's local endpoint, which defaults to http://localhost:11434) and the deepseek-r1:1.5b model.
- Save the settings.
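If your client asks for the API host, Ollama exposes an HTTP API on localhost by default. A quick way to test it from the terminal (the prompt string is illustrative):

```bash
# Send a single generation request to the local Ollama API (default port 11434)
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Hello, what can you do?",
  "stream": false
}'
```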