AI Tutor is an intelligent educational tool powered by Ollama's Gemma3:1b model, containerized with Docker for easy deployment. It provides functionalities such as natural language response generation, quiz creation, PDF summarization, and more, making it a versatile assistant for students, educators, and knowledge seekers.
- Response Generation: Answer user queries with context-aware responses using Gemma3:1b.
- Quiz Generation: Automatically create quizzes based on input topics or content.
- PDF Summarization: Extract and summarize key points from uploaded PDF documents.
- Docker Integration: Run the application in a containerized environment for portability and scalability.
- Streamlit Interface: User-friendly web interface built with Streamlit for interacting with the AI Tutor.
- Extensible Design: Modular code structure allows for adding new features.
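Under the hood, response generation amounts to a single request to the local Ollama server. As a rough illustration (a sketch, not the exact code in `app.py`; the endpoint and payload shape follow Ollama's standard `/api/generate` REST interface, and `ask_tutor` is a hypothetical helper name):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_generate_payload(prompt: str, model: str = "gemma3:1b") -> dict:
    """Assemble the JSON body for a non-streaming generation request."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_tutor(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the model's reply."""
    body = json.dumps(build_generate_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires a running Ollama server with the gemma3:1b model pulled.
    print(ask_tutor("Explain photosynthesis in two sentences."))
```

Setting `"stream": False` makes Ollama return the full reply in one JSON object, which keeps the example simple; a production UI would typically stream tokens instead.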
- Docker installed on your system.
- Git for cloning the repository.
- Basic familiarity with command-line interfaces.
- Optional: Ollama installed locally if you want to test the model outside Docker.
- Clone the Repository

  ```bash
  git clone https://github.com/ogulcanzorba/Docker_AI_Project.git
  cd Docker_AI_Project
  ```
- Build the Docker Image

  The repository includes a `Dockerfile` that sets up the environment, installs dependencies, and pulls the Gemma3:1b model via Ollama.

  ```bash
  docker build -t ai-tutor .
  ```
- Run the Docker Container

  Expose port 8501 (used by Streamlit) to access the web interface.

  ```bash
  docker run -p 8501:8501 ai-tutor
  ```
- Verify Setup

  - Ensure the Gemma3:1b model is downloaded and loaded within the container (handled automatically by the `Dockerfile`).
  - Check that the Streamlit app is running by navigating to `http://localhost:8501` in your browser.
- Access the Web Interface: Open `http://localhost:8501` to interact with the AI Tutor.
- Response Generation: Enter queries in the provided text input to receive answers powered by Gemma3:1b.
- Quiz Generation: Use the quiz feature to generate questions based on a topic or uploaded content.
- PDF Summarization: Upload a PDF file through the interface to receive a summarized version of its content.
- Explore Additional Features: Check the interface for other functionalities like topic exploration or content analysis.
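The PDF summarization flow can be sketched roughly as follows. This is a simplified illustration, not the actual code in `app.py`: it assumes the `ollama` Python client and `PyPDF2` from `requirements.txt`, and the helper names (`chunk_text`, `summarize_pdf`) are hypothetical. Long documents are split into chunks so each prompt stays within what a small model like Gemma3:1b handles comfortably.

```python
def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Split extracted PDF text into model-sized chunks on paragraph boundaries."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks


def summarize_pdf(path: str) -> str:
    """Extract text with PyPDF2, then ask Gemma3:1b to summarize each chunk."""
    from PyPDF2 import PdfReader  # listed in requirements.txt
    import ollama                 # Python client for the local Ollama server

    text = "\n\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    summaries = []
    for chunk in chunk_text(text):
        reply = ollama.generate(model="gemma3:1b",
                                prompt=f"Summarize the key points:\n\n{chunk}")
        summaries.append(reply["response"])
    return "\n".join(summaries)
```

For very long PDFs, the per-chunk summaries could themselves be concatenated and summarized once more ("map-reduce" summarization), but the single pass above is enough to show the idea.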
```
Docker_AI_Project/
├── Dockerfile          # Docker configuration for building the app
├── app.py              # Main Streamlit application script
├── requirements.txt    # Python dependencies (e.g., streamlit, ollama, PyPDF2)
├── README.md           # This documentation file
└── .gitignore          # Git ignore file for excluding unnecessary files
```
- Dockerfile: Defines the container setup, including Ubuntu base image, Ollama installation, Gemma3:1b model pull, and Python dependencies.
- app.py: Implements the Streamlit web interface and integrates with Ollama's Gemma3:1b model for AI functionalities.
- requirements.txt: Lists Python packages required for the application (e.g., `streamlit`, `ollama`, `PyPDF2`).
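To illustrate how `app.py` might wire Streamlit to the model, here is a hedged sketch of the quiz-generation path. The helper name `build_quiz_prompt` and the exact widget layout are illustrative assumptions, not taken from the actual source:

```python
def build_quiz_prompt(topic: str, num_questions: int = 3) -> str:
    """Compose an instruction asking Gemma3:1b for a short multiple-choice quiz."""
    return (
        f"Create {num_questions} multiple-choice questions about {topic}. "
        "Give four options per question and mark the correct answer."
    )


if __name__ == "__main__":
    # Streamlit + Ollama wiring; launch with: streamlit run app.py
    import streamlit as st
    import ollama

    st.title("AI Tutor - Quiz Generator")
    topic = st.text_input("Quiz topic")
    if st.button("Generate quiz") and topic:
        reply = ollama.generate(model="gemma3:1b",
                                prompt=build_quiz_prompt(topic))
        st.markdown(reply["response"])
```

Keeping prompt construction in a plain function, separate from the Streamlit widgets, makes the prompting logic easy to unit-test without a browser or a running model.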
Contributions are welcome! To contribute:
- Fork the repository.
- Create a new branch (`git checkout -b feature/your-feature`).
- Make your changes and commit (`git commit -m "Add your feature"`).
- Push to the branch (`git push origin feature/your-feature`).
- Open a Pull Request.
Please ensure your code adheres to the project's coding standards and includes appropriate documentation.
- Model Loading Issues: Ensure sufficient disk space and memory for Gemma3:1b (check Docker logs with `docker logs <container_id>`).
- Port Conflicts: If port 8501 is in use, map a different host port (e.g., `docker run -p 8502:8501 ai-tutor`).
- Dependency Errors: Rebuild the Docker image to refresh dependencies (`docker build --no-cache -t ai-tutor .`).
This project is licensed under the MIT License. See the LICENSE file for details, if present in the repository.
For questions or support, open an issue on the GitHub repository or contact the maintainer at [[email protected]].