A powerful, privacy-focused desktop application for real-time meeting transcription with AI-powered summaries and smart reply suggestions.
- Real-time Transcription - Live speech-to-text using Deepgram (1-2 second latency)
- AI-Powered Summaries - Generate meeting summaries with one click
- Smart Reply Suggestions - Get contextual reply suggestions based on the conversation
- Offline Recording - Record meetings for later transcription
- Privacy First - Your audio stays on your device; only the transcription text is sent to APIs
- Beautiful UI - Modern, responsive interface with dark mode support
- Cross-Platform - Works on macOS, Windows, and Linux
```bash
# Clone the repository
git clone https://github.com/YOUR_USERNAME/meeting-assistant.git
cd meeting-assistant

# Install dependencies
npm install

# Run in development mode
npm run tauri dev

# Build for production
npm run tauri build
```

You'll need API keys from the following services:
| Service | Purpose | Get Key | Free Tier |
|---|---|---|---|
| Deepgram | Real-time transcription | console.deepgram.com | $200 credit |
| Groq | AI summaries & replies | console.groq.com/keys | Free tier |
| AssemblyAI (optional) | Batch transcription | assemblyai.com/app | 100 hrs/month |
- Deepgram + Groq = Full real-time experience with AI features
- Open the app
- Click Settings in the header
- Enter your API keys
- Start transcribing!
- Click "Start Live Transcription"
- Speak into your microphone
- Watch real-time transcription appear
- Click "Stop" when done
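Under the hood, live transcription streams microphone audio to Deepgram's `/v1/listen` WebSocket endpoint. A minimal sketch of building that connection URL (the parameter names follow Deepgram's streaming docs, but the model name and audio settings here are illustrative assumptions, not this app's exact configuration):

```typescript
// Sketch: build the Deepgram streaming URL used for live transcription.
// Query parameter names follow Deepgram's /v1/listen documentation;
// model and audio settings below are illustrative.
interface DeepgramOptions {
  model: string;
  sampleRate: number;
  interimResults: boolean;
}

function buildDeepgramUrl(opts: DeepgramOptions): string {
  const params = new URLSearchParams({
    model: opts.model,
    encoding: "linear16",                        // raw 16-bit PCM from the mic
    sample_rate: String(opts.sampleRate),
    punctuate: "true",
    interim_results: String(opts.interimResults),
  });
  return `wss://api.deepgram.com/v1/listen?${params}`;
}

const url = buildDeepgramUrl({ model: "nova-2", sampleRate: 16000, interimResults: true });
```

Interim results are what keep perceived latency in the 1-2 second range: partial transcripts stream in before each utterance is finalized.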
- After transcription, click "Generate" in the Summary panel
- AI will create a concise meeting summary
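Summaries go through Groq's OpenAI-compatible chat-completions endpoint. A sketch of building that request, assuming a Llama 3.1 model; the prompt wording and default model name are illustrative, not the app's exact values:

```typescript
// Sketch: build the Groq chat-completions request for the Summary panel.
// Groq exposes an OpenAI-compatible API; model name and prompt wording
// here are illustrative assumptions.
interface ChatMessage {
  role: "system" | "user";
  content: string;
}

function buildSummaryRequest(transcript: string, model = "llama-3.1-8b-instant") {
  const messages: ChatMessage[] = [
    { role: "system", content: "Summarize this meeting transcript in a few concise bullet points." },
    { role: "user", content: transcript },
  ];
  return {
    url: "https://api.groq.com/openai/v1/chat/completions",
    body: { model, messages, temperature: 0.3 }, // low temperature for factual summaries
  };
}
```

Note that only the transcript text is sent in the request body, which is what the privacy guarantee above rests on.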
- Click "Generate from Transcript"
- Get smart, contextual reply suggestions
- Click any suggestion to copy it
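One plausible way to turn a single LLM completion into individual clickable suggestions is to prompt for one reply per line and split the response, stripping any bullets or numbering the model adds (this parsing approach is a sketch, not necessarily what the app does):

```typescript
// Sketch: split one LLM completion into clickable reply suggestions.
// Assumes the prompt asked for one reply per line; leading bullets or
// numbering are stripped so each suggestion copies cleanly.
function parseReplySuggestions(completion: string): string[] {
  return completion
    .split("\n")
    .map((line) => line.replace(/^\s*(?:[-*]|\d+[.)])\s*/, "").trim())
    .filter((line) => line.length > 0);
}
```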
| Layer | Technology |
|---|---|
| Frontend | React + TypeScript + Vite |
| Backend | Rust + Tauri 2.0 |
| Transcription | Deepgram (real-time), AssemblyAI (batch) |
| AI/LLM | Groq (Llama 3.1, Mixtral) |
| Audio | cpal (cross-platform audio capture) |
| Styling | CSS with dark mode support |
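The frontend and backend in this stack talk through Tauri commands: React calls `invoke` (from `@tauri-apps/api/core` in Tauri 2.0), and Rust handles the call in `lib.rs`. A sketch of that boundary, with hypothetical command names and the transport injected so it can be mocked outside a Tauri window:

```typescript
// Sketch: how the React frontend calls the Rust backend via Tauri commands.
// Command names here are illustrative assumptions; in the real app the
// `invoke` function comes from "@tauri-apps/api/core".
type Invoke = (cmd: string, args?: Record<string, unknown>) => Promise<unknown>;

// Thin wrapper so the transport can be swapped for a mock in tests.
function makeBackend(invoke: Invoke) {
  return {
    startTranscription: () => invoke("start_transcription"),
    stopTranscription: () => invoke("stop_transcription"),
    generateSummary: (transcript: string) => invoke("generate_summary", { transcript }),
  };
}
```

Injecting `invoke` keeps the UI logic testable in plain Node, without spinning up a Tauri webview.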
```
meeting-assistant/
├── src/                  # React frontend
│   ├── App.tsx           # Main React component
│   └── App.css           # Styles
├── src-tauri/            # Rust backend
│   ├── src/
│   │   ├── lib.rs        # Tauri commands & state
│   │   ├── deepgram.rs   # Real-time transcription
│   │   ├── assemblyai.rs # Batch transcription
│   │   └── audio.rs      # Audio recording
│   └── Cargo.toml        # Rust dependencies
├── package.json          # Node dependencies
└── README.md
```
Contributions are welcome! Here's how you can help:
- Report bugs
- Suggest features
- Submit pull requests
- Improve documentation
- Share the project
```bash
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Clone and setup
git clone https://github.com/YOUR_USERNAME/meeting-assistant.git
cd meeting-assistant
npm install

# Run development server
npm run tauri dev
```

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Speaker diarization (identify who is speaking)
- Export to various formats (PDF, Word, Markdown)
- Meeting templates
- Calendar integration (Google, Outlook)
- Keyboard shortcuts
- Local LLM support (Ollama)
- Browser extension
- Mobile companion app
- Multi-language support
Q: Is my audio data stored anywhere?
A: No. Audio is processed in real-time and only the transcription text is sent to APIs. Nothing is stored on external servers.

Q: Can I use this without internet?
A: Recording works offline, but transcription and AI features require an internet connection.

Q: Which API key should I get first?
A: Start with Deepgram (for transcription) and Groq (for AI). Both have generous free tiers.
This project is licensed under the MIT License - see the LICENSE file for details.
- Tauri - Desktop framework
- Deepgram - Real-time transcription
- Groq - Fast LLM inference
- AssemblyAI - Batch transcription
- Star this repo if you find it useful!
- Report bugs
- Request features
Made with love using Tauri + React + Rust

