A Claude skill that automates NotebookLM notebook creation from YouTube videos: it researches the people featured, adds sources, and generates an Audio Overview automatically.
Give Claude a YouTube link, and it will:
- Extract people mentioned in the video
- Research their backgrounds via web search
- Create a NotebookLM notebook with:
  - YouTube video as a source (transcript auto-extracted)
  - Research document as additional context
- Generate an Audio Overview podcast
Perfect for turning interviews, talks, and documentaries into listen-on-the-go podcasts.
Input:
Use the notebooklm video research skill to prepare an audio overview of this video:
https://www.youtube.com/watch?v=0nlNX94FcUE
Output:
- NotebookLM notebook with full video transcript
- Research on Sergey Brin's background and recent work
- Audio Overview podcast ready to listen
Requirements:
- Claude Desktop app
- Google account logged into NotebookLM
- Chrome browser
Setup:
- In Claude Desktop, go to Settings → Connectors
- Enable "Control Chrome" connector
- Open Chrome
- Go to View → Developer menu
- Enable "Allow JavaScript from Apple Events"
Option A: Download and install manually
- Download this repository as ZIP
- Extract to your Claude skills folder
- Follow Anthropic's skill installation guide
Option B: Clone with git
```sh
cd ~/Library/Application\ Support/Claude/skills   # Mac
# or
cd %APPDATA%\Claude\skills                        # Windows
git clone https://github.com/YOUR_USERNAME/notebooklm-video-research.git
```

Just tell Claude:
Use the notebooklm video research skill to prepare an audio overview of this video: [YouTube URL]
Or more casually:
Create a NotebookLM notebook for this video: [YouTube URL]
Tips:
- Works with the Haiku model for faster/cheaper runs
- Let it run in the background — takes a few minutes
- Make sure you're logged into NotebookLM in Chrome before starting
notebooklm-video-research/
├── SKILL.md # Main skill instructions for Claude
├── README.md # This file
└── references/
└── notebooklm_ui_guide.md # UI element reference for automation
The skill uses a screenshot-first automation approach:
📸 Screenshot → 👀 Analyze → 🎯 Target → 🖱️ Execute → 📸 Verify
This is crucial because NotebookLM has multiple similar-looking input fields. Without visual verification, automation often targets the wrong element.
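As an illustration only (not the skill's actual code), the screenshot-first approach amounts to a verify-after-act retry loop. Every callback below is a hypothetical stand-in for the real machinery (screen capture, vision analysis, clicking), and the real version would be asynchronous:

```javascript
// Sketch of the screenshot → analyze → target → execute → verify loop.
// Returns the attempt number on success; throws if verification never passes.
function runStep({ screenshot, analyze, execute, verify }, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const shot = screenshot();        // capture current UI state
    const target = analyze(shot);     // locate the intended element in the shot
    execute(target);                  // act on it (click / type)
    if (verify(screenshot())) return attempt; // re-screenshot and confirm it worked
  }
  throw new Error(`step failed after ${maxAttempts} attempts`);
}

// Stub demo: the "click" only registers as done after two attempts,
// so the loop retries once before verification passes.
let clicks = 0;
const attempts = runStep({
  screenshot: () => ({ clicks }),            // stand-in for a real screen capture
  analyze: () => 'Add source button',        // stand-in for vision-model output
  execute: () => { clicks++; },
  verify: (shot) => shot.clicks >= 2,        // element state confirms success
});
console.log(attempts); // 2
```

The key design point is that verification reads a *fresh* screenshot rather than trusting that the action landed, which is exactly what guards against NotebookLM's look-alike input fields.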
NotebookLM uses Angular, so a simple `.value = 'text'` assignment doesn't work. The skill uses proper event dispatching:

```js
var setter = Object.getOwnPropertyDescriptor(
  HTMLTextAreaElement.prototype, 'value'
).set;
setter.call(textarea, 'content');
textarea.dispatchEvent(new Event('input', {bubbles: true}));
```

| Action | Time |
|---|---|
| Video research | 1-2 min |
| NotebookLM automation | 2-3 min |
| Audio Overview generation | 5-10 min |
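The native-setter-plus-`input`-event pattern shown above can be simulated outside the browser. This minimal Node sketch (a fake element, not the real DOM) shows why the event dispatch is the essential part: a framework-bound model only syncs when an `input` event fires, so a bare value assignment leaves it stale:

```javascript
// Fake textarea: a native-style value accessor plus a tiny event system.
class FakeTextArea {
  #value = '';
  #listeners = {};
  get value() { return this.#value; }
  set value(v) { this.#value = v; }
  addEventListener(type, fn) { (this.#listeners[type] ??= []).push(fn); }
  dispatchEvent(ev) {
    for (const fn of this.#listeners[ev.type] ?? []) fn({ type: ev.type, target: this });
    return true;
  }
}

const textarea = new FakeTextArea();
const model = { text: '' };  // stand-in for Angular's bound model
textarea.addEventListener('input', (e) => { model.text = e.target.value; });

textarea.value = 'typed directly';  // element updates, but the model does not
console.log(model.text);            // ''

// The skill's pattern: write via the prototype's native setter,
// then dispatch 'input' so the framework listener syncs its model.
const setter = Object.getOwnPropertyDescriptor(
  Object.getPrototypeOf(textarea), 'value'
).set;
setter.call(textarea, 'notebook source text');
textarea.dispatchEvent(new Event('input', { bubbles: true }));
console.log(model.text);            // 'notebook source text'
```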
Found a bug? Have an improvement? PRs welcome!
Ideas for extensions:
- Generate slide decks from video content
- Add multiple videos to one notebook
- Include additional web sources automatically
- Create structured notes instead of, or alongside, the audio
MIT — use freely, modify as needed, share with others.
- Built with Claude by Anthropic
- Uses NotebookLM by Google
Questions? Open an issue or reach out on Twitter/X.