A Node.js Express REST API for NCAA Division I wrestling rankings with authentication, rate limiting, and subscription plans.
- 🔐 API key authentication
- 📊 NCAA Division I wrestler rankings by weight class
- 🎯 Rate limiting (500 requests/month for free tier)
- 💳 Stripe integration ready for paid plans
- 📝 CSV import system for easy data management
- 🔍 Filter rankings by weight class
- 🌐 Web scraping support (Cheerio + Playwright)
```bash
# Install dependencies with yarn
yarn install

# Or with npm
npm install
```

Create a `.env` file:
```bash
# Database (defaults to SQLite if not specified)
DATABASE_URL=sqlite:wrestling_api.db

# Stripe (optional - add when ready)
STRIPE_API_KEY=your_stripe_api_key_here
STRIPE_WEBHOOK_SECRET=your_stripe_webhook_secret_here

# Server
PORT=8000
NODE_ENV=development
```

```bash
# Development mode (with auto-reload)
yarn dev

# Production mode
yarn start
```

The API will be available at http://localhost:8000.
Sign up to get an API key:

```bash
curl -X POST "http://localhost:8000/api/v1/[email protected]"
```

Response:

```json
{
  "email": "[email protected]",
  "api_key": "your-api-key-here",
  "plan": "free"
}
```

```bash
# All wrestlers
curl -H "x-api-key: YOUR_API_KEY" http://localhost:8000/api/v1/rankings

# Filter by weight class
curl -H "x-api-key: YOUR_API_KEY" "http://localhost:8000/api/v1/rankings?weight_class=157"
```

Get your user info:

```bash
curl "http://localhost:8000/api/v1/[email protected]"
```

Delete your account:

```bash
curl -X DELETE "http://localhost:8000/api/v1/[email protected]"
```

Create a sample CSV:
```bash
node scripts/importCsv.js create
```

This creates `wrestlers_sample.csv` with the format:

```csv
rank,name,school,weight_class,source
1,Spencer Lee,Iowa,125,FloWrestling
2,Patrick Glory,Princeton,125,FloWrestling
```
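Each sample row maps onto the Wrestler fields. A minimal parsing sketch is below; the actual `importCsv.js` may handle this differently, and the naive split assumes no quoted commas in the data:

```javascript
// Parse one data row of the sample CSV into a wrestler record.
// Assumes the column order shown above: rank,name,school,weight_class,source.
function parseWrestlerRow(line) {
  const [rank, name, school, weight_class, source] = line
    .split(",")
    .map((s) => s.trim());
  return {
    rank: Number(rank),
    name,
    school,
    weight_class: Number(weight_class),
    source,
  };
}
```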
Import your CSV:
```bash
node scripts/importCsv.js wrestlers.csv
```

You can also populate the database by triggering the scraper over the API. This is the best option for Railway and other cloud deployments:
```bash
# Trigger scraper (uses Playwright by default)
curl -X POST \
  -H "x-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"usePlaywright": true, "clearExisting": true}' \
  https://your-app.railway.app/api/v1/scraper/run

# Check scraper status
curl -H "x-api-key: YOUR_API_KEY" \
  https://your-app.railway.app/api/v1/scraper/status
```

Request body options:

- `usePlaywright` (boolean, default: `true`) - Use Playwright for JS-rendered sites
- `clearExisting` (boolean, default: `true`) - Clear existing data before importing
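The same call can be made from Node 18+ (built-in `fetch`). This is a sketch, not the project's own client code; the base URL and API key are placeholders, and the defaults mirror the option table above:

```javascript
// Merge caller overrides over the documented defaults.
function scraperRunBody(overrides = {}) {
  return { usePlaywright: true, clearExisting: true, ...overrides };
}

// Trigger the scraper via POST /api/v1/scraper/run (Node 18+ global fetch).
async function runScraper(base, apiKey, overrides) {
  const res = await fetch(new URL("/api/v1/scraper/run", base), {
    method: "POST",
    headers: { "x-api-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify(scraperRunBody(overrides)),
  });
  if (!res.ok) throw new Error(`Scraper trigger failed: ${res.status}`);
  return res.json();
}
```

For example, `runScraper("https://your-app.railway.app", "YOUR_API_KEY", { usePlaywright: false })` would run the Cheerio scraper instead.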
Basic scraper (using Cheerio):

```bash
yarn scrape
# or
node scripts/runScraper.js
```

Playwright scraper (for JavaScript-rendered sites):

```bash
# First, install Playwright browsers
npx playwright install chromium

# Run the Playwright scraper
node scripts/runScraper.js --playwright
# or
node scripts/runScraper.js -p
```

Test the scraper without saving to the database:

```bash
node scripts/testScraper.js
node scripts/testScraper.js --playwright
```

```
wrestling-api/
├── src/
│   ├── index.js              # Express application entry
│   ├── config.js             # Configuration
│   ├── database.js           # Sequelize setup
│   ├── models/
│   │   ├── User.js           # User model
│   │   ├── Wrestler.js       # Wrestler model
│   │   └── APIUsage.js       # API usage tracking
│   ├── middleware/
│   │   └── auth.js           # Authentication & rate limiting
│   ├── routes/
│   │   ├── rankings.js       # Rankings endpoints
│   │   ├── user.js           # User/signup endpoints
│   │   └── scraper.js        # Scraper trigger endpoints
│   └── scrapers/
│       ├── ncaa.js           # Basic scraper (Cheerio)
│       └── playwright.js     # Playwright scraper
├── scripts/
│   ├── importCsv.js          # CSV import utility
│   ├── runScraper.js         # Scraper runner
│   └── testScraper.js        # Test scraper
├── package.json              # Dependencies and scripts
├── .env                      # Configuration (create this)
├── wrestling_api.db          # SQLite database (auto-created)
├── Dockerfile                # Docker configuration
├── docker-compose.yml        # Docker Compose setup
└── Procfile                  # Heroku/Railway deployment
```
- `GET /` - Welcome message
- `POST /api/v1/signup?email={email}` - Get an API key
- `GET /api/v1/user?email={email}` - Get user info
- `DELETE /api/v1/user?email={email}` - Delete user account
- `GET /api/v1/rankings` - Get all wrestler rankings
- `GET /api/v1/rankings?weight_class={weight}` - Filter by weight class
- `POST /api/v1/scraper/run` - Trigger the scraper to populate the database
- `GET /api/v1/scraper/status` - Check database status and scraper health
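For example, the rankings endpoint can be called from Node 18+ with the built-in `fetch`. This is a sketch, not part of the project; the base URL and API key are placeholders:

```javascript
// Build the rankings URL, optionally filtered by weight class.
function rankingsUrl(base, weightClass) {
  const url = new URL("/api/v1/rankings", base);
  if (weightClass !== undefined) {
    url.searchParams.set("weight_class", String(weightClass));
  }
  return url.toString();
}

// Fetch rankings with the x-api-key header (Node 18+ global fetch).
async function getRankings(base, apiKey, weightClass) {
  const res = await fetch(rankingsUrl(base, weightClass), {
    headers: { "x-api-key": apiKey },
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```

Usage: `getRankings("http://localhost:8000", "YOUR_API_KEY", 157)` returns the 157 lb rankings.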
- Free tier: 500 requests per month
- Pro/Business tier: unlimited (set via the `user.plan` field)

Rate limits are tracked monthly per user in the `APIUsage` table.
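Conceptually, the monthly check reduces to a comparison like the one below. This is an illustrative sketch of the logic described above, not the project's `auth.js` middleware; the limit table and function names are assumptions:

```javascript
// Monthly request limits per plan; Infinity means unlimited.
const MONTHLY_LIMITS = { free: 500, pro: Infinity, business: Infinity };

// Allow the request if this month's count is still under the plan's limit.
// Unknown plans fall back to the free-tier limit.
function withinRateLimit(plan, requestsThisMonth) {
  const limit = MONTHLY_LIMITS[plan] ?? MONTHLY_LIMITS.free;
  return requestsThisMonth < limit;
}
```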
By default, the API uses SQLite for easy local development; the database file is `wrestling_api.db`.

For production, set the `DATABASE_URL` environment variable:

```bash
DATABASE_URL=postgresql://user:password@host:5432/dbname
```

**Users**

- id, email (unique), api_key (unique), plan (free/pro/business)

**Wrestlers**

- id, name, school, weight_class, rank, source, last_updated

**APIUsage**

- id, user_id (FK), date, requests (count)
Build and run:

```bash
docker build -t wrestling-api .
docker run -p 8000:8000 --env-file .env wrestling-api
```

Or use Docker Compose:

```bash
docker-compose up
```

This starts both the API and a PostgreSQL database.
The project includes a Procfile for easy deployment to Heroku or Railway:

```
web: node src/index.js
```

Set environment variables in your platform's dashboard:

- `DATABASE_URL` (PostgreSQL connection string)
- `STRIPE_API_KEY` (optional)
- `STRIPE_WEBHOOK_SECRET` (optional)
- `PORT` (automatically set by the platform)

Example:

```bash
DATABASE_URL=postgresql://user:pass@host:5432/dbname
STRIPE_API_KEY=sk_live_...
STRIPE_WEBHOOK_SECRET=whsec_...
PORT=8000
NODE_ENV=production
```

NCAA Division I weight classes:
- 125, 133, 141, 149, 157, 165, 174, 184, 197, 285 lbs
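When accepting the `weight_class` query parameter, it can be checked against this fixed list. A small validation sketch (the helper name is illustrative, not from the project):

```javascript
// The ten NCAA Division I weight classes (lbs).
const WEIGHT_CLASSES = [125, 133, 141, 149, 157, 165, 174, 184, 197, 285];

// Accepts numbers or numeric strings (e.g. from a query parameter).
function isValidWeightClass(value) {
  return WEIGHT_CLASSES.includes(Number(value));
}
```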
Rankings can be sourced from:
- FloWrestling: https://www.flowrestling.org/rankings
- NCAA.com: https://www.ncaa.com/rankings/wrestling/d1
- Manual CSV import (recommended for accuracy)
Note: Most wrestling ranking sites use JavaScript rendering. The Playwright scraper is recommended for automated data collection.
Available npm/yarn scripts:

```bash
yarn start   # Start production server
yarn dev     # Start development server with nodemon
yarn scrape  # Run basic scraper
yarn import  # Run CSV import utility
yarn test    # Test scraper without saving
```

**API key problems:**

- Sign up first: `POST /api/v1/[email protected]`
- Include the header: `x-api-key: YOUR_KEY`

**Rate limit exceeded:**

- Wait until next month (usage resets monthly)
- Or manually update the user's plan to `pro` in the database

**No rankings returned:**

- Import data: `node scripts/importCsv.js wrestlers_sample.csv`
- Or run the scraper: `yarn scrape`

**Scraper returns no data:**

- Try the Playwright scraper: `node scripts/runScraper.js --playwright`
- Or use the CSV import method (most reliable)
MIT
For issues or questions, please open an issue on the repository.