# mppx-rate-limit

Rate limiting middleware for MPP (Machine Payments Protocol).

Control request volume per session with configurable limits, storage backends, and strategies. Built for AI agents and API services that accept MPP payments.
## Features

- Session-aware — Rate limits tied to MPP session IDs, not IP addresses
- Three strategies — `block` (reject), `queue` (delay), `degrade` (reduce quality)
- Pluggable storage — In-memory (default), Redis, or custom backends
- Analytics built-in — Track request counts, blocked sessions, and top consumers
- TypeScript first — Full type definitions included
- Framework agnostic — Works with Express, Hono, Elysia, Fastify, or any Node.js server
## Installation

```sh
npm install mppx-rate-limit
# or
pnpm add mppx-rate-limit
# or
yarn add mppx-rate-limit
```

Requires mppx as a peer dependency:

```sh
npm install mppx
```

## Quick Start

```ts
import { createRateLimiter } from 'mppx-rate-limit'
import { MppxServer } from 'mppx'

const server = new MppxServer({ port: 3000 })

const limiter = createRateLimiter({
  maxRequests: 100, // 100 requests
  windowMs: 60_000, // per 60 seconds
  strategy: 'block', // reject over-limit requests
})

server.onChallenge(async (challenge) => {
  const result = await limiter({
    sessionId: challenge.sessionId,
    clientId: challenge.clientId,
    challenge,
  })
  if (!result.allowed) {
    return {
      status: 402,
      headers: {
        'X-RateLimit-Limit': String(100),
        'X-RateLimit-Remaining': '0',
        'X-RateLimit-Reset': String(Math.ceil(result.resetAt / 1000)),
        'Retry-After': String(Math.ceil((result.retryAfterMs ?? 0) / 1000)),
      },
    }
  }
  return null // challenge passed, proceed to handler
})
```

## API

### createRateLimiter(options)

Creates a rate limiter with the given configuration.
| Option | Type | Default | Description |
|---|---|---|---|
| `maxRequests` | `number` | Required | Max requests per session in the time window |
| `windowMs` | `number` | Required | Time window in milliseconds |
| `strategy` | `'block' \| 'queue' \| 'degrade'` | `'block'` | Strategy when limit is exceeded |
| `onExceeded` | `(session, count) => void` | — | Called when a session exceeds its limit |
| `onRequest` | `(session, count, remaining) => void` | — | Called on each request (for analytics) |
| `getSessionId` | `(challenge) => string` | `challenge.sessionId ?? challenge.clientId` | Custom session extractor |
| `storage` | `RateLimitStorage` | `InMemoryStorage` | Storage backend |
Returns a middleware function `(challengeInfo) => Promise<RateLimitResult>`.
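To make the options and result shape concrete, here is a minimal self-contained fixed-window counter. This is an illustrative sketch, not the library's implementation; the `allowed`, `resetAt`, and `retryAfterMs` field names follow the examples in this README, everything else is assumed.

```typescript
// Sketch of a fixed-window limiter mirroring the documented options
// (maxRequests, windowMs) and result fields (allowed, resetAt, retryAfterMs).
// The real library adds strategies, storage backends, and callbacks.
interface SketchResult {
  allowed: boolean
  remaining: number
  resetAt: number        // epoch ms when the current window resets
  retryAfterMs?: number  // only set when the request is blocked
}

function createFixedWindowLimiter(maxRequests: number, windowMs: number) {
  const windows = new Map<string, { count: number; windowStart: number }>()

  return (sessionId: string, now: number = Date.now()): SketchResult => {
    let entry = windows.get(sessionId)
    // Start a fresh window if none exists or the old one has expired
    if (!entry || now - entry.windowStart >= windowMs) {
      entry = { count: 0, windowStart: now }
      windows.set(sessionId, entry)
    }
    entry.count++
    const resetAt = entry.windowStart + windowMs
    if (entry.count > maxRequests) {
      return { allowed: false, remaining: 0, resetAt, retryAfterMs: resetAt - now }
    }
    return { allowed: true, remaining: maxRequests - entry.count, resetAt }
  }
}

const limit = createFixedWindowLimiter(2, 60_000)
console.log(limit('sess_a', 0).allowed)      // true  (1st request)
console.log(limit('sess_a', 1).allowed)      // true  (2nd request)
console.log(limit('sess_a', 2).allowed)      // false (over limit)
console.log(limit('sess_a', 60_001).allowed) // true  (new window)
```

Each session gets its own counter, which is why limits follow sessions rather than IPs.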
### createRateLimitMiddleware

Creates an Express/Hono-compatible middleware.

```ts
import { createRateLimitMiddleware } from 'mppx-rate-limit'
import { MppxServer } from 'mppx'

const server = new MppxServer({ port: 3000 })

const rateLimitMw = createRateLimitMiddleware(
  {
    maxRequests: 100,
    windowMs: 60_000,
  },
  async (req) => {
    const challenge = await server.extractChallenge(req)
    return { sessionId: challenge.sessionId, challenge }
  }
)

// Use with Express
app.post('/api/resource', rateLimitMw, async (req, res) => {
  // MPP payment handled, resource delivery here
})
```

### RateLimitAnalytics

Track usage patterns across all sessions.
```ts
import { createRateLimiter, RateLimitAnalytics } from 'mppx-rate-limit'

const analytics = new RateLimitAnalytics({ maxRequests: 100, windowMs: 60_000 })

const limiter = createRateLimiter(analytics.attachTo({
  maxRequests: 100,
  windowMs: 60_000,
}))

// Later, query stats
console.log(analytics.getTotalStats())
// { totalRequests: 15420, totalBlocked: 342, activeSessions: 89 }

console.log(analytics.getTopSessions(5))
// [
//   { sessionId: "sess_abc123", requestCount: 542, blockedCount: 12, lastRequest: 1743032400000, totalRequests: 554 },
//   ...
// ]
```

### Custom Storage

Implement custom storage for distributed rate limiting.
```ts
import { createRateLimiter, RateLimitStorage } from 'mppx-rate-limit'
import Redis from 'ioredis' // assuming an ioredis client

class RedisStorage implements RateLimitStorage {
  constructor(private redis: Redis) {}

  async increment(key: string, windowMs: number): Promise<number> {
    const multi = this.redis.multi()
    multi.incr(key)
    multi.expire(key, Math.ceil(windowMs / 1000))
    const results = await multi.exec()
    // ioredis exec() returns [error, result] pairs; [0][1] is the INCR result
    return results![0][1] as number
  }

  async get(key: string): Promise<number | undefined> {
    const val = await this.redis.get(key)
    return val ? parseInt(val, 10) : undefined
  }

  async reset(key: string): Promise<void> {
    await this.redis.del(key)
  }
}

const limiter = createRateLimiter({
  maxRequests: 100,
  windowMs: 60_000,
  storage: new RedisStorage(new Redis()),
})
```

## Strategies

### block (default)

Returns `allowed: false` when the limit is exceeded. The MPP server should return HTTP 402.
### queue

Returns `allowed: true` but queues the request until a rate limit slot frees up. Good for non-critical background tasks.
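One way to picture queueing (a sketch of the idea, not the package's internals; the helper name is hypothetical): an over-limit request waits until the current window expires, then runs.

```typescript
// Hypothetical helper: how long should an over-limit request wait before
// the next rate-limit slot opens? Not part of mppx-rate-limit's API.
function queueDelayMs(
  count: number,        // requests already made in the current window
  maxRequests: number,
  windowStart: number,  // epoch ms when the window opened
  windowMs: number,
  now: number
): number {
  if (count < maxRequests) return 0 // a slot is free: run immediately
  return Math.max(0, windowStart + windowMs - now) // wait for the window reset
}

// A session has used its full quota 45s into a 60s window:
// the queued request runs in 15 seconds.
const delay = queueDelayMs(100, 100, 0, 60_000, 45_000)
console.log(delay) // 15000
```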
### degrade

Returns `allowed: true` but includes a `degrade` flag. Your server can reduce response quality (e.g., lower quality AI output, cached data instead of fresh computation).
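A handler might branch on that flag like this. This is a sketch: the `allowed` and `degrade` field names follow the description above, while the handler and cache are invented for illustration.

```typescript
// Sketch: serve a cheaper response when the limiter sets the degrade flag.
interface DegradableResult {
  allowed: boolean
  degrade?: boolean
}

// Illustrative stand-in for cached data
const cachedAnswers = new Map<string, string>([['q1', 'cached answer for q1']])

function answer(question: string, result: DegradableResult): string {
  if (result.allowed && result.degrade) {
    // Over the soft limit: fall back to cached data instead of fresh work
    return cachedAnswers.get(question) ?? 'try again later'
  }
  // Under the limit: do the full (expensive) computation
  return `fresh answer for ${question}`
}

console.log(answer('q1', { allowed: true }))                // fresh answer for q1
console.log(answer('q1', { allowed: true, degrade: true })) // cached answer for q1
```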
## MPP Context
mppx-rate-limit works at the MPP session layer, not the IP layer. This means:
- Rate limits follow the AI agent's MPP session across IP changes
- Each agent session gets its own independent limit
- You can combine with IP-based limits for additional security
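The combination in the last point can be sketched as two independent counters, one keyed by session ID and one by client IP, where a request must pass both. Everything here is illustrative; neither helper is part of this package.

```typescript
// Generic fixed-window counter, keyed by an arbitrary string.
function makeCounter(max: number, windowMs: number) {
  const hits = new Map<string, { count: number; start: number }>()
  return (key: string, now: number): boolean => {
    let e = hits.get(key)
    if (!e || now - e.start >= windowMs) {
      e = { count: 0, start: now }
      hits.set(key, e)
    }
    return ++e.count <= max
  }
}

const bySession = makeCounter(100, 60_000) // per-session limit (MPP layer)
const byIp = makeCounter(1000, 60_000)     // coarse per-IP backstop

function allowRequest(sessionId: string, ip: string, now = Date.now()): boolean {
  return bySession(sessionId, now) && byIp(ip, now)
}
```

A session that hops IPs stays bounded by its session counter, while many sessions funneled through one abusive IP trip the IP backstop.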
## License

MIT