Problem
The upstream stickerdaniel/linkedin-mcp-server uses browser cookie extraction for authentication. This breaks in serverless environments (Cloud Run, Lambda) because:
- Cold starts kill sessions. Cookies are stored in-process memory. When the container scales to zero and back, auth is gone.
- Re-login required every time. The `--login` flag triggers a browser-based flow that can't run headless in a container.
- No multi-user support. One cookie jar = one LinkedIn account. No way to serve multiple users from a single deployment.
Error from upstream server:

```
Authentication failed. Run with --login to re-authenticate.
Runtime: linux-amd64-container
```
Proposed solution
1. OAuth2 authentication flow
Replace the cookie-based login with LinkedIn's official OAuth 2.0 flow:
- Redirect user to LinkedIn authorization URL
- Receive callback with auth code
- Exchange for access token + refresh token
- Store tokens (not cookies) persistently
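The four steps above can be sketched with the standard authorization-code flow. This is a minimal sketch using only the stdlib; the endpoint URLs are LinkedIn's documented OAuth 2.0 endpoints, while the function names and parameters are hypothetical:

```python
from urllib.parse import urlencode

# LinkedIn's documented OAuth 2.0 endpoints (see the OAuth docs in References)
AUTH_URL = "https://www.linkedin.com/oauth/v2/authorization"
TOKEN_URL = "https://www.linkedin.com/oauth/v2/accessToken"

def build_authorization_url(client_id: str, redirect_uri: str,
                            scopes: list[str], state: str) -> str:
    """Step 1: URL the user is redirected to for LinkedIn consent."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,              # CSRF token, verified again on the callback
        "scope": " ".join(scopes),   # space-separated scope list
    }
    return f"{AUTH_URL}?{urlencode(params)}"

def build_token_request(code: str, client_id: str, client_secret: str,
                        redirect_uri: str) -> dict:
    """Step 3: form body POSTed to TOKEN_URL to exchange the code for tokens."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }
```

The token response (access token, and refresh token where LinkedIn grants one) is what gets persisted instead of cookies.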
LinkedIn OAuth scopes needed:
- `r_liteprofile` (basic profile)
- `r_emailaddress` (email)
- `w_member_social` (posting, if needed later)
- Messaging API: requires LinkedIn Marketing/Compliance API partnership or Community Management API access. Check current availability.
Blocker to investigate: LinkedIn's messaging API (`/messaging/conversations`) is not available via standard OAuth apps. It requires either the Compliance API (enterprise) or scraping. Evaluate whether the existing scraping approach can be kept, with persistent cookie storage as a middle ground.
2. Cookie bucket storage (GCS)
If OAuth messaging access is not feasible, keep the cookie-based approach but persist cookies externally:
- On successful login, serialize cookies to JSON
- Upload to a GCS bucket (one file per user, keyed by LinkedIn username or hash)
- On cold start, check bucket for valid cookies before requesting re-login
- TTL: LinkedIn session cookies typically last 1-6 months
- Encrypt at rest (GCS default encryption or customer-managed keys)
```
gs://linkedin-mcp-cookies/
  {user_hash}/
    cookies.json.enc
    metadata.json   # last_refreshed, expires_at
```
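The serialization and path-derivation half of this scheme can be sketched with the stdlib. Function names, the TTL parameter, and the cookie field are hypothetical; the upload itself would go through a GCS client (the issue references the Node `@google-cloud/storage` package; the Python equivalent is `google-cloud-storage`), and client-side encryption is omitted here on the assumption that GCS encryption at rest covers the stored objects:

```python
import hashlib
import json
import time

def user_hash(username: str) -> str:
    # Key objects by a hash so LinkedIn usernames never appear in object paths.
    return hashlib.sha256(username.lower().encode()).hexdigest()[:16]

def object_paths(username: str) -> tuple[str, str]:
    """Derive the per-user object paths from the bucket layout above."""
    h = user_hash(username)
    return f"{h}/cookies.json.enc", f"{h}/metadata.json"

def serialize_cookies(cookies: dict, ttl_seconds: int) -> tuple[str, str]:
    """Return (cookie blob, metadata blob) ready for upload."""
    now = time.time()
    blob = json.dumps(cookies)
    meta = json.dumps({"last_refreshed": now, "expires_at": now + ttl_seconds})
    return blob, meta

# Upload sketch with the Python GCS client (not run here):
# from google.cloud import storage
# bucket = storage.Client().bucket("linkedin-mcp-cookies")
# cookie_path, meta_path = object_paths(username)
# bucket.blob(cookie_path).upload_from_string(blob)
# bucket.blob(meta_path).upload_from_string(meta)
```

On cold start the server would do the reverse: download `metadata.json`, check `expires_at`, and only fall back to a fresh login when the stored session is stale.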
3. Cloud Run deployment
- Host as a Cloud Run service with `--min-instances=0` (scale to zero for cost)
- Auth via Cloud Run IAM or API key for the MCP client
- Health check endpoint: `GET /health` returns cookie freshness status
- Login endpoint: `GET /auth/login` triggers OAuth or cookie refresh flow
4. Message reading support
Extend the MCP server with:
- `get_inbox(limit)` - list recent conversations (already exists upstream but broken)
- `get_conversation(thread_id)` - read full thread
- `search_conversations(keywords)` - keyword search
- `mark_as_read(thread_id)` - optional
Ensure message parsing handles:
- InMail vs regular messages
- Group conversations
- Attachments / link previews (metadata only)
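The three parsing cases above suggest normalizing every scraped payload into one message shape before it reaches the MCP tools. A sketch with hypothetical field names (the real keys depend on what the scraper returns):

```python
def normalize_message(raw: dict) -> dict:
    """Map a scraped message payload (hypothetical field names) to one shape."""
    participants = raw.get("participants", [])
    return {
        "thread_id": raw["thread_id"],
        "sender": raw.get("sender", "unknown"),
        "text": raw.get("text", ""),
        # InMail vs regular message
        "is_inmail": raw.get("subtype") == "INMAIL",
        # Group conversation: more than two participants in the thread
        "is_group": len(participants) > 2,
        # Attachments / link previews: keep metadata only, never fetch content
        "attachments": [
            {"name": a.get("name"), "type": a.get("media_type")}
            for a in raw.get("attachments", [])
        ],
    }
```

Tools like `get_conversation` can then return a list of these dicts regardless of which message variant LinkedIn served.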
Tasks
- Fork setup: Cloud Run Dockerfile, service config, IAM
- Investigate LinkedIn messaging API access via OAuth (is it even possible without enterprise partnership?)
- If no OAuth messaging: implement GCS cookie bucket persistence
- Add `/auth/login` endpoint with callback handler
- Add `/health` endpoint reporting session validity
- Test cold start recovery (scale to zero, wait, call `get_inbox`)
- Test session expiry handling (graceful re-auth prompt vs crash)
- Wire up as MCP server for Claude.ai integration
Context
This fork exists because the upstream server fails in Claude.ai's MCP integration. The server connects but auth drops on every cold start, making it unusable for daily workflows. The goal is a self-hosted, persistent LinkedIn MCP server that survives container restarts and serves one user reliably from Cloud Run at near-zero cost (same pattern as Nexus CRM deployment).
References
- Upstream: github.com/stickerdaniel/linkedin-mcp-server
- LinkedIn OAuth docs: https://learn.microsoft.com/en-us/linkedin/shared/authentication/authorization-code-flow
- LinkedIn Messaging API status: https://learn.microsoft.com/en-us/linkedin/compliance/
- GCS client library: `@google-cloud/storage`