❌ Multi-LLM Code Review — REQUEST CHANGES
Aggregated Score: 5.8/10 (median of 3 models) | 📊 Method: mean
Model Verdicts
Category Scores (Median)
Warning: Bounty Spec Compliance: PARTIAL — This submission partially meets the acceptance criteria. Review the issues above for gaps.
Summary
Sonnet 4.6: Review failed: No module named 'anthropic'
Issues
Suggestions
Contributor stats: 7 merged bounty PRs, rep score 61
🎯 Expected scope: ~29 files | This work spans app-wide API infrastructure, environment configuration, query/caching setup, loading
SolFoundry Multi-LLM Review Pipeline v3.1 — GPT-5.4 + Gemini 3.1 Pro + Grok 4 + Sonnet 4.6 + DeepSeek V3.2
Next Steps
Please address the issues above and push updated commits. The review will re-run automatically.
📝 Walkthrough
This PR integrates the frontend with real backend APIs by replacing mock data with React Query-based hooks and adding a centralized Axios service layer. On the backend, it adds new API endpoints for contributor dashboards (
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~70 minutes
Possibly related PRs
Suggested labels
🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 28
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
frontend/src/hooks/useBountyBoard.ts (1)
20-39: 🧹 Nitpick | 🔵 Trivial — Consider stronger typing for API response mapping.
The `mapApiBounty` function accepts `any` type, which loses type safety. Consider defining a `RawApiBounty` interface matching the backend response shape, or at minimum add runtime validation for critical fields like `id` and `reward_amount`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/hooks/useBountyBoard.ts` around lines 20 - 39, mapApiBounty currently accepts any; add a RawApiBounty TypeScript interface that mirrors the backend shape (include id, title, description, tier, required_skills/reward_amount/rewardAmount, deadline, status, submission_count, created_at, created_by, creator_type, github_issue_url, relevance_score, skill_match_count) and change the function signature to mapApiBounty(b: RawApiBounty). Inside mapApiBounty add minimal runtime validation for critical fields (e.g., ensure id exists and reward_amount or rewardAmount is a number) and either throw a descriptive error or fall back to safe defaults when validation fails; keep existing field mappings (id, reward_amount, required_skills, etc.) and update references like rewardAmount, submission_count, created_at to use the typed properties.
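The typing suggested in this prompt could look like the following sketch. The exact backend field set is an assumption based only on the fields the prompt names; the real shape should be confirmed against the API before adopting it:

```typescript
// Sketch of a typed mapApiBounty — field names beyond those listed in the
// review prompt are assumptions, not the project's confirmed API contract.
interface RawApiBounty {
  id: string;
  title: string;
  reward_amount?: number;
  rewardAmount?: number;
  required_skills?: string[];
  status?: string;
  submission_count?: number;
  created_at?: string;
}

interface Bounty {
  id: string;
  title: string;
  rewardAmount: number;
  requiredSkills: string[];
  status: string;
  submissionCount: number;
  createdAt: string;
}

function mapApiBounty(b: RawApiBounty): Bounty {
  // Minimal runtime validation for the critical fields.
  if (!b.id) throw new Error('Bounty payload missing id');
  const reward = b.reward_amount ?? b.rewardAmount;
  if (typeof reward !== 'number') {
    throw new Error(`Bounty ${b.id} has no numeric reward`);
  }
  return {
    id: b.id,
    title: b.title ?? '',
    rewardAmount: reward,
    requiredSkills: b.required_skills ?? [],
    status: b.status ?? 'open',
    submissionCount: b.submission_count ?? 0,
    createdAt: b.created_at ?? '',
  };
}
```

Whether to throw or fall back to safe defaults on a malformed payload is the main design choice here; throwing surfaces contract drift early, while defaults keep the UI rendering.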
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/contributors.py`:
- Around line 257-264: The DashboardData Pydantic model currently uses fields
stats, bounties, activities, notifications, earnings, linkedAccounts which do
not match the frontend DashboardData interface; update the DashboardData model
(class DashboardData) to expose the frontend-expected keys: summary,
activeBounties, earnings (use the EarningsSummary shape), recentActivity,
notifications, and settings (with linkedAccounts inside settings), or create a
response transformation function that maps existing internal fields to those
exact names and types before returning to the API client; ensure field names and
nested shapes (EarningsSummary, settings.linkedAccounts) match the frontend
types so the JSON contract is identical.
- Around line 257-264: The DashboardData Pydantic model references BaseModel but
BaseModel is not imported; add the missing import "from pydantic import
BaseModel" to the top import block so the class DashboardData can inherit from
BaseModel without raising a NameError.
- Around line 286-303: The dashboard currently returns hardcoded
bounties/activities (symbols: bounties, activities, stats,
contributor.total_earnings) — replace these mocks with real DB/indexer queries:
call a repository or ORM methods (e.g.,
get_bounties_for_contributor(contributor.id) and
get_activities_for_contributor(contributor.id, limit, cursor)) to fetch
contributor-specific records, map DB fields into the existing response shape
(id, title, reward, deadline, status, progress for bounties; id, type, title,
description, timestamp, amount for activities), and compute derived stats
(activeBounties, pendingPayouts, totalContributors) from query results or count
queries instead of hardcoded numbers; ensure you handle pagination, errors, and
an empty-result case before returning the assembled stats, bounties, and
activities.
- Around line 267-270: The file is missing the BaseModel import used by the
DashboardData pydantic model; add the import "from pydantic import BaseModel" to
the module imports so DashboardData (and any other pydantic models) can inherit
BaseModel without causing a runtime NameError — locate the DashboardData class
and the get_my_dashboard route to confirm the import is available at the top of
the file.
In `@backend/app/api/stats.py`:
- Around line 138-164: The TokenomicsData and TreasuryStats Pydantic models
(TokenomicsData, TreasuryStats) currently hold hardcoded mock values; replace
these placeholders by wiring the tokenomics and treasury endpoints to real data
sources (e.g., call the indexer/blockchain client used elsewhere in the service
or implement helper functions like fetch_tokenomics(), fetch_treasury_stats())
and populate the models from those calls, or if immediate integration isn't
possible, add clear documentation/comments and an explicit “placeholder”
flag/property on the response models so callers know the values are mocked;
update any endpoint handlers that return TokenomicsData/TreasuryStats to use the
new fetchers or to set the placeholder indicator.
- Line 156: The module-level assignment lastUpdated =
datetime.now(timezone.utc).isoformat() is evaluated at import time so the
timestamp is stale; instead compute the timestamp at request/response time.
Remove the module-level default and either (A) set lastUpdated using
datetime.now(timezone.utc).isoformat() right before building the response inside
the stats endpoint function (e.g., in the handler that returns the stats
dict/response), or (B) if lastUpdated is a model field (dataclass/Pydantic), use
a default_factory/Field(default_factory=...) to call
datetime.now(timezone.utc).isoformat() per instance; update references to
lastUpdated accordingly (look for the symbol lastUpdated and the stats response
builder in backend/app/api/stats.py).
In `@frontend/src/components/ContributorDashboard.tsx`:
- Around line 317-345: The Connect/Disconnect buttons in ContributorDashboard
aren't invoking the provided callbacks; update the button onClick handlers
inside the linkedAccounts.map to call the component props
onConnectAccount(account.type) when account.connected is false and
onDisconnectAccount(account.type) when account.connected is true (or pass the
whole account object if the parent expects that signature), ensure these props
(onConnectAccount, onDisconnectAccount) are accepted in the ContributorDashboard
props and typed accordingly, and guard calls with a null-check (e.g., if
(onConnectAccount) onConnectAccount(...)) so clicks trigger the parent handlers.
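The guarded-callback wiring described above can be isolated in a small handler factory. The type and prop names here follow the review prompt and may differ from the actual component:

```typescript
// Hypothetical helper illustrating the suggested wiring: route a click to
// onConnectAccount or onDisconnectAccount based on the account's state,
// guarding against missing props so a click never crashes.
type AccountType = 'github' | 'discord' | 'wallet';

interface LinkedAccount {
  type: AccountType;
  connected: boolean;
}

function makeAccountClickHandler(
  account: LinkedAccount,
  onConnectAccount?: (t: AccountType) => void,
  onDisconnectAccount?: (t: AccountType) => void,
): () => void {
  return () => {
    if (account.connected) onDisconnectAccount?.(account.type);
    else onConnectAccount?.(account.type);
  };
}
```

In the JSX this would be used as `onClick={makeAccountClickHandler(account, onConnectAccount, onDisconnectAccount)}` inside the `linkedAccounts.map` loop.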
- Around line 180-188: The notification handlers are stubs that only log to
console; implement handleMarkAsRead and handleMarkAllAsRead to call the
appropriate mutation(s) (or API) and update local state or trigger a refetch so
unreadNotifications and row styling update; replace the clickable divs with
accessible buttons or elements that handle keyboard events. Specifically, in
ContributorDashboard update handleMarkAsRead(id) to call the
MarkNotificationRead mutation (or notifications API) and then either update the
notifications array (mark matching notification.read = true) or call
refetchNotifications, and implement handleMarkAllAsRead to call the bulk
mark-read mutation and update state/refetch; remove console.log statements and
ensure the UI controls use <button> (or onKeyDown handlers) so they are keyboard
accessible.
- Around line 276-290: The Recent Activity card currently renders only the shell
when activities is empty; update the JSX in ContributorDashboard (the Recent
Activity block that maps over activities) to detect when activities.length === 0
and render an explicit empty state (e.g., an icon, a short message like "No
recent activity" and a hint) using the same styling pattern used for
bounties/notifications, while keeping the existing mapping for activities and
still calling formatRelativeTime(a.timestamp) for real items.
In `@frontend/src/components/CreatorDashboard.tsx`:
- Line 17: The hook useCreatorDashboard is being invoked with walletAddress ??
'' even when walletAddress is undefined, causing React Query to create an
unnecessary observer; instead, ensure the wallet check (walletAddress presence)
happens before calling useCreatorDashboard: either early-return/render a
loading/placeholder when walletAddress is falsy, or pass walletAddress directly
(allowing undefined) so the hook's enabled: !!walletAddress gating works
correctly; update call sites referencing useCreatorDashboard and any conditional
rendering around walletAddress to perform the check prior to invoking the hook.
- Around line 25-30: The code uses the unsafe any type in callbacks (e.g.,
bl.forEach callback parameter b, nested s for submissions, the filter callback
b, and the map callback bounty) which weakens type safety; define proper
interfaces (e.g., interface Submission { id: string; status:
'pending'|'disputed'|...; ... } and interface Bounty { id: string; submissions?:
Submission[]; ... }) or import existing types, then replace all any usages in
the bl.forEach callback (b), the submission loop (s), the filter callback (b)
and the map callback (bounty) with these concrete types and update
component/props signatures so TypeScript can validate field access like
b.submissions and s.status. Ensure optional fields keep ? where appropriate and
adjust any array typings (e.g., Submission[]) accordingly.
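A minimal sketch of the typed aggregation the prompt asks for — the field sets below are assumptions, and in the real codebase these interfaces should come from the project's shared types rather than being redefined locally:

```typescript
// Assumed shapes standing in for the project's shared Submission/Bounty types.
interface Submission {
  id: string;
  status: 'pending' | 'approved' | 'disputed' | 'rejected';
}

interface Bounty {
  id: string;
  submissions?: Submission[];
}

// Typed replacement for the untyped bl.forEach(...) aggregation: TypeScript
// can now validate access to b.submissions and s.status.
function countSubmissionsByStatus(
  bounties: Bounty[],
  status: Submission['status'],
): number {
  let count = 0;
  bounties.forEach((b) => {
    (b.submissions ?? []).forEach((s) => {
      if (s.status === status) count++;
    });
  });
  return count;
}
```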
In `@frontend/src/components/leaderboard/LeaderboardPage.tsx`:
- Around line 67-79: The error UI currently calls window.location.reload(),
causing a full app reload; change the Retry button to call the refetch function
returned from the useLeaderboard hook instead. Locate the error return block in
LeaderboardPage (the JSX with role="alert" and the Retry button) and replace
onClick={() => window.location.reload()} with onClick={() => refetch()} (ensure
the useLeaderboard hook result is destructured to obtain refetch where the
component calls useLeaderboard). If refetch is async, you may optionally handle
errors with .catch or try/catch around the call.
In `@frontend/src/components/tokenomics/TokenomicsPage.tsx`:
- Around line 50-68: In TokenomicsPage, the current early-return shows the
skeleton when loading or when translations (t/tr) are missing, which can mask an
existing error; change the conditional order so that the error check (error)
runs before the loading/translation check (loading || !t || !tr) — i.e., if
error is truthy return the error alert immediately, otherwise render the
skeleton when loading or translations are absent; update the early-return logic
around the symbols error, loading, t, and tr inside the TokenomicsPage component
accordingly.
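The error-before-loading ordering can be expressed as a small pure helper; the names here are illustrative, not the component's actual code:

```typescript
// Decide which view to render: the error check runs first so a loading
// skeleton or missing translations can never mask an existing failure.
type View = 'error' | 'skeleton' | 'content';

function pickView(
  error: unknown,
  loading: boolean,
  translationsReady: boolean,
): View {
  if (error) return 'error';
  if (loading || !translationsReady) return 'skeleton';
  return 'content';
}
```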
In `@frontend/src/hooks/useAgent.ts`:
- Around line 4-13: The hook returns untyped API data; add explicit TypeScript
types by defining an Agent (and optional paged wrapper, e.g., AgentsResponse)
interface and annotate useQuery and api.get with those generics (e.g.,
useQuery<Agent[] | AgentsResponse, Error> and api.get<Agent[] | AgentsResponse>
inside useAgents). Update the function signature (useAgents) to accept typed
filters and, if desired, export the hook return type as UseQueryResult<...> to
give consumers full type safety and IDE autocomplete while locating the change
around useAgents, useQuery, and api.get.
In `@frontend/src/hooks/useBounties.ts`:
- Around line 32-40: The useBounty hook returns the raw API payload (snake_case)
instead of the frontend Bounty shape—update the queryFn in useBounty to
transform the API response using the existing mapApiBounty function (same
transformation used by useBountyBoard/useBounties), i.e., call mapApiBounty on
the response data before returning so the hook consistently yields camelCase
Bounty objects.
- Around line 4-17: The local Bounty interface in useBounties.ts conflicts with
frontend/src/types/bounty.ts; remove the duplicate interface and import the
canonical Bounty type from types/bounty.ts, then update all references in
useBounties (e.g., any function signatures, state types) to use the imported
Bounty. If the API returns snake_case or different shapes, add/reuse a mapping
function (like mapApiBounty from useBountyBoard.ts) inside useBounties to
convert API responses to the canonical Bounty shape before storing/returning
them. Ensure fields/status/tier use the types defined in types/bounty.ts so no
local snake_case or differing unions remain.
In `@frontend/src/hooks/useBountyBoard.ts`:
- Around line 98-130: The three useQuery calls (the main query returning
searchData/searchLoading, the hot bounties query returning hotBounties, and the
recommendedBounties query) don’t expose error state; update each useQuery
invocation to destructure and capture isError and error (e.g.,
searchError/searchIsError, hotError/hotIsError,
recommendedError/recommendedIsError) and then include a consolidated error
return from the hook (for example return error: searchError || hotError ||
recommendedError and/or isError: searchIsError || hotIsError ||
recommendedIsError) so consumers can detect and render API failures; reference
the existing query keys ['bounties', filters, sortBy, page], ['bounties','hot'],
and ['bounties','recommended', filters.skills] and ensure you don’t change the
queryFn implementations (mapApiBounty can remain) while only adding the error
fields to the hook’s returned object.
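The consolidated error return suggested above can be factored into one small helper that the hook calls with the three queries' errors (names are illustrative):

```typescript
// Collapse several React Query error slots into a single error/isError pair
// for the hook's return value: the first non-null error wins.
function consolidateErrors(
  errors: Array<Error | null | undefined>,
): { isError: boolean; error: Error | null } {
  const error = errors.find((e): e is Error => e != null) ?? null;
  return { isError: error !== null, error };
}
```

The hook would then return `...consolidateErrors([searchError, hotError, recommendedError])` alongside its existing fields.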
- Around line 120-130: The recommended bounties query in useBountyBoard.ts
currently falls back to hardcoded skills ['react','typescript','rust'] when
filters.skills is empty; change the queryFn in the recommendedBounties block so
it does not inject those defaults: use the actual filters.skills (allow empty
array) and pass the skills param only when filters.skills.length > 0 (or pass an
empty string/omit the param) so the backend can decide how to handle “no
skills”; update the logic in the const { data } = await
api.get('/bounties/recommended', ...) call inside the queryFn of the
recommendedBounties constant to use filters.skills directly and conditionally
include params, and add a short comment noting that the backend handles empty
skill filters.
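The conditional-params logic reads cleanly as a tiny builder; the param name and comma-joined format are assumptions about the API, not confirmed behavior:

```typescript
// Only include the skills param when the user actually selected skills,
// letting the backend decide how to handle an empty filter.
function recommendedParams(skills: string[]): Record<string, string> {
  return skills.length > 0 ? { skills: skills.join(',') } : {};
}
```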
In `@frontend/src/hooks/useContributor.ts`:
- Around line 31-39: The contributor dashboard is cached under
['contributor-dashboard'] and must be invalidated on auth/session transitions;
update the code so that whenever auth state changes you call the React Query
client to clear/invalidate this key (e.g. use
queryClient.invalidateQueries(['contributor-dashboard']) or
queryClient.removeQueries(['contributor-dashboard'])). Specifically: add
invalidation in the API 401 error handler in the request interceptor, call
invalidation inside disconnect() in useWallet.ts (and any wallet disconnect
handler in WalletConnect), and add invalidation on wallet/account switch
handlers so the cached dashboard cannot be replayed to another user; reference
useContributorDashboard, disconnect(), and the API interceptor when applying the
changes.
In `@frontend/src/hooks/useTreasuryStats.ts`:
- Around line 30-35: The hook useTreasuryStats currently maps any query failure
to the fixed string 'Failed to load treasury stats'; change the returned error
to preserve and surface the original error (prefer returning errorTokenomics ??
errorTreasury or their .message) so consumers get diagnostic details, and also
add the refetchTokenomics and refetchTreasury functions (or aggregated refetch)
to the returned object so callers can retry failed requests; update references
to loadingTokenomics/loadingTreasury, tokenomics/treasury,
errorTokenomics/errorTreasury, and refetchTokenomics/refetchTreasury when
modifying the return value.
In `@frontend/src/pages/AgentMarketplacePage.tsx`:
- Line 30: Replace the loose any types by defining a concrete Agent interface
and using it across AgentMarketplacePage: create an Agent type matching the API
fields (id, name, role, availability, success_rate?, description?,
capabilities?), update the useState call (selected and setSelected) to use Agent
| null instead of any, type data?.items as Agent[] (or Agent[] | undefined) and
replace other explicit any annotations at references (e.g., where items are
mapped or passed around) with Agent so the component and its handlers use strong
typing.
- Line 98: Replace the hardcoded avatar "JS" with computed initials derived from
the agent's name inside the AgentMarketplacePage component: compute initials
from a.name (e.g., split on spaces, map to first letters, join, take up to 2
chars and uppercase) and use that value wherever the avatar element appears (the
div rendering the initials for each agent). Update both occurrences that
currently render "JS" so they display the computed initials (reference the agent
variable a used in the list render).
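One way to compute the initials described above (the exact truncation and casing rules are a reasonable guess, not a project requirement):

```typescript
// Derive avatar initials from a name: split on whitespace, take first
// letters, keep at most two, and uppercase the result.
function initials(name: string): string {
  return name
    .trim()
    .split(/\s+/)
    .filter(Boolean)
    .map((w) => w[0])
    .join('')
    .slice(0, 2)
    .toUpperCase();
}
```

In the list render this would replace the hardcoded string, e.g. `{initials(a.name)}`.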
- Around line 28-30: The state variable minRate (and its setter setMinRate) is
dead code—remove its declaration to clean up unused state; update the component
by deleting the lines that call useState for minRate and ensure nothing else
(e.g., useAgents call, filtering logic, or JSX) references minRate or
setMinRate; keep related states like availOnly and selected as-is.
- Line 103: The component is using the wrong field and an incompatible local
Status type; update agents typing to use the shared AgentProfile/AgentStatus
types (import AgentProfile and AgentStatus and change agents.map((a: any) =>
...) to typed items), replace usages of a.availability with a.status, and align
the local Status/SC mapping to AgentStatus values (replace 'working' with 'busy'
or add a 'busy' key in SC) so Badge receives a correctly typed value (use Badge
with a.status as AgentStatus). Also update the other occurrence near line ~140
similarly so both Badge usages and SC lookups use AgentStatus and typed
AgentProfile items.
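Aligning the local status map with the shared union might look like this sketch — the `AgentStatus` members and class strings are placeholders, since the real union lives in the project's shared types:

```typescript
// Exhaustive status-to-class map keyed by the shared AgentStatus union
// ('busy' instead of the old local 'working'); Record<AgentStatus, string>
// makes TypeScript reject any missing or extra key.
type AgentStatus = 'available' | 'busy' | 'offline';

const STATUS_CLASSES: Record<AgentStatus, string> = {
  available: 'badge-green',
  busy: 'badge-yellow',
  offline: 'badge-gray',
};

function statusClass(status: AgentStatus): string {
  return STATUS_CLASSES[status];
}
```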
In `@frontend/src/pages/ContributorProfilePage.tsx`:
- Around line 27-33: The badgeStats object is using fabricated values that can
mislead UI (mergedWithoutRevisionCount, isTopContributorThisMonth,
prSubmissionTimestampsUtc); replace these mock calculations with safe defaults
and guard UI consumers until a real endpoint exists: set
mergedWithoutRevisionCount and isTopContributorThisMonth to null or undefined
(instead of computed values) and leave prSubmissionTimestampsUtc undefined,
update any UI that reads badgeStats to hide badges when these fields are
null/undefined, and add a TODO comment and/or create a tracking issue for
implementing the backend badge-stats endpoint; search for badgeStats,
mergedWithoutRevisionCount, isTopContributorThisMonth, and
prSubmissionTimestampsUtc to apply these changes.
In `@frontend/src/services/api.ts`:
- Around line 13-19: The request interceptor currently reads tokens from
localStorage via localStorage.getItem('auth_token') and sets
config.headers.Authorization inside api.interceptors.request.use, which exposes
the token to XSS; to fix, either switch to httpOnly cookies (remove localStorage
usage, stop attaching Authorization header manually, and enable sending
credentials by setting axios to send cookies such as
api.defaults.withCredentials = true and ensure server reads the cookie), or if
you must keep localStorage, add explicit XSS mitigations (strong CSP, input
sanitization, and same-site cookie fallback) and centralize token access behind
a safe helper so config.headers.Authorization = `Bearer ${token}` only happens
through that helper; update api.interceptors.request.use and any token helper
function names accordingly.
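The "centralize token access behind a safe helper" option can be sketched as below. Injecting the storage reader is an illustrative choice so a later switch to httpOnly cookies only touches the caller; the function names are hypothetical:

```typescript
// Build the Authorization header in exactly one place; the token source is
// injected so the storage mechanism can change without touching callers.
function getAuthHeader(
  readToken: () => string | null,
): Record<string, string> {
  const token = readToken();
  return token ? { Authorization: `Bearer ${token}` } : {};
}
```

The interceptor would then do `Object.assign(config.headers, getAuthHeader(() => localStorage.getItem('auth_token')))` rather than touching storage directly.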
- Around line 5-10: The axios instance created as api via axios.create(...) has
no timeout and can hang; update the axios.create call for api to include a
timeout property (e.g., timeout: 10000) so all requests from the api instance
time out after a reasonable period, and ensure any calling code handles timeout
errors from this instance (check code that uses api.request / api.get / api.post
to handle AxiosError with code 'ECONNABORTED' if necessary).
- Around line 24-31: The response error handler that currently clears the token
(localStorage.removeItem('auth_token')) on 401 should also surface feedback and
navigate the user to login: update the anonymous error handler used in the axios
response interceptor (the (error) => { ... } function) to after removing the
token (1) trigger a user-visible notification/toast (using the app's
notification service or emit an event) explaining the session expired, and (2)
programmatically redirect to the login page (use the app router e.g.,
router.push('/login') or window.location.href as appropriate). Ensure the
notification and redirect are only executed for 401 responses and keep returning
Promise.reject(error) afterwards.
---
Outside diff comments:
In `@frontend/src/hooks/useBountyBoard.ts`:
- Around line 20-39: mapApiBounty currently accepts any; add a RawApiBounty
TypeScript interface that mirrors the backend shape (include id, title,
description, tier, required_skills/reward_amount/rewardAmount, deadline, status,
submission_count, created_at, created_by, creator_type, github_issue_url,
relevance_score, skill_match_count) and change the function signature to
mapApiBounty(b: RawApiBounty). Inside mapApiBounty add minimal runtime
validation for critical fields (e.g., ensure id exists and reward_amount or
rewardAmount is a number) and either throw a descriptive error or fall back to
safe defaults when validation fails; keep existing field mappings (id,
reward_amount, required_skills, etc.) and update references like rewardAmount,
submission_count, created_at to use the typed properties.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: ba6dc445-8c12-425f-b39e-bb8c9f09a7bd
📒 Files selected for processing (18)
backend/app/api/contributors.py
backend/app/api/stats.py
frontend/package.json
frontend/src/App.tsx
frontend/src/components/ContributorDashboard.tsx
frontend/src/components/CreatorDashboard.tsx
frontend/src/components/leaderboard/LeaderboardPage.tsx
frontend/src/components/tokenomics/TokenomicsPage.tsx
frontend/src/hooks/useAgent.ts
frontend/src/hooks/useBounties.ts
frontend/src/hooks/useBountyBoard.ts
frontend/src/hooks/useContributor.ts
frontend/src/hooks/useLeaderboard.ts
frontend/src/hooks/useTreasuryStats.ts
frontend/src/pages/AgentMarketplacePage.tsx
frontend/src/pages/AgentProfilePage.tsx
frontend/src/pages/ContributorProfilePage.tsx
frontend/src/services/api.ts
class DashboardData(BaseModel):
    """Payload for the contributor dashboard."""
    stats: dict
    bounties: list
    activities: list
    notifications: list = []
    earnings: list = []
    linkedAccounts: list = []
Backend DashboardData schema mismatches frontend DashboardData interface.
The backend returns stats, bounties, activities, notifications, earnings, linkedAccounts, but the frontend interface in frontend/src/types/dashboard.ts expects summary, activeBounties, earnings (as EarningsSummary), recentActivity, notifications, and settings (containing linkedAccounts). This contract mismatch will cause runtime errors or incorrect UI rendering when the frontend consumes this endpoint.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/contributors.py` around lines 257 - 264, The DashboardData
Pydantic model currently uses fields stats, bounties, activities, notifications,
earnings, linkedAccounts which do not match the frontend DashboardData
interface; update the DashboardData model (class DashboardData) to expose the
frontend-expected keys: summary, activeBounties, earnings (use the
EarningsSummary shape), recentActivity, notifications, and settings (with
linkedAccounts inside settings), or create a response transformation function
that maps existing internal fields to those exact names and types before
returning to the API client; ensure field names and nested shapes
(EarningsSummary, settings.linkedAccounts) match the frontend types so the JSON
contract is identical.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify BaseModel is not imported in contributors.py
rg -n "from pydantic import" backend/app/api/contributors.py
Repository: SolFoundry/solfoundry
Length of output: 47
🏁 Script executed:
head -25 backend/app/api/contributors.py
Length of output: 809
🏁 Script executed:
rg -n "from app.models" backend/app/api/contributors.py
Length of output: 143
🏁 Script executed:
cat -n backend/app/models/contributor.py | head -30
Length of output: 1023
🏁 Script executed:
sed -n '250,270p' backend/app/api/contributors.py
Length of output: 714
🏁 Script executed:
rg -n "class DashboardData" backend/app/api/contributors.py -A 10
Length of output: 389
🏁 Script executed:
cat backend/app/models/contributor.py | grep -E "^__all__|class (ContributorCreate|ContributorListResponse|ContributorResponse|ContributorUpdate|BaseModel)"
Length of output: 228
Missing BaseModel import will cause runtime NameError.
The DashboardData class at line 257 inherits from BaseModel, but pydantic.BaseModel is never imported in backend/app/api/contributors.py. The file imports specific classes from app.models.contributor (lines 15-20) but not BaseModel. This will cause a NameError when the module loads.
Add to the import section: from pydantic import BaseModel
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/contributors.py` around lines 257 - 264, The DashboardData
Pydantic model references BaseModel but BaseModel is not imported; add the
missing import "from pydantic import BaseModel" to the top import block so the
class DashboardData can inherit from BaseModel without raising a NameError.
@router.get("/me/dashboard", response_model=DashboardData)
async def get_my_dashboard(
    user_id: str = Depends(get_current_user_id)
) -> DashboardData:
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify route ordering in contributors.py - /me/dashboard should come before /{contributor_id}
rg -n "@router\.(get|post|patch|delete)" backend/app/api/contributors.py | head -20
Length of output: 663
🏁 Script executed:
# Check the context around the /{contributor_id} route and /me/dashboard route
sed -n '105,115p' backend/app/api/contributors.py
echo "---"
sed -n '263,275p' backend/app/api/contributors.py
Length of output: 712
🏁 Script executed:
# Check if there are any tests for the /me/dashboard endpoint
find . -type f -name "*.py" | xargs grep -l "me/dashboard\|/me.*dashboard" | head -10
Length of output: 98
🏁 Script executed:
# Check the router setup and any middleware
head -50 backend/app/api/contributors.py
Length of output: 1656
🏁 Script executed:
# Look for any tests that actually test the /me/dashboard endpoint
find . -type f -name "test_*.py" -o -name "*_test.py" | xargs grep -l "dashboard\|me.*dashboard" 2>/dev/null
Length of output: 143
🏁 Script executed:
# Check the complete route definitions to understand path matching context
sed -n '100,280p' backend/app/api/contributors.py | cat -n
Length of output: 6989
🏁 Script executed:
# Verify how the router is initialized and if there are any special configs
grep -n "router.*=" backend/app/api/contributors.py | head -5
Length of output: 420
🏁 Script executed:
# Check if there are any tests specifically for the /me/dashboard route
grep -r "me/dashboard" backend/tests/ --include="*.py"
Length of output: 47
🏁 Script executed:
# Verify the exact line numbers for the routes in question
sed -n '105,115p' backend/app/api/contributors.py | cat -n
echo "---"
sed -n '165,175p' backend/app/api/contributors.py | cat -n
Length of output: 798
🏁 Script executed:
# Check if this endpoint is actually being tested anywhere
grep -r "get_my_dashboard" backend/tests/ --include="*.py"
Length of output: 47
🏁 Script executed:
# Check if there's a test file for the contributors API
find backend/tests -name "*contributor*" -type f
Length of output: 99
🏁 Script executed:
# Look for any test that makes a request to /{contributor_id}
grep -r "/{contributor_id}" backend/ --include="*.py" -B 2 -A 2
Length of output: 1904
🏁 Script executed:
# Search for actual integration tests or any tests that would exercise these routes
find backend/tests -name "test_*.py" | head -5 | xargs cat | grep -A 5 "contributors"
Length of output: 47
🏁 Script executed:
# Check if DashboardData model is properly imported where the route is defined
sed -n '1,30p' backend/app/api/contributors.py
Length of output: 943
🏁 Script executed:
# Check the test file to see if /me/dashboard is being tested
cat backend/tests/test_contributors.py
Length of output: 11770
🏁 Script executed:
# Verify the exact content around the DashboardData and /me/dashboard route
sed -n '155,180p' backend/app/api/contributors.py
Length of output: 803
🏁 Script executed:
# Check if DashboardData is properly imported/defined in the file
grep -n "class DashboardData\|from.*DashboardData\|import.*DashboardData" backend/app/api/contributors.py
Length of output: 100
🏁 Script executed:
# Look for any base model import
grep -n "BaseModel\|from pydantic" backend/app/api/contributors.py | head -5
Length of output: 100
🏁 Script executed:
# Check if there are any router configuration or middleware that might affect route matching
grep -rn "router.*=" backend/app/ --include="*.py" | grep -i contrib
Length of output: 1064
🏁 Script executed:
# Verify the actual line numbers more carefully
nl -ba backend/app/api/contributors.py | sed -n '105,115p'
echo "---SEPARATOR---"
nl -ba backend/app/api/contributors.py | sed -n '263,273p'
Length of output: 179
🏁 Script executed:
# Check if there are any integration tests or acceptance tests that would test the full path
find backend/tests -name "*.py" -type f -exec grep -l "me/dashboard\|/api/contributors/me" {} \;
Length of output: 47
🏁 Script executed:
# Look for any documentation or API spec that documents this endpoint
find . -name "*.md" -o -name "*.openapi*" -o -name "*.swagger*" | xargs grep -l "me/dashboard" 2>/dev/null | head -5
Length of output: 47
🏁 Script executed:
# Get the complete file and check the exact context around both routes
sed -n '100,120p' backend/app/api/contributors.py
echo "---DIVIDER---"
sed -n '250,275p' backend/app/api/contributors.py
Length of output: 1438
🏁 Script executed:
# Look for how the main app is configured and if there's anything special about routing
sed -n '1,30p' backend/app/main.py
Length of output: 1461
🏁 Script executed:
# Search for any existing route tests that would test the /me pattern
grep -n "test.*me\|/me" backend/tests/test_contributors.py
Length of output: 286
🏁 Script executed:
# Verify the actual behavior by checking if there's any documentation or comments about route matching
grep -B 5 -A 5 "/me/dashboard" backend/app/api/contributors.py
Length of output: 381
🏁 Script executed:
# Check if this route is actually working or if there are any known issues
grep -r "dashboard" backend/ --include="*.py" | grep -i "test\|issue\|fixme\|todo" | head -10
Length of output: 256
🏁 Script executed:
# Verify by creating a minimal test to check path matching behavior
python3 << 'EOF'
# Simulate FastAPI route matching behavior
paths_to_test = [
"/contributors/me",
"/contributors/me/dashboard",
"/contributors/alice",
"/contributors/some-uuid-123",
]
# Pattern: /{contributor_id} matches single segment only
single_segment_pattern = r"^/contributors/([^/]+)$"
# Pattern: /me/dashboard matches two segments
multi_segment_pattern = r"^/contributors/me/dashboard$"
import re
print("Single segment pattern /{contributor_id}:")
for path in paths_to_test:
match = re.match(single_segment_pattern, path)
print(f" {path}: {'MATCH' if match else 'NO MATCH'}")
print("\nMulti segment pattern /me/dashboard:")
for path in paths_to_test:
match = re.match(multi_segment_pattern, path)
print(f" {path}: {'MATCH' if match else 'NO MATCH'}")
EOF
Length of output: 412
🏁 Script executed:
# Check if BaseModel is imported at the top of the file
head -30 backend/app/api/contributors.py | grep -i "pydantic\|basemodel"
Length of output: 47
🏁 Script executed:
# Get the complete imports section
sed -n '1,50p' backend/app/api/contributors.py
Length of output: 1656
🏁 Script executed:
# Verify if the endpoint is incomplete or has syntax errors
sed -n '257,290p' backend/app/api/contributors.py
Length of output: 1131
🏁 Script executed:
# Check if BaseModel is imported anywhere in the file (maybe as part of a wildcard import)
grep -n "from pydantic\|import.*BaseModel\|from.*models" backend/app/api/contributors.py
Length of output: 143
🏁 Script executed:
# Check the models.contributor module to see if BaseModel is re-exported
grep -n "BaseModel" backend/app/models/contributor.py | head -5
Length of output: 263
🏁 Script executed:
# Check what's actually imported from app.models.contributor
sed -n '15,23p' backend/app/api/contributors.py
Length of output: 295
🏁 Script executed:
# Verify if BaseModel is missing from the imports in contributors.py
rg "^from pydantic import|^import.*BaseModel" backend/app/api/contributors.py
Length of output: 47
🏁 Script executed:
# Look at the complete DashboardData definition and the imports just before it
sed -n '245,280p' backend/app/api/contributors.py
Length of output: 1164
🏁 Script executed:
# Final verification: Check if there's a __init__.py or any wildcard imports that could export BaseModel
grep -r "from pydantic import.*\*\|__all__.*BaseModel" backend/app/ --include="*.py" | head -5
Length of output: 47
🏁 Script executed:
# Check if the file can even be imported (syntax check)
python3 -c "import sys; sys.path.insert(0, 'backend'); from app.api.contributors import router; print('Import successful')" 2>&1 | head -20
Length of output: 332
🏁 Script executed:
# Check the exact imports more carefully and see if BaseModel might be available indirectly
sed -n '1,30p' backend/app/api/contributors.py | cat -n
Length of output: 1153
🏁 Script executed:
# Verify the complete import section including any missed lines
sed -n '1,60p' backend/app/api/contributors.py | grep -E "^from|^import"
Length of output: 456
Missing `BaseModel` import causes runtime error.
The `DashboardData` class at line 257 uses `BaseModel`, which is not imported. Add `from pydantic import BaseModel` to the imports at the top of the file.
Note: The route ordering concern is unfounded. FastAPI's path matching correctly distinguishes between /{contributor_id} (single-segment route) and /me/dashboard (two-segment literal route), so these patterns do not conflict regardless of definition order.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/contributors.py` around lines 267 - 270, The file is missing
the BaseModel import used by the DashboardData pydantic model; add the import
"from pydantic import BaseModel" to the module imports so DashboardData (and any
other pydantic models) can inherit BaseModel without causing a runtime NameError
— locate the DashboardData class and the get_my_dashboard route to confirm the
import is available at the top of the file.
```python
# In a real app, query database/indexers for these lists
stats = {
    "totalEarned": contributor.total_earnings,
    "activeBounties": 2,  # Mocked count
    "pendingPayouts": 0,
    "reputationRank": 15,
    "totalContributors": 120,
}

bounties = [
    {"id": "b1", "title": "Implement API Rate Limiting", "reward": 500, "deadline": "2026-03-30", "status": "in_progress", "progress": 75},
    {"id": "b2", "title": "Refactor Frontend Hooks", "reward": 300, "deadline": "2026-03-29", "status": "claimed", "progress": 10},
]

activities = [
    {"id": "a1", "type": "payout", "title": "Payout Received", "description": "Earned 500 $FNDRY for rate limiter", "timestamp": "2026-03-21T10:00:00Z", "amount": 500},
    {"id": "a2", "type": "pr_submitted", "title": "PR Submitted", "description": "Submitted PR for security middleware", "timestamp": "2026-03-21T09:00:00Z"},
]
```
Dashboard endpoint returns hardcoded mock data.
The bounties and activities fields return hardcoded static data rather than querying the database. This contradicts the PR objective of replacing mock data with real API integration. The comment on line 286 acknowledges this ("In a real app, query database/indexers"), but for a production-ready integration, these should fetch actual contributor-specific data.
Do you want me to help design the database queries to fetch real bounty and activity data for the authenticated contributor?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/contributors.py` around lines 286 - 303, The dashboard
currently returns hardcoded bounties/activities (symbols: bounties, activities,
stats, contributor.total_earnings) — replace these mocks with real DB/indexer
queries: call a repository or ORM methods (e.g.,
get_bounties_for_contributor(contributor.id) and
get_activities_for_contributor(contributor.id, limit, cursor)) to fetch
contributor-specific records, map DB fields into the existing response shape
(id, title, reward, deadline, status, progress for bounties; id, type, title,
description, timestamp, amount for activities), and compute derived stats
(activeBounties, pendingPayouts, totalContributors) from query results or count
queries instead of hardcoded numbers; ensure you handle pagination, errors, and
an empty-result case before returning the assembled stats, bounties, and
activities.
```python
class TokenomicsData(BaseModel):
    """Tokenomics data response."""
    tokenName: str = "$FNDRY"
    tokenCA: str = "Fndry...1H7"
    totalSupply: int = 1_000_000_000
    circulatingSupply: int = 420_000_000
    treasuryHoldings: int = 250_000_000
    totalDistributed: int = 150_000_000
    totalBuybacks: int = 50_000_000
    totalBurned: int = 30_000_000
    feeRevenueSol: float = 1245.50
    distributionBreakdown: Dict[str, int] = {
        "Circulating": 420_000_000,
        "Treasury": 250_000_000,
        "Staking": 200_000_000,
        "Team_Vested": 100_000_000,
        "Burned": 30_000_000,
    }
    lastUpdated: str = datetime.now(timezone.utc).isoformat()


class TreasuryStats(BaseModel):
    """Treasury stats response."""
    solBalance: float = 1245.50
    fndryBalance: int = 250_000_000
    totalPayouts: int = 1240
    treasuryWallet: str = "6v...m7p"
```
🧹 Nitpick | 🔵 Trivial
Tokenomics and treasury endpoints return hardcoded placeholder data.
Similar to the dashboard endpoint, these endpoints return static mock data rather than querying real blockchain/indexer sources. The inline comments acknowledge this limitation. For a complete API integration, consider implementing actual data fetching or clearly documenting these as placeholder endpoints that require future implementation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/stats.py` around lines 138 - 164, The TokenomicsData and
TreasuryStats Pydantic models (TokenomicsData, TreasuryStats) currently hold
hardcoded mock values; replace these placeholders by wiring the tokenomics and
treasury endpoints to real data sources (e.g., call the indexer/blockchain
client used elsewhere in the service or implement helper functions like
fetch_tokenomics(), fetch_treasury_stats()) and populate the models from those
calls, or if immediate integration isn't possible, add clear
documentation/comments and an explicit “placeholder” flag/property on the
response models so callers know the values are mocked; update any endpoint
handlers that return TokenomicsData/TreasuryStats to use the new fetchers or to
set the placeholder indicator.
```python
        "Team_Vested": 100_000_000,
        "Burned": 30_000_000,
    }
    lastUpdated: str = datetime.now(timezone.utc).isoformat()
```
lastUpdated is evaluated at module load time, not request time.
datetime.now(timezone.utc).isoformat() as a default value is evaluated once when the module loads, not when each request is made. All responses will show the same stale timestamp until the server restarts.
Proposed fix
```diff
 class TokenomicsData(BaseModel):
     """Tokenomics data response."""
     tokenName: str = "$FNDRY"
     # ... other fields ...
-    lastUpdated: str = datetime.now(timezone.utc).isoformat()
+    lastUpdated: Optional[str] = None

 @router.get("/stats/tokenomics", response_model=TokenomicsData)
 async def get_tokenomics() -> TokenomicsData:
     """Get tokenomics statistics for $FNDRY."""
-    return TokenomicsData()
+    return TokenomicsData(lastUpdated=datetime.now(timezone.utc).isoformat())
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/api/stats.py` at line 156, The module-level assignment
lastUpdated = datetime.now(timezone.utc).isoformat() is evaluated at import time
so the timestamp is stale; instead compute the timestamp at request/response
time. Remove the module-level default and either (A) set lastUpdated using
datetime.now(timezone.utc).isoformat() right before building the response inside
the stats endpoint function (e.g., in the handler that returns the stats
dict/response), or (B) if lastUpdated is a model field (dataclass/Pydantic), use
a default_factory/Field(default_factory=...) to call
datetime.now(timezone.utc).isoformat() per instance; update references to
lastUpdated accordingly (look for the symbol lastUpdated and the stats response
builder in backend/app/api/stats.py).
```tsx
<p className="font-medium text-white truncate">{a.name}</p>
<p className="text-xs text-gray-400 capitalize">{a.role}</p>
</div>
<Badge status={a.availability as Status} />
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if there's an Agent type definition in the codebase
ast-grep --pattern 'interface Agent {
$$$
}'
# Also check for type definitions in hooks/useAgent.ts
cat frontend/src/hooks/useAgent.ts 2>/dev/null || echo "File not found"
Length of output: 737
🏁 Script executed:
# Find the Agent type definition
rg "interface Agent|type Agent" frontend/src -A 10 --type-list | head -100
Length of output: 454
🏁 Script executed:
# Find the Status type definition
rg "type Status|enum Status" frontend/src -B 2 -A 5 --type-list | head -50
Length of output: 454
🏁 Script executed:
# Look at AgentMarketplacePage.tsx around line 103 with context
wc -l frontend/src/pages/AgentMarketplacePage.tsx && sed -n '95,115p' frontend/src/pages/AgentMarketplacePage.tsx
Length of output: 1580
🏁 Script executed:
# Search for Badge component definition to understand what it expects
rg "Badge.*Status|interface Badge|type.*Badge" frontend/src -A 5 | head -80
Length of output: 5321
🏁 Script executed:
# Search for Status type definition and SC object
rg "type Status|interface Status|const Status" frontend/src -B 2 -A 3
Length of output: 635
🏁 Script executed:
# Look at the full AgentMarketplacePage.tsx to find Status import/definition and SC object
head -50 frontend/src/pages/AgentMarketplacePage.tsx
Length of output: 2171
🏁 Script executed:
# Search for SC object definition
rg "const SC|SC =" frontend/src -B 2 -A 5
Length of output: 1586
🏁 Script executed:
# Search for Agent interface/type definition
rg "interface Agent|type Agent" frontend/src -B 2 -A 15
Length of output: 7204
🏁 Script executed:
# Also check if there are any type definitions in services directory
fd -e ts frontend/src/services -exec grep -l "Agent\|availability" {} \;
Length of output: 319
🏁 Script executed:
# Check the API service or any mock data to understand the availability/status field
rg "availability|\.status" frontend/src/pages/AgentMarketplacePage.tsx -B 2 -A 2
Length of output: 769
🏁 Script executed:
# Check if there's any API response type definition or mock data
fd -type f -name "*api*" -o -name "*mock*" frontend/src | grep -E "\.(ts|tsx)$" | head -10
Length of output: 235
Critical type mismatch: `availability` field does not match API type and `Status` enum is incomplete.
The code accesses `a.availability` and casts it as `Status` (`'available' | 'working' | 'offline'`), but the actual `AgentStatus` type from `frontend/src/types/agent.ts` is `'available' | 'busy' | 'offline'`. This mismatch means:
- **Field name discrepancy:** The `AgentProfile` interface defines a `status` field, not `availability`. The code appears to use the wrong field name.
- **Incompatible status values:** `AgentStatus` includes `'busy'` but the local `Status` type uses `'working'`. If the API returns `'busy'`, it will not match any key in the `SC` object, causing `SC['busy']` to be undefined and breaking the Badge styling.
- **Untyped data:** `agents.map((a: any)` lacks explicit typing, making these issues undetectable at compile time.
Both instances (line 103 and line ~140) are affected. Align the local Status type with AgentStatus from the shared type definition, or verify the actual field name and values returned by the API endpoint.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/pages/AgentMarketplacePage.tsx` at line 103, The component is
using the wrong field and an incompatible local Status type; update agents
typing to use the shared AgentProfile/AgentStatus types (import AgentProfile and
AgentStatus and change agents.map((a: any) => ...) to typed items), replace
usages of a.availability with a.status, and align the local Status/SC mapping to
AgentStatus values (replace 'working' with 'busy' or add a 'busy' key in SC) so
Badge receives a correctly typed value (use Badge with a.status as AgentStatus).
Also update the other occurrence near line ~140 similarly so both Badge usages
and SC lookups use AgentStatus and typed AgentProfile items.
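A minimal sketch of the alignment the fix asks for, assuming the shared union is `'available' | 'busy' | 'offline'` as reported from `frontend/src/types/agent.ts`; the Tailwind class strings are illustrative, not the project's actual `SC` values:

```typescript
// Hypothetical mirror of the shared AgentStatus union.
type AgentStatus = 'available' | 'busy' | 'offline';

// Record<AgentStatus, string> makes a missing key a compile-time error,
// so adding a status to the union forces this map to be updated too.
const SC: Record<AgentStatus, string> = {
  available: 'bg-green-500/10 text-green-400',
  busy: 'bg-yellow-500/10 text-yellow-400',
  offline: 'bg-gray-500/10 text-gray-400',
};

function statusClass(status: AgentStatus): string {
  // No `as` cast needed once the parameter is typed with the shared union.
  return SC[status];
}
```

With this shape, a backend value of `'busy'` resolves to a defined class instead of `undefined`, and the `a.availability as Status` cast disappears.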
```tsx
// Map backend stats to the expected badge stats interface
const badgeStats = {
  mergedPrCount: contributor.total_bounties,
  mergedWithoutRevisionCount: Math.floor(contributor.total_bounties * 0.6), // Mock ratio for now
  isTopContributorThisMonth: contributor.reputation_score > 500,
  prSubmissionTimestampsUtc: [], // Would need a separate endpoint or enrichment
};
```
Badge stats use mock calculations that may mislead users.
The badgeStats object uses arbitrary calculations that don't reflect real data:
- `mergedWithoutRevisionCount`: 60% of `total_bounties` (line 30) — this ratio is fabricated
- `isTopContributorThisMonth`: `reputation_score > 500` (line 31) — arbitrary threshold
- `prSubmissionTimestampsUtc`: empty array (line 32) — badges depending on this won't work
These values drive badge display, potentially showing unearned badges. Consider:
- Adding a TODO/tracking issue for implementing the backend endpoint
- Hiding badges that rely on unavailable data rather than using mock values
Do you want me to open an issue to track implementing the badge stats endpoint?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/pages/ContributorProfilePage.tsx` around lines 27 - 33, The
badgeStats object is using fabricated values that can mislead UI
(mergedWithoutRevisionCount, isTopContributorThisMonth,
prSubmissionTimestampsUtc); replace these mock calculations with safe defaults
and guard UI consumers until a real endpoint exists: set
mergedWithoutRevisionCount and isTopContributorThisMonth to null or undefined
(instead of computed values) and leave prSubmissionTimestampsUtc undefined,
update any UI that reads badgeStats to hide badges when these fields are
null/undefined, and add a TODO comment and/or create a tracking issue for
implementing the backend badge-stats endpoint; search for badgeStats,
mergedWithoutRevisionCount, isTopContributorThisMonth, and
prSubmissionTimestampsUtc to apply these changes.
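One way to express the "hide badges that rely on unavailable data" suggestion — a sketch only, with field names taken from the review comment and made-up award thresholds:

```typescript
// Fields the backend does not provide yet stay null instead of being faked.
interface BadgeStats {
  mergedPrCount: number;
  mergedWithoutRevisionCount: number | null;
  isTopContributorThisMonth: boolean | null;
  prSubmissionTimestampsUtc: string[] | null;
}

function visibleBadges(stats: BadgeStats): string[] {
  const badges: string[] = [];
  // Thresholds below are illustrative, not the app's real badge rules.
  if (stats.mergedPrCount >= 5) badges.push('5+ merged PRs');
  // Only award badges whose underlying data actually exists.
  if (stats.mergedWithoutRevisionCount !== null && stats.mergedWithoutRevisionCount >= 3) {
    badges.push('Clean merges');
  }
  if (stats.isTopContributorThisMonth === true) {
    badges.push('Top contributor');
  }
  return badges;
}
```

The UI then renders only `visibleBadges(stats)`, so mock-dependent badges simply stay hidden until the backend endpoint lands.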
```ts
const api = axios.create({
  baseURL,
  headers: {
    'Content-Type': 'application/json',
  },
});
```
🧹 Nitpick | 🔵 Trivial
Consider adding request timeout.
No timeout is configured, so requests could hang indefinitely on network issues. Add a reasonable timeout to prevent UI blocking.
Proposed enhancement
```diff
 const api = axios.create({
   baseURL,
+  timeout: 30000, // 30 seconds
   headers: {
     'Content-Type': 'application/json',
   },
 });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/services/api.ts` around lines 5 - 10, The axios instance created
as api via axios.create(...) has no timeout and can hang; update the
axios.create call for api to include a timeout property (e.g., timeout: 10000)
so all requests from the api instance time out after a reasonable period, and
ensure any calling code handles timeout errors from this instance (check code
that uses api.request / api.get / api.post to handle AxiosError with code
'ECONNABORTED' if necessary).
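The axios `timeout` option is the direct fix; as a library-free illustration of the same behaviour, any promise can be raced against a deadline (a sketch only — `withTimeout` is a hypothetical helper, not part of the codebase):

```typescript
// Rejects if `p` has not settled within `ms` milliseconds; otherwise
// settles with p's result and cancels the timer.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Request timed out after ${ms}ms`)),
      ms,
    );
    p.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}
```

With axios itself, the equivalent is `timeout: 30000` on `axios.create`, and callers catch the error whose `code` is `'ECONNABORTED'`.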
```ts
api.interceptors.request.use((config) => {
  const token = localStorage.getItem('auth_token');
  if (token) {
    config.headers.Authorization = `Bearer ${token}`;
  }
  return config;
});
```
🧹 Nitpick | 🔵 Trivial
Token storage in localStorage is acceptable but has XSS implications.
Using localStorage for auth tokens is a common pattern but means tokens are accessible to any JavaScript on the page. Ensure the application has robust XSS protections. For higher security requirements, consider httpOnly cookies instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/services/api.ts` around lines 13 - 19, The request interceptor
currently reads tokens from localStorage via localStorage.getItem('auth_token')
and sets config.headers.Authorization inside api.interceptors.request.use, which
exposes the token to XSS; to fix, either switch to httpOnly cookies (remove
localStorage usage, stop attaching Authorization header manually, and enable
sending credentials by setting axios to send cookies such as
api.defaults.withCredentials = true and ensure server reads the cookie), or if
you must keep localStorage, add explicit XSS mitigations (strong CSP, input
sanitization, and same-site cookie fallback) and centralize token access behind
a safe helper so config.headers.Authorization = `Bearer ${token}` only happens
through that helper; update api.interceptors.request.use and any token helper
function names accordingly.
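A sketch of the "centralize token access behind a safe helper" idea; `TokenStore` is a hypothetical abstraction so the storage backend (localStorage today, something harder to reach from injected scripts later) can be swapped in one place:

```typescript
// Minimal storage contract; browser localStorage satisfies it structurally.
interface TokenStore {
  getItem(key: string): string | null;
  removeItem(key: string): void;
}

const AUTH_KEY = 'auth_token';

// The only place that knows how the Authorization header is built.
function authHeader(store: TokenStore): Record<string, string> {
  const token = store.getItem(AUTH_KEY);
  return token ? { Authorization: `Bearer ${token}` } : {};
}
```

The request interceptor would then spread `authHeader(localStorage)` into `config.headers`; moving to httpOnly cookies means deleting this helper and setting `withCredentials: true` instead.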
```ts
(error) => {
  // Handle global errors (e.g., 401 unauthorized)
  if (error.response?.status === 401) {
    // Potentially clear token and redirect to login
    localStorage.removeItem('auth_token');
  }
  return Promise.reject(error);
}
```
🧹 Nitpick | 🔵 Trivial
401 handler clears token but provides no user feedback.
When a 401 is received, the token is cleared but the user isn't redirected to login or shown any notification. This could leave users confused about why their actions fail. Consider adding a redirect or event emission.
Example enhancement
```diff
 if (error.response?.status === 401) {
   localStorage.removeItem('auth_token');
+  // Optionally redirect to login or emit an event
+  // window.location.href = '/login';
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/services/api.ts` around lines 24 - 31, The response error
handler that currently clears the token (localStorage.removeItem('auth_token'))
on 401 should also surface feedback and navigate the user to login: update the
anonymous error handler used in the axios response interceptor (the (error) => {
... } function) to after removing the token (1) trigger a user-visible
notification/toast (using the app's notification service or emit an event)
explaining the session expired, and (2) programmatically redirect to the login
page (use the app router e.g., router.push('/login') or window.location.href as
appropriate). Ensure the notification and redirect are only executed for 401
responses and keep returning Promise.reject(error) afterwards.
Description
This PR replaces all static mock data with real backend API integration across the entire frontend. It implements a robust data fetching layer using React Query and Axios, ensuring proper loading states, error handling, and caching.
Closes #185
Solana Wallet for Payout
Wallet: 4QhseKvBuaCQhdkP248iXoUxohPzVC5m8pE9hAv4nMYw
Type of Change
Checklist
- No `console.log` or debugging code left behind

Testing
Additional Notes
- The `VITE_API_URL` environment variable should be set in the frontend `.env` file for production deployment.
- Skeleton components from `common/Skeleton.tsx` were used to improve perceived performance.
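The caching behaviour the React Query layer provides can be illustrated with a minimal stale-time cache — a sketch of the idea only, not the PR's actual hook code, and all names here are illustrative:

```typescript
type Entry<T> = { value: T; fetchedAt: number };

// Returns a getter that reuses a cached value for `staleTimeMs`
// before calling the fetcher again — the core of query caching.
function makeCache<T>(staleTimeMs: number) {
  const entries = new Map<string, Entry<T>>();
  return async function get(key: string, fetcher: () => Promise<T>): Promise<T> {
    const hit = entries.get(key);
    if (hit && Date.now() - hit.fetchedAt < staleTimeMs) {
      return hit.value; // still fresh, skip the network
    }
    const value = await fetcher();
    entries.set(key, { value, fetchedAt: Date.now() });
    return value;
  };
}
```

React Query adds refetch-on-focus, retries, and invalidation on top, but the stale-time contract that keeps repeated page renders from refetching is essentially this.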