Built on NEAR AI, this web application supports analysis of governance proposals and related forum discussions.
Screening results are based on 6 Quality Criteria and 2 Attention Scores:
Quality Criteria (must pass all to succeed):
- Complete - Proposal includes all the required template elements for a proposal of its type. For example, a funding proposal includes a budget and milestones.
- Legible - Proposal content is clear enough that the decision being made can be unambiguously understood.
- Consistent - Proposal does not contradict itself. Details such as budget, dates, and scope are consistent everywhere they are referenced in the proposal.
- Compliant - Proposal is compliant with all relevant rules/guidelines, such as the Constitution, HSP-001, and the Code of Conduct.
- Justified - Proposal provides rationale that logically supports the stated objectives and actions. For example, the proposed solution reasonably addresses the problem and the proposal explains how.
- Measurable - Proposal includes measurable outcomes and success criteria that can be evaluated.
Attention Scores (informational):
- Relevant - Proposal directly relates to the NEAR ecosystem. (high/medium/low)
- Material - Proposal has high potential impact and/or risks. (high/medium/low)
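These criteria and scores roll up into the computed values stored with each screening. A minimal TypeScript sketch of that rollup (the numeric weighting of high/medium/low is an assumption; only the "all quality criteria must pass" rule is documented):

```typescript
// Sketch: roll the six pass/fail criteria and two attention levels up into scores.
type Level = "high" | "medium" | "low";

// Assumed mapping of attention levels onto the 0.0-1.0 scale.
const levelValue: Record<Level, number> = { low: 0, medium: 0.5, high: 1 };

function computeScores(
  criteria: { pass: boolean }[], // complete, legible, consistent, compliant, justified, measurable
  attention: { score: Level }[]  // relevant, material
) {
  const passed = criteria.filter((c) => c.pass).length;
  return {
    qualityScore: passed / criteria.length, // fraction of criteria passed (0.0-1.0)
    attentionScore:
      attention.reduce((sum, a) => sum + levelValue[a.score], 0) / attention.length,
    overallPass: passed === criteria.length, // all quality criteria must pass
  };
}
```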
| Category | Technology |
|---|---|
| Framework | Next.js |
| Language | TypeScript |
| Database | PostgreSQL with Drizzle ORM |
| AI Provider | NEAR AI Cloud |
| NEAR Wallet | @hot-labs/near-connect |
| Authentication | near-sign-verify (NEP-413) |
- Node.js 18+ or Bun
- PostgreSQL database
- NEAR Wallet (testnet or mainnet)
- NEAR AI Cloud API Key
```bash
# Download code from repository
git clone https://github.com/near-research/gov.git

# Change directory
cd gov

# Dependencies
bun install

# Copy .env template
cp .env.example .env
```
Create `.env` file with:
```
DATABASE_URL=postgresql://user:password@localhost:5432/neargov
NEAR_AI_CLOUD_API_KEY=your_api_key_here
```
- Create PostgreSQL database:
```bash
createdb neargov
```
- Run migrations:
```bash
# Using provided migration file
psql neargov < migration.sql

# Or generate from schema
bun run db:push
```
- Test connection:
```bash
bun run scripts/test-db.ts
```
```bash
# Start development server
bun run dev

# Build for production
bun run build

# Start production server
bun run start
```
Visit http://localhost:3000
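For reference, a connection check like `scripts/test-db.ts` can be as small as the sketch below. It assumes the postgres-js driver (`postgres` package) commonly paired with Drizzle; the actual script in the repository may differ.

```typescript
// Sketch of a DATABASE_URL connection check (run with `bun run scripts/test-db.ts`).
import postgres from "postgres";

const url = process.env.DATABASE_URL;
if (!url) {
  console.error("DATABASE_URL is not set");
  process.exit(1);
}

const sql = postgres(url);

try {
  // Simple round trip: if this returns, the connection string and credentials work.
  const [row] = await sql`select current_database() as db, version() as version`;
  console.log(`Connected to ${row.db}`, row.version);
} catch (err) {
  console.error("Database connection failed:", err);
  process.exitCode = 1;
} finally {
  await sql.end();
}
```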
Primary table for storing proposal evaluations.
| Column | Type | Description |
|---|---|---|
| `topic_id` | VARCHAR(255) | Discourse topic ID |
| `revision_number` | INTEGER | Version number of proposal |
| `evaluation` | JSONB | Full AI evaluation results |
| `title` | TEXT | Proposal title |
| `near_account` | VARCHAR(255) | Evaluator's NEAR account |
| `timestamp` | TIMESTAMP | When screening was performed |
| `revision_timestamp` | TIMESTAMP | When revision was created |
| `quality_score` | REAL | Computed quality score (0.0-1.0) |
| `attention_score` | REAL | Computed attention score (0.0-1.0) |
Primary Key: `(topic_id, revision_number)` - Prevents duplicate screenings
Indexes:
- `topic_id` - Query all revisions of a proposal
- `near_account` - Filter by evaluator
- `timestamp DESC` - Sort by newest first
- `quality_score` - Filter/sort by quality
- `attention_score` - Filter/sort by attention
- JSON indexes on `overallPass`, `relevant`, `material`
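A Drizzle ORM definition of this table could look roughly like the following. The table, export, and index names are assumptions; the columns mirror the schema above.

```typescript
// Sketch of the screenings table in Drizzle ORM; names are illustrative.
import {
  pgTable,
  varchar,
  integer,
  jsonb,
  text,
  timestamp,
  real,
  primaryKey,
  index,
} from "drizzle-orm/pg-core";

export const screenings = pgTable(
  "screenings",
  {
    topicId: varchar("topic_id", { length: 255 }).notNull(),
    revisionNumber: integer("revision_number").notNull(),
    evaluation: jsonb("evaluation").notNull(),   // full AI evaluation results
    title: text("title"),
    nearAccount: varchar("near_account", { length: 255 }),
    timestamp: timestamp("timestamp").defaultNow(),
    revisionTimestamp: timestamp("revision_timestamp"),
    qualityScore: real("quality_score"),         // 0.0-1.0
    attentionScore: real("attention_score"),     // 0.0-1.0
  },
  (t) => ({
    // Composite primary key prevents duplicate screenings of the same revision.
    pk: primaryKey({ columns: [t.topicId, t.revisionNumber] }),
    byAccount: index("screenings_near_account_idx").on(t.nearAccount),
    byQuality: index("screenings_quality_score_idx").on(t.qualityScore),
    byAttention: index("screenings_attention_score_idx").on(t.attentionScore),
  })
);
```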
The `evaluation` JSONB column has the following shape:
```typescript
{
  // Quality criteria (6 total)
  complete: { pass: boolean, reason: string },
  legible: { pass: boolean, reason: string },
  consistent: { pass: boolean, reason: string },
  compliant: { pass: boolean, reason: string },
  justified: { pass: boolean, reason: string },
  measurable: { pass: boolean, reason: string },
  // Attention scores (2 total)
  relevant: { score: "high" | "medium" | "low", reason: string },
  material: { score: "high" | "medium" | "low", reason: string },
  // Computed values
  qualityScore: number, // 0.0-1.0
  attentionScore: number, // 0.0-1.0
  overallPass: boolean, // true if ALL quality criteria pass
  summary: string // 3-sentence summary
}
```

Proposal endpoints:

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| `/api/proposals/[id]` | GET | No | Get proposal details |
| `/api/proposals/[id]/summarize` | POST | No | AI summary of proposal |
| `/api/proposals/[id]/revisions` | GET | No | Get all revisions |
| `/api/proposals/[id]/revisions/summarize` | POST | No | AI summary of changes |
Discourse endpoints:

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| `/api/discourse/latest` | GET | No | Get latest proposals from category |
| `/api/discourse/posts` | GET | No | Get all posts from Discourse |
| `/api/discourse/posts/[id]/revisions` | GET | No | Get post revisions |
| `/api/discourse/posts/[id]/revisions/summarize` | POST | No | AI summary of post changes |
| `/api/discourse/topics/[id]/summarize` | POST | No | AI discussion summary |
| `/api/discourse/replies/[id]/summarize` | POST | No | AI reply summary |
Screening endpoints:

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| `/api/screen` | POST | Yes | Screen proposal (no save) |
| `/api/saveAnalysis/[topicId]` | POST | Yes | Screen & save to DB |
| `/api/getAnalysis/[topicId]` | GET | No | Get screening results |
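For example, reading back a saved screening might look like the sketch below. The response envelope and field names are assumptions based on the table schema and evaluation structure above.

```typescript
// Hypothetical read of a saved screening; the GET endpoint needs no authentication.
const topicId = "12345"; // example Discourse topic ID

const res = await fetch(`/api/getAnalysis/${topicId}`);
if (res.ok) {
  const analysis = await res.json();
  // Field names here are assumptions mirroring the schema documented above.
  console.log("Overall pass:", analysis.evaluation?.overallPass);
  console.log("Quality score:", analysis.quality_score);
} else if (res.status === 404) {
  console.log("No screening saved for this topic yet.");
}
```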
AI endpoints:

| Endpoint | Method | Auth | Description |
|---|---|---|---|
| `/api/chat/completions` | POST | No | NEAR AI Cloud proxy |
| `/api/agent` | POST | No | Agent with tools |
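Assuming the proxy forwards OpenAI-compatible chat completion requests to NEAR AI Cloud (the request and response shapes below follow that convention and are not a documented contract), a call might look like:

```typescript
// Hypothetical request through the NEAR AI Cloud proxy.
const res = await fetch("/api/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "Qwen/Qwen3-30B-A3B-Instruct-2507", // one of the models listed below
    messages: [
      { role: "system", content: "You summarize NEAR governance proposals." },
      { role: "user", content: "Summarize this proposal in three sentences: ..." },
    ],
  }),
});

const data = await res.json();
console.log(data.choices?.[0]?.message?.content);
```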
The platform uses NEP-413 wallet signatures for authentication:
- Connect Wallet - User connects NEAR wallet
- Sign Message - User signs message: `"Screen proposal {topicId}"`
- Verify Signature - Server validates using `near-sign-verify`
- Authorized Request - Include `Authorization: Bearer <token>` header
```typescript
// Client-side signing
const signature = await wallet.signMessage({
  message: `Screen proposal ${topicId}`,
  recipient: "social.near",
});

// API request
const response = await fetch(`/api/saveAnalysis/${topicId}`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${signature}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ title, content, revisionNumber }),
});
```

Each API endpoint enforces an in-memory quota:
- Default: 5 requests per endpoint within 15 minutes
- Counted per NEAR account if connected, or per IP if anonymous
- If exceeded, returns `429 Too Many Requests` with `Retry-After`
- Responses include `X-RateLimit-Remaining`, `X-RateLimit-Limit`, and `X-RateLimit-Reset`
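A minimal sketch of such an in-memory limiter is shown below. The window size, limit, and key derivation mirror the description above; the function and variable names are assumptions, not the project's actual implementation.

```typescript
// Sketch of a fixed-window, in-memory rate limiter keyed by NEAR account or IP.
const WINDOW_MS = 15 * 60 * 1000; // 15 minutes
const LIMIT = 5;                  // requests per endpoint per window

type Bucket = { count: number; resetAt: number };
const buckets = new Map<string, Bucket>(); // key: `${endpoint}:${accountOrIp}`

export function checkRateLimit(endpoint: string, accountOrIp: string) {
  const key = `${endpoint}:${accountOrIp}`;
  const now = Date.now();
  let bucket = buckets.get(key);

  // Start a new window if none exists or the previous one has expired.
  if (!bucket || now >= bucket.resetAt) {
    bucket = { count: 0, resetAt: now + WINDOW_MS };
    buckets.set(key, bucket);
  }

  bucket.count += 1;
  return {
    allowed: bucket.count <= LIMIT,                           // false => respond 429
    remaining: Math.max(0, LIMIT - bucket.count),             // X-RateLimit-Remaining
    retryAfterSec: Math.ceil((bucket.resetAt - now) / 1000),  // Retry-After
  };
}
```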
AI-generated summaries are cached to improve performance and reduce costs:
| Content Type | TTL | Cache Key |
|---|---|---|
| Proposal | 60 min | proposal:{topicId} |
| Revisions | 15 min | proposal:revision:{topicId} |
| Discussion | 5 min | topic:discussion:{topicId} |
| Reply | 30 min | reply:{replyId} |
Caches are in-memory and reset on server restart.
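A sketch of an in-memory cache with per-key TTLs along these lines (the API and names are assumptions, not the project's implementation):

```typescript
// Sketch of a simple in-memory TTL cache for AI summaries; illustrative only.
type Entry<T> = { value: T; expiresAt: number };

const store = new Map<string, Entry<unknown>>();

export function cacheGet<T>(key: string): T | undefined {
  const entry = store.get(key);
  if (!entry) return undefined;
  if (Date.now() >= entry.expiresAt) {
    store.delete(key); // expired: evict lazily on read
    return undefined;
  }
  return entry.value as T;
}

export function cacheSet<T>(key: string, value: T, ttlMinutes: number): void {
  store.set(key, { value, expiresAt: Date.now() + ttlMinutes * 60 * 1000 });
}

// Example: cache a proposal summary for 60 minutes, matching the table above.
// cacheSet(`proposal:${topicId}`, summaryText, 60);
```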
NEAR AI Cloud provides three AI models, all hosted in GPU TEEs (Trusted Execution Environments) with end-to-end encryption and verifiable inference.
| Model | Context | Pricing |
|---|---|---|
| DeepSeek V3.1 | 128K | $1/$2.5 per M tokens |
| GPT OSS 120B | 131K | $0.2/$0.6 per M tokens |
| Qwen3 30B | 262K | $0.15/$0.45 per M tokens |
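As a rough cost illustration (assuming the two figures in the Pricing column are input/output prices per million tokens, which is an assumption about the table's notation):

```typescript
// Illustrative cost arithmetic for a single request.
function requestCostUSD(
  inputTokens: number,
  outputTokens: number,
  pricePerMInput: number,
  pricePerMOutput: number
): number {
  return (inputTokens / 1e6) * pricePerMInput + (outputTokens / 1e6) * pricePerMOutput;
}

// Example: an 8,000-token proposal plus a 1,000-token evaluation on Qwen3 30B
// at $0.15 input / $0.45 output per million tokens.
const cost = requestCostUSD(8_000, 1_000, 0.15, 0.45); // ≈ $0.0017
```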
DeepSeek: deepseek-ai/DeepSeek-V3.1
- Hybrid thinking/non-thinking modes via chat templates
- Optimized for tool usage and agent tasks
- Faster thinking compared to previous versions
OpenAI: openai/gpt-oss-120b
- 117B parameters (MoE), 5.1B active per forward pass
- Configurable reasoning depth with chain-of-thought access
- Optimized for single H100 GPU with MXFP4 quantization
- Native tool use: function calling, browsing, structured outputs
Qwen: Qwen/Qwen3-30B-A3B-Instruct-2507
- 30.5B total parameters, 3.3B activated per inference
- Ultra-long 262K context window
- Non-thinking mode only
- Strong multilingual and coding capabilities
MIT - see the LICENSE file for details
Your help would be much appreciated!
- Fork the repository
- Create a new branch (`git checkout -b update`)
- Commit your changes (`git commit -m 'message'`)
- Push to the branch (`git push origin update`)
- Open a Pull Request