A fully featured autonomous software engineer, written in TypeScript.
- Autonomous problem solving - Give it a GitHub issue, bug report, or feature request
- Automatic code generation - Writes, tests, and submits complete solutions
- Full SWE-bench support - Benchmark on thousands of real GitHub issues
- Multi-model support - Works with OpenAI, Anthropic, and open-source models
- Parallel execution - Run multiple instances simultaneously
- Interactive shell mode - Step through solutions interactively
- Complete parity with Python SWE-agent functionality
- Node.js 18+ and npm
- Docker (for sandboxed execution)
- Git
# Clone the repository
git clone https://github.com/elizaos/swe-agent-ts.git
cd swe-agent-ts
# Install dependencies
npm install
# Build the project
npm run build
# Set up API keys (choose one)
export OPENAI_API_KEY=your_key
# OR
export ANTHROPIC_API_KEY=your_key
# Install globally for system-wide access
npm install -g .
# Now you can use 'sweagent' command anywhere
sweagent --help
# Have SWE-agent automatically fix a GitHub issue
npx sweagent run \
--agent.model.name gpt-4o \
--env.repo.github_url https://github.com/user/repo \
--problem_statement.github_url https://github.com/user/repo/issues/123
The agent will:
- Clone the repository
- Understand the issue
- Write and test a solution
- Create a patch file with the fix
# Create a new feature from a text description
echo "Create a REST API with CRUD operations for a todo list app" > task.md
npx sweagent run \
--agent.model.name gpt-4o \
--env.repo.path ./my-project \
--problem_statement.path task.md
# Work interactively with the agent
npx sweagent shell \
--repo ./my-project \
--config config/default.yaml
# In the shell, you can:
# - Ask it to implement features
# - Debug issues together
# - Review its proposed changes
Test the agent on real-world GitHub issues:
# Quick test on 3 instances
npx sweagent run-batch \
--instances.type swe_bench \
--instances.subset lite \
--instances.split dev \
--instances.slice :3 \
--agent.model.name gpt-4o
# Full benchmark with parallel execution
npx sweagent run-batch \
--instances.type swe_bench \
--instances.subset lite \
--instances.slice :50 \
--num_workers 5 \
--agent.model.name gpt-4o \
--instances.evaluate
# Run on your own test cases
cat > my_tests.json << EOF
[
  {
    "imageName": "python:3.11",
    "problemStatement": "Fix the authentication bug in login.py",
    "instanceId": "auth-001",
    "repoName": "my-app",
    "baseCommit": "main"
  }
]
EOF
npx sweagent run-batch \
--instances.type file \
--instances.path my_tests.json \
--agent.model.name gpt-4o
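The instance file follows a simple shape. As a sketch, its fields can be modeled with a TypeScript interface (the interface name is invented here, and the field names are taken from the JSON example above; the full schema may accept more options):

```typescript
// Sketch: field names mirror the my_tests.json example; "FileInstance" is an
// illustrative name, not a type exported by swe-agent-ts.
interface FileInstance {
  imageName: string;        // Docker image used for the sandbox
  problemStatement: string; // task description given to the agent
  instanceId: string;       // unique ID for tracking results
  repoName: string;         // repository the agent works on
  baseCommit: string;       // commit or branch to start from
}

const instances: FileInstance[] = [
  {
    imageName: 'python:3.11',
    problemStatement: 'Fix the authentication bug in login.py',
    instanceId: 'auth-001',
    repoName: 'my-app',
    baseCommit: 'main',
  },
];
```

Keeping the file strongly typed this way makes it easy to generate large batches of test cases programmatically before serializing them with `JSON.stringify`.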
# Run all tests
npm test
# Run specific test suites
npm test -- test-agent
npm test -- test-swe-bench
npm test -- test-environment
# Run with coverage
npm test -- --coverage
# Run linting
npm run lint
# Format code
npm run format
# Demo SWE-bench capabilities
node examples/demo_swe_bench.js
# Run comprehensive benchmark examples
./examples/run_swe_bench.sh
# Test basic functionality
node examples/test_swe_bench_simple.js
npx sweagent run \
--agent.model.name gpt-4o \
--env.repo.path ./my-app \
--problem_statement.text "The login form throws an error when email contains special characters"
npx sweagent run \
--agent.model.name claude-3-5-sonnet-20241022 \
--env.repo.github_url https://github.com/user/repo \
--problem_statement.text "Add dark mode support to the settings page"
npx sweagent run \
--agent.model.name gpt-4o \
--env.repo.path ./legacy-app \
--problem_statement.text "Refactor the user service to use async/await instead of callbacks"
swe-agent-ts/
├── src/
│   ├── agent/         # Core agent logic
│   ├── environment/   # Execution environment
│   ├── run/           # CLI and batch execution
│   ├── tools/         # Agent tools and commands
│   └── utils/         # Utilities
├── config/            # Configuration files
├── examples/          # Example scripts and demos
├── tests/             # Test suite
└── docs/              # Documentation
# config/my_config.yaml
agent:
  model:
    name: gpt-4o
    per_instance_cost_limit: 2.00
    temperature: 0.7
  tools:
    execution_timeout: 30
    max_consecutive_execution_timeouts: 3
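For reference, the same options can be written as a typed TypeScript object (a sketch: the interface names are invented here, and the nesting mirrors the YAML keys above rather than a documented schema):

```typescript
// Sketch only: interface names are illustrative, not exported by swe-agent-ts.
interface ModelConfig {
  name: string;
  per_instance_cost_limit: number; // max spend (USD) per instance
  temperature: number;             // sampling temperature
}

interface ToolsConfig {
  execution_timeout: number;                  // seconds allowed per command
  max_consecutive_execution_timeouts: number; // abort after this many in a row
}

interface SweAgentConfig {
  agent: {
    model: ModelConfig;
    tools: ToolsConfig;
  };
}

const config: SweAgentConfig = {
  agent: {
    model: { name: 'gpt-4o', per_instance_cost_limit: 2.0, temperature: 0.7 },
    tools: { execution_timeout: 30, max_consecutive_execution_timeouts: 3 },
  },
};
```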
# OpenAI GPT-4
export OPENAI_API_KEY=your_key
npx sweagent run --agent.model.name gpt-5 ...
# Anthropic Claude
export ANTHROPIC_API_KEY=your_key
npx sweagent run --agent.model.name claude-4-sonnet ...
# Local/Open-source models via LiteLLM
npx sweagent run --agent.model.name ollama/codellama ...
// Create custom tools for the agent
import { Bundle } from 'swe-agent-ts';

const customBundle = new Bundle({
  name: 'my-tools',
  commands: [
    {
      name: 'analyze',
      description: 'Analyze code quality',
      script: 'npm run analyze'
    }
  ]
});
// Add custom hooks to monitor agent behavior
import { AbstractAgentHook } from 'swe-agent-ts';

class MyHook extends AbstractAgentHook {
  onStepStart(): void {
    console.log('Agent is thinking...');
  }

  onActionExecuted(step: { action: string }): void {
    console.log(`Executed: ${step.action}`);
  }
}
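The hook methods above are invoked by the agent as it works. As a self-contained illustration of the callback pattern, the sketch below uses a local stand-in for `AbstractAgentHook` and a hypothetical `MiniAgent` with an `addHook` method; this is not the actual swe-agent-ts API.

```typescript
// Stand-ins for illustration only; NOT the real swe-agent-ts API.
interface Step {
  action: string;
}

class AbstractAgentHook {
  onStepStart(): void {}
  onActionExecuted(_step: Step): void {}
}

class RecordingHook extends AbstractAgentHook {
  events: string[] = [];
  onStepStart(): void {
    this.events.push('step:start');
  }
  onActionExecuted(step: Step): void {
    this.events.push(`executed:${step.action}`);
  }
}

// Hypothetical driver showing when an agent would fire each hook.
class MiniAgent {
  private hooks: AbstractAgentHook[] = [];
  addHook(hook: AbstractAgentHook): void {
    this.hooks.push(hook);
  }
  runStep(action: string): void {
    this.hooks.forEach((h) => h.onStepStart());
    // ...the real agent would plan and execute the action here...
    this.hooks.forEach((h) => h.onActionExecuted({ action }));
  }
}

const agent = new MiniAgent();
const hook = new RecordingHook();
agent.addHook(hook);
agent.runStep('ls src/');
// hook.events is now ['step:start', 'executed:ls src/']
```

Because hooks only observe the step stream, they are a safe place for logging, metrics, or cost tracking without changing agent behavior.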
- Node Version: Ensure Node.js 18+
  node --version  # Should be v18.0.0 or higher
- Build Errors: Clean and rebuild
  npm run clean && npm install && npm run build
- Docker Issues: Ensure Docker is running
  docker ps  # Should show running containers
- API Keys: Verify environment variables
  echo $OPENAI_API_KEY
  echo $ANTHROPIC_API_KEY
# Fork and clone the repository
git clone https://github.com/elizaos/swe-agent-ts.git
# Create a feature branch
git checkout -b feature/amazing-feature
# Make changes and test
npm test
# Submit a pull request
MIT License - see LICENSE file for details.
This TypeScript port is based on the original SWE-agent by Princeton University and Stanford University researchers.