AI-powered text completion platform with CLI tool, web editor, and Node.js server for seamless writing assistance.
The full API of this library can be found in api.md.
```bash
npm install -g @harpertoken/autofix-cli
```

Or install from source:

```bash
git clone https://github.com/harpertoken/autofix.git
cd autofix
npm install
npm run prepare
```

For containerized deployment:
```bash
# Pull from Docker Hub
docker pull harpertoken/autofix-server

# Or from GHCR
docker pull ghcr.io/harpertoken/autofix-server

# Run with environment variables
docker run -p 3001:3001 \
  -e GEMINI_API_KEY=your_api_key \
  -e SAMBANOVA_API_KEY=your_samba_key \
  harpertoken/autofix-server
```

See the server README for detailed Docker usage.
```bash
# Basic completion
autofix "hello this is a"

# With custom style and mode
autofix "write a function to" --style technical --mode sentence

# Create a new document with an output file
echo "Once upon a time" | autofix new --output story.txt

# Edit an existing file
autofix edit myfile.txt --style formal --mode paragraph
```

To run the web editor locally:

```bash
npm run dev
# Open http://localhost:3000
```

This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
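For illustration, the request and response shapes for `/api/complete` could be modeled like this; the type and field names below are inferred from the JSON examples in this README, and the names actually exported by the package may differ:

```typescript
// Hypothetical type shapes matching the /api/complete request and response;
// the actual exported names in the package may differ.
type CompletionMode = 'word' | 'sentence' | 'paragraph';
type CompletionStyle = 'casual' | 'formal' | 'creative' | 'technical';

interface CompleteRequest {
  text: string;
  mode: CompletionMode;
  style: CompletionStyle;
  provider?: 'auto' | 'gemini' | 'sambanova';
  geminiModel?: string;
}

interface CompleteResponse {
  suggestion: string;
}

// A typed request body, matching the example below
const req: CompleteRequest = {
  text: 'hello this is a',
  mode: 'sentence',
  style: 'casual',
  provider: 'auto',
};
```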
```jsonc
// POST /api/complete
{
  "text": "hello this is a",
  "mode": "sentence",
  "style": "casual",
  "provider": "auto",
  "geminiModel": "gemini-3-pro-preview"
}

// Response
{
  "suggestion": " test of the AI completion system."
}
```

```bash
# Using curl to test the API
curl -X POST http://localhost:3000/api/complete \
  -H "Content-Type: application/json" \
  -d '{
    "text": "The future of AI is",
    "mode": "sentence",
    "style": "technical",
    "provider": "gemini",
    "geminiModel": "gemini-2.5-flash"
  }'
```

```jsonc
// GET /api/status
{
  "status": "ok",
  "providers": {
    "gemini": true,
    "sambanova": true
  }
}
```

This tool requires AI provider API keys for text completion.
- Get a Google Gemini API key
- Set the environment variable:

```bash
export GEMINI_API_KEY=your_api_key_here
```

- Get a SambaNova API key
- Set the environment variable:

```bash
export SAMBANOVA_API_KEY=your_api_key_here
```
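Once keys are set, the server's `GET /api/status` endpoint reports which providers are active. As a sketch (this helper is illustrative, not part of the library), the response can be reduced to a list of usable providers:

```typescript
// Shape of the GET /api/status response shown in this README
interface StatusResponse {
  status: string;
  providers: Record<string, boolean>;
}

// Returns the names of providers that are configured and ready.
// Illustrative helper only; not a library export.
function availableProviders(s: StatusResponse): string[] {
  return Object.entries(s.providers)
    .filter(([, ok]) => ok)
    .map(([name]) => name);
}

const status: StatusResponse = {
  status: 'ok',
  providers: { gemini: true, sambanova: false },
};
console.log(availableProviders(status)); // → [ 'gemini' ]
```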
> **Note**
> SambaNova provides automatic fallback when Gemini hits rate limits (200 requests/day on the free tier).
- Real-time AI text completion
- Multiple completion modes: word, sentence, paragraph
- Writing styles: casual, formal, creative, technical
- Auto-save functionality
- Keyboard shortcuts support
- Modern React-based interface
- Live text completion with AI model switching
- Settings panel with craft options and model selection
- Keyboard shortcuts for model switching:
  - Press `1` for gemini-2.5-pro
  - Press `2` for gemini-2.5-flash
  - Press `3` for gemini-2.5-flash-lite
  - Press `Escape` to dismiss the model switch prompt
- Responsive design
- Welcome modal for new users
- RESTful API endpoints
- Dual AI provider support
- Automatic fallback system
- Request/response validation
- Error handling and logging
- TypeScript throughout
- Pre-commit hooks (Husky)
- Automated testing (Vitest + Playwright)
- Code formatting (Prettier)
- Semantic release automation
Autofix uses a smart fallback system for maximum reliability:
- Primary: Google Gemini (configurable model, defaults to gemini-3-pro-preview)
- Fallback: SambaNova GPT-OSS-120B (when Gemini is rate-limited)
The system automatically detects rate limits and switches providers seamlessly. Users can also manually switch Gemini models via keyboard shortcuts in the web editor when suggestions fail.
```typescript
// Automatic fallback on 429 errors
if (geminiError.status === 429) {
  return await sambaNovaFallback(text, mode, style);
}
```

When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., a 4xx or 5xx response), a subclass of APIError will be thrown:
```typescript
const response = await client.search.recommend().catch(async (err) => {
  if (err instanceof Autofix.APIError) {
    console.log(err.status); // 400
    console.log(err.name); // BadRequestError
    console.log(err.headers); // {server: 'nginx', ...}
  } else {
    throw err;
  }
});
```

Error codes are as follows:
| Status Code | Error Type |
|---|---|
| 400 | BadRequestError |
| 401 | AuthenticationError |
| 403 | PermissionDeniedError |
| 404 | NotFoundError |
| 422 | UnprocessableEntityError |
| 429 | RateLimitError |
| >=500 | InternalServerError |
| N/A | APIConnectionError |
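As an illustrative sketch (not a library export), the table above can be expressed as a lookup from status code to error name:

```typescript
// Mirrors the status-code table above; the class names come from the table,
// but this standalone helper is illustrative, not part of the library.
function errorNameForStatus(status: number): string {
  const named: Record<number, string> = {
    400: 'BadRequestError',
    401: 'AuthenticationError',
    403: 'PermissionDeniedError',
    404: 'NotFoundError',
    422: 'UnprocessableEntityError',
    429: 'RateLimitError',
  };
  if (named[status]) return named[status];
  if (status >= 500) return 'InternalServerError';
  return 'APIError';
}

console.log(errorNameForStatus(429)); // → "RateLimitError"
console.log(errorNameForStatus(503)); // → "InternalServerError"
```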
Certain errors will be automatically retried 2 times by default, with a short exponential backoff. Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, 429 Rate Limit, and >=500 Internal errors will all be retried by default.
You can use the maxRetries option to configure or disable this:
```typescript
// Configure the default for all requests:
const client = new Autofix({
  maxRetries: 0, // default is 2
});

// Or, configure per-request:
await client.complete({
  maxRetries: 5,
});
```

Requests time out after 1 minute by default. You can configure this with a timeout option:
```typescript
// Configure the default for all requests:
const client = new Autofix({
  timeout: 20 * 1000, // 20 seconds (default is 1 minute)
});

// Override per-request:
await client.complete({
  timeout: 5 * 1000,
});
```

On timeout, an APIConnectionTimeoutError is thrown.
Note that requests which time out will be retried twice by default.
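The retry-on-timeout behavior described above can be sketched as a small wrapper; the backoff delays and the blanket retry predicate here are assumptions for illustration, not the library's actual internals:

```typescript
// Minimal sketch of retry-with-exponential-backoff: retry a failed async
// operation up to maxRetries times, doubling the delay after each attempt.
// The delay values and the retry-everything predicate are assumptions.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 2,
  baseDelayMs = 250,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // retries exhausted
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```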
The "raw" Response returned by fetch() can be accessed through the .asResponse() method on the APIPromise type that all methods return.
This method returns as soon as the headers for a successful response are received and does not consume the response body, so you are free to write custom parsing or streaming logic.
You can also use the .withResponse() method to get the raw Response along with the parsed data.
Unlike .asResponse() this method consumes the body, returning once it is parsed.
```typescript
const client = new Autofix();

const response = await client.complete().asResponse();
console.log(response.headers.get('X-My-Header'));
console.log(response.statusText); // access the underlying Response object
```

```typescript
const { data: response, response: raw } = await client
  .complete()
  .withResponse();
console.log(raw.headers.get('X-My-Header'));
console.log(response.suggestion);
```

> **Important**
> All log messages are intended for debugging only. The format and content of log messages may change between releases.
The log level can be configured in two ways:

- Via the `AUTOFIX_LOG` environment variable
- Using the `logLevel` client option (overrides the environment variable if set)

```typescript
import Autofix from 'autofix';

const client = new Autofix({
  logLevel: 'debug', // Show all log messages
});
```

Available log levels, from most to least verbose:

- `'debug'` - Show debug messages, info, warnings, and errors
- `'info'` - Show info messages, warnings, and errors
- `'warn'` - Show warnings and errors (default)
- `'error'` - Show only errors
- `'off'` - Disable all logging
At the 'debug' level, all HTTP requests and responses are logged, including headers and bodies.
Some authentication-related headers are redacted, but sensitive data in request and response bodies
may still be visible.
By default, this library logs to globalThis.console. You can also provide a custom logger.
Most logging libraries are supported, including pino, winston, bunyan, consola, signale, and @std/log. If your logger doesn't work, please open an issue.
When providing a custom logger, the logLevel option still controls which messages are emitted; messages below the configured level will not be sent to your logger.
```typescript
import Autofix from 'autofix';
import pino from 'pino';

const logger = pino();

const client = new Autofix({
  logger: logger.child({ name: 'Autofix' }),
  logLevel: 'debug', // Send all messages to pino, allowing it to filter
});
```

This library is typed for convenient access to the documented API. If you need to access undocumented endpoints, params, or response properties, the library can still be used.
To make requests to undocumented endpoints, you can use client.get, client.post, and other HTTP verbs.
Options on the client, such as retries, will be respected when making these requests.
```typescript
await client.post('/some/path', {
  body: { some_prop: 'foo' },
  query: { some_query_arg: 'bar' },
});
```

To make requests using undocumented parameters, you may use `// @ts-expect-error` on the undocumented parameter. This library doesn't validate at runtime that the request matches the type, so any extra values you send will be sent as-is.
```typescript
client.complete({
  // ...
  // @ts-expect-error baz is not yet public
  baz: 'undocumented option',
});
```

For requests with the GET verb, any extra params will be sent in the query; all other requests will send the extra param in the body.
If you want to explicitly send an extra argument, you can do so with the query, body, and headers request
options.
To access undocumented response properties, you may use `// @ts-expect-error` on the response object, or cast the response object to the requisite type. Like the request params, we do not validate or strip extra properties from the response from the API.
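For example, casting can surface an extra field that the API returned but the types do not declare; the `confidence` field below is hypothetical, purely for illustration:

```typescript
interface CompleteResponse {
  suggestion: string;
}

// Pretend the server returned an extra, undocumented field alongside the
// documented one; the library does not strip it from the parsed object.
const response = {
  suggestion: ' test of the AI completion system.',
  confidence: 0.93, // hypothetical undocumented field
} as CompleteResponse;

// Cast to a wider type to read the undocumented property.
const { confidence } = response as CompleteResponse & { confidence?: number };
console.log(confidence); // → 0.93
```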
By default, this library expects that a global fetch function is defined.

If you want to use a different fetch function, you can either polyfill the global:

```typescript
import fetch from 'my-fetch';

globalThis.fetch = fetch;
```

Or pass it to the client:

```typescript
import Autofix from 'autofix';
import fetch from 'my-fetch';

const client = new Autofix({ fetch });
```

If you want to set custom fetch options without overriding the fetch function, you can provide a fetchOptions object when instantiating the client or making a request. (Request-specific options override client options.)
```typescript
import Autofix from 'autofix';

const client = new Autofix({
  fetchOptions: {
    // `RequestInit` options
  },
});
```

To modify proxy behavior, you can provide custom fetchOptions that add runtime-specific proxy options to requests:

Node [docs]

```typescript
import Autofix from 'autofix';
import * as undici from 'undici';

const proxyAgent = new undici.ProxyAgent('http://localhost:8888');

const client = new Autofix({
  fetchOptions: {
    dispatcher: proxyAgent,
  },
});
```

Bun [docs]

```typescript
import Autofix from 'autofix';

const client = new Autofix({
  fetchOptions: {
    proxy: 'http://localhost:8888',
  },
});
```

Deno [docs]

```typescript
import Autofix from 'npm:autofix';

const httpClient = Deno.createHttpClient({
  proxy: { url: 'http://localhost:8888' },
});

const client = new Autofix({
  fetchOptions: {
    client: httpClient,
  },
});
```

- Node.js: 20 LTS or later
- npm: 9+
- AI API Keys: At least one provider key required
- Node.js 20+
- Modern web browsers
- Vercel (deployment)
- Local development
- Playwright: E2E testing
- Vitest: Unit testing
- TypeScript: Type checking
- Prettier: Code formatting
- Husky: Git hooks
```bash
git clone https://github.com/harpertoken/autofix.git
cd autofix
npm install
npm run prepare
```

```bash
npm run dev        # Start development server
npm run build      # Build for production
npm run check      # TypeScript type checking
npm run test       # Run unit tests
npm run test:e2e   # Run E2E tests
npm run format     # Format code with Prettier
npm run preflight  # Run all checks (format, check, test, build)
```

```
autofix/
├── apps/
│   ├── cli/          # Command-line interface
│   ├── client/       # React web application
│   └── server/       # Node.js API server
├── packages/
│   └── shared/       # Shared utilities and types
├── tests/            # Test files
└── scripts/          # Build and utility scripts
```
- Fork the repository
- Create a feature branch: `git checkout -b feature/your-feature`
- Make your changes with tests
- Run preflight: `npm run preflight`
- Commit with the conventional format: `git commit -m "feat: add new feature"`
- Push and open a PR
This project uses Conventional Commits:

- `feat:` - New features
- `fix:` - Bug fixes
- `docs:` - Documentation
- `style:` - Code style changes
- `refactor:` - Code refactoring
- `test:` - Testing
- `chore:` - Maintenance
Releases are automated using semantic-release:

- Push to `main` triggers a release
- Version bumps are based on commit messages
- NPM publishing for the CLI package
- GitHub releases with changelogs
| Problem | Solution |
|---|---|
| `GEMINI_API_KEY` required | Get a key from Google AI Studio |
| Rate limit errors | Add `SAMBANOVA_API_KEY` for fallback |
| No suggestions in web app | Press 1, 2, 3 keys to switch Gemini models |
| CLI not found | Run `npm install -g @harpertoken/autofix-cli` |
| Web app not loading | Check `npm run dev` output |
| Build failures | Run `npm run preflight` to check all issues |
Enable verbose logging:

```bash
DEBUG=autofix:* npm run dev
```

This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:
- Changes that only affect static types, without breaking runtime behavior.
- Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals.)
- Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an issue with questions, bugs, or suggestions.
MIT - See LICENSE file for details.
- Google Gemini - Primary AI provider
- SambaNova - Fallback AI provider