Commit 9a5b23a

readme: add install step and clean up (#16)
1 parent d2d1eaa commit 9a5b23a

1 file changed

README.md

Lines changed: 23 additions & 29 deletions
@@ -1,6 +1,26 @@
 # ResilientLLM
 
-A simple but robust LLM integration layer designed to ensure reliable, seamless interactions across multiple APIs by intelligently handling failures and rate limits.
+A minimalist but robust LLM integration layer designed to ensure reliable, seamless interactions across multiple LLM providers by intelligently handling failures and rate limits.
+
+---
+
+This library solves challenges in building production-ready AI Agents caused by:
+
+- ❌ Unstable network conditions
+- ⚠️ Inconsistent error handling
+- ⏳ Unpredictable LLM API rate limit errors
+
+### Key Features
+
+- **Token estimation**: You don't need to calculate LLM tokens; they are estimated for each request
+- **Rate limiting**: You don't need to manage the token bucket rate-limiting algorithm yourself to stay within LLM service providers' rate limits; it is handled for you automatically
+- **Retries, backoff, and circuit breaker**: All are handled internally by the `ResilientOperation`.
+
+## Installation
+
+```bash
+npm i resilient-llm
+```
 
 ## Quickstart
 
@@ -16,7 +36,7 @@ const llm = new ResilientLLM({
     requestsPerMinute: 60, // Limit to 60 requests per minute
     llmTokensPerMinute: 90000 // Limit to 90,000 LLM tokens per minute
   },
-  retries: 3, // Number of times to retry if req. fails for reasons possible to fix by retry
+  retries: 3, // Number of times to retry when a request fails, and only if a retry can fix it
   backoffFactor: 2 // Increase delay between retries by this factor
 });
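For orientation, the options touched in this hunk sit inside the `ResilientLLM` constructor named in the hunk header. Below is a minimal sketch of the surrounding call, assuming a CommonJS entry point and a `rateLimits` wrapper key; neither appears in the diff itself.

```js
// Sketch of the constructor call this hunk modifies, reassembled from the
// hunk header and context lines. The require() form and the `rateLimits`
// wrapper key are assumptions; the diff shows only the options inside it.
const ResilientLLM = require('resilient-llm');

const llm = new ResilientLLM({
  rateLimits: {                 // assumed wrapper object; the diff shows only its contents
    requestsPerMinute: 60,      // limit to 60 requests per minute
    llmTokensPerMinute: 90000   // limit to 90,000 LLM tokens per minute
  },
  retries: 3,                   // retry only failures that a retry can actually fix
  backoffFactor: 2              // increase delay between retries by this factor
});
```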

@@ -35,32 +55,6 @@ const conversationHistory = [
 })();
 ```
 
----
-
-### Key Points
-
-- **Rate limiting is automatic**: You don't need to pass token counts or manage rate limits yourself.
-- **Token estimation**: The number of LLM tokens is estimated for each request and enforced.
-- **Retries, backoff, and circuit breaker**: All are handled internally by the `ResilientOperation`.
-
----
-
-### Advanced: With Custom Options
-
-```js
-const response = await llm.chat(
-  [
-    { role: 'user', content: 'Summarize the plot of Inception.' }
-  ],
-  {
-    maxTokens: 512,
-    temperature: 0.5,
-    aiService: 'anthropic', // override default
-    model: 'claude-3-5-sonnet-20240620'
-  }
-);
-```
-
 ## Motivation
 
 ResilientLLM is a resilient, unified LLM interface featuring circuit breaker, token bucket rate limiting, caching, and adaptive retry with dynamic backoff support.
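As a general illustration of two of the techniques named in that context line, a token bucket limiter combined with exponential-backoff retry can be sketched as follows. This is not ResilientLLM's own implementation, only the underlying idea.

```js
// Generic sketch of token bucket rate limiting plus exponential-backoff retry.
// Illustrative only; not taken from the resilient-llm source.
class TokenBucket {
  constructor(capacity, refillPerSecond) {
    this.capacity = capacity;            // maximum tokens the bucket can hold
    this.tokens = capacity;              // start full
    this.refillPerSecond = refillPerSecond;
    this.lastRefill = Date.now();
  }

  take(count = 1) {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= count) {
      this.tokens -= count;              // enough budget: consume and proceed
      return true;
    }
    return false;                        // over the limit: caller should wait
  }
}

async function withRetry(fn, { retries = 3, backoffFactor = 2, baseDelayMs = 500 } = {}) {
  let delay = baseDelayMs;
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();                 // success: return the result immediately
    } catch (err) {
      if (attempt >= retries) throw err; // retries exhausted: surface the error
      await new Promise(resolve => setTimeout(resolve, delay));
      delay *= backoffFactor;            // back off exponentially before the next attempt
    }
  }
}
```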
@@ -76,7 +70,7 @@ The final solution was to extract tiny LLM orchestration class out of all my AI
 This library solves my challenges in building production-ready AI Agents such as:
 - unstable network conditions
 - inconsistent error handling
-- unpredictable LLM API rate limit errrors
+- unpredictable LLM API rate limit errors
 
 This library aims to solve the same challenges for you by providing a resilient layer that intelligently manages failures and rate limits, enabling you (developers) to integrate LLMs confidently and effortlessly at scale.

0 commit comments
