Skip to content

Conversation

@chigwell
Copy link

Summary
Adds LLM7.io to the Free Providers section of the generated README, including:

  • TOC entry and anchored section {#llm7io}
  • Rate limits (anonymous, free-token, paid tiers)
  • Python example using OpenAI-compatible API (base_url=https://api.llm7.io/v1)
  • A representative model list

Why
LLM7.io appears to provide a legitimate, OpenAI-compatible API with a clear free tier and paid plans. This makes it relevant for developers seeking free or low-cost LLM API access.

Changes

  • Updated generated README to include a new LLM7.io section under Free Providers
  • No changes to existing providers beyond TOC update for the new entry

Provider vetting against guidelines

  1. Legitimate company?
    Public site with documented API and branding, plus paid tiers (suggests an operating business rather than an anonymous endpoint).

  2. Proper API service?
    Exposes an OpenAI-compatible REST API (example added), supports text and image inputs for certain models, and documents rate limits.

  3. Business model?
    Yes. Tiers include anonymous (30 rpm), a free token tier (120 rpm), and paid plans (scaling to ~1,300 rpm).

  4. Legitimate services (no reverse engineering/resale)?
    The section lists a selection of models as presented by the provider. Some names resemble commercial families (e.g., “gpt-4o-mini”, “gpt-4.1-nano”). If maintainers deem these potentially problematic under guideline Update README with latest models #4, we can trim the list to only clearly open-source or vendor-provided SKUs in this README while keeping the provider entry. Open to guidance here.


Abuse considerations

As noted in the repository guidelines, new free endpoints often attract abusive traffic. The LLM7.io section includes explicit rate limits and notes the availability of higher paid-tier limits, which may help mitigate abuse at scale.


Preview

### LLM7.io {#llm7io}

An affordable LLM provider. Just start using powerful models instantly.

Last update: September 2025

Limits:
- 30 requests/min (anonymous, no sign-up)
- 120 requests/min (free token via https://token.llm7.io/)
- Up to ~1,300 requests/min on paid tiers

Example (Python):
# ... OpenAI-compatible usage with base_url="https://api.llm7.io/v1" ...

Models (selection):
- deepseek-r1-0528
- gemini-2.5-flash-lite (text, image)
- gpt-4o-mini-2024-07-18 (text, image)
- gpt-4.1-nano-2025-04-14 (text, image)
- gpt-o4-mini-2025-04-16 (text, image)
- qwen2.5-coder-32b-instruct
- ...

Checklist

  • Section added under Free Providers with anchor and TOC entry
  • Included rate limits, usage example, and representative model list
  • No other providers or limits altered
  • README regenerated from src/pull_available_models.py (generator inputs updated accordingly)

Copy link

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Summary of Changes

Hello @chigwell, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request expands the list of available resources for developers by adding LLM7.io to the 'Free Providers' section of the README. The update provides essential information about this new provider, including its OpenAI-compatible API, various rate limits, and a selection of supported models, making it easier for users to discover and utilize cost-effective LLM access.

Highlights

  • New Free Provider Added: LLM7.io has been integrated into the 'Free Providers' section of the README, offering an OpenAI-compatible API with a clear free tier.
  • Detailed Provider Information: The new entry for LLM7.io includes comprehensive details such as rate limits for anonymous, free-token, and paid tiers, a Python example demonstrating API usage, and a representative list of available models.
  • README Structure Update: The Table of Contents has been updated with a new entry for LLM7.io, and a dedicated anchored section has been added to the README, which was subsequently regenerated.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in issue comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Copy link

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request adds LLM7.io as a new free provider, which is a valuable addition. The changes are well-documented in the pull request description. My main feedback is regarding the modification of the auto-generated README.md file. To ensure maintainability, these changes should be incorporated into the generation script (src/pull_available_models.py) instead of making direct edits to the README. I've also noted some minor areas for improvement in the model list for consistency and clarity.

Comment on lines +360 to +413
### [LLM7.io](https://llm7.io) {#llm7io}

An affordable LLM provider. Just start using powerful models instantly.

**Last update:** September 2025

**Limits:**
- 30 requests/min (anonymous, no sign-up)
- 120 requests/min (free token via https://token.llm7.io/)
- Up to ~1,300 requests/min on paid tiers

**Example Usage (Python):**
```python
import openai

client = openai.OpenAI(
base_url="https://api.llm7.io/v1",
api_key="unused" # Or get it for free at https://token.llm7.io/ for higher rate limits.
)

response = client.chat.completions.create(
model="gpt-4.1-nano-2025-04-14",
messages=[
{"role": "user", "content": "Tell me a short story about a brave squirrel."}
]
)

print(response.choices[0].message.content)
```

**Models (selection):**

* `deepseek-r1-0528`
* `gemini-2.5-flash-lite` (text, image)
* `mistral-small-3.1-24b-instruct-2503`
* `nova-fast`
* `gpt-4o-mini-2024-07-18` (text, image)
* `gpt-4.1-nano-2025-04-14` (text, image)
* `gpt-o4-mini-2025-04-16` (text, image)
* `qwen2.5-coder-32b-instruct`
* `roblox-rp`
* `bidara` (text, image)
* `mirexa` (text, image)
* `rtist`
* `mistral-small-2503`
* `open-mixtral-8x7b`
* `deepseek-ai/DeepSeek-R1-0528`
* `deepseek-v3-0324`
* `deepseek-r1`
* `l3.3-ms-nevoria-70b`
* `midnight-rose-70b-v2.0.3`
* `l3-70b-euryale-v2.1`
* `l3-8b-stheno-v3.2`
* `gemma-2-2b-it`

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

high

The README.md file is auto-generated by src/pull_available_models.py and should not be edited directly, as indicated by the warning at the top of the file. Please add the information for LLM7.io to the generation script and re-run it to update the README. This ensures that future updates to the provider list are consistent and maintainable, and prevents your changes from being overwritten.

Comment on lines +392 to +413
* `deepseek-r1-0528`
* `gemini-2.5-flash-lite` (text, image)
* `mistral-small-3.1-24b-instruct-2503`
* `nova-fast`
* `gpt-4o-mini-2024-07-18` (text, image)
* `gpt-4.1-nano-2025-04-14` (text, image)
* `gpt-o4-mini-2025-04-16` (text, image)
* `qwen2.5-coder-32b-instruct`
* `roblox-rp`
* `bidara` (text, image)
* `mirexa` (text, image)
* `rtist`
* `mistral-small-2503`
* `open-mixtral-8x7b`
* `deepseek-ai/DeepSeek-R1-0528`
* `deepseek-v3-0324`
* `deepseek-r1`
* `l3.3-ms-nevoria-70b`
* `midnight-rose-70b-v2.0.3`
* `l3-70b-euryale-v2.1`
* `l3-8b-stheno-v3.2`
* `gemma-2-2b-it`

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

This model list has a few areas for improvement to enhance clarity and consistency with the rest of the document:

  1. Sorting: The list is not alphabetically sorted. Sorting it would improve readability and align with the format of other provider sections.
  2. Typo: There appears to be a typo on line 398. gpt-o4-mini-2025-04-16 should likely be gpt-4o-mini-2025-04-16.
  3. Redundancy: The list seems to contain redundant entries for the same model, such as deepseek-r1-0528, deepseek-ai/DeepSeek-R1-0528, and deepseek-r1. It would be clearer to list each unique model only once.

Applying these changes would make the information more accessible to users.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant