Feature/user rate limits #1342
base: main
Conversation
@classmethod
def get_since(cls, user: User, since: datetime) -> Sequence["ChatMessage"]:
    """Return all chat messages for a given user created after 'since'."""
    return cls.objects.filter(chat__user=user, created_at__gt=since)
Suggested change:
- return cls.objects.filter(chat__user=user, created_at__gt=since)
+ return cls.objects.filter(chat__user=user, created_at__gt=since).aggregate(total=Sum("token_count"))["total"] or 0
you can get the database to do the sum here?
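To make the reviewer's point concrete, here is a minimal sketch of pushing the sum into the database rather than iterating over rows in Python. It uses raw sqlite3 with a hypothetical single-table schema standing in for the ChatMessage model; in Django the equivalent is `aggregate(Sum("token_count"))`, and `COALESCE` plays the role of the `or 0` fallback when no rows match.

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical minimal schema standing in for the ChatMessage table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE chat_message (user_id INTEGER, token_count INTEGER, created_at TEXT)"
)
now = datetime(2024, 1, 1, 12, 0)
rows = [
    (1, 100, (now - timedelta(seconds=30)).isoformat()),  # inside the window
    (1, 50, (now - timedelta(seconds=10)).isoformat()),   # inside the window
    (1, 999, (now - timedelta(minutes=5)).isoformat()),   # outside the window
    (2, 77, now.isoformat()),                             # different user
]
conn.executemany("INSERT INTO chat_message VALUES (?, ?, ?)", rows)

def tokens_since(user_id: int, since: datetime) -> int:
    # The database computes the sum; only one integer crosses the wire.
    # COALESCE mirrors the `or 0` fallback for users with no messages.
    (total,) = conn.execute(
        "SELECT COALESCE(SUM(token_count), 0) FROM chat_message "
        "WHERE user_id = ? AND created_at > ?",
        (user_id, since.isoformat()),
    ).fetchone()
    return total

print(tokens_since(1, now - timedelta(minutes=1)))  # 150
```

This avoids materialising every ChatMessage row just to add up one column, which matters if the rate-limit check runs on every request.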
and this could now be moved to be a method belonging to the User model?
but I think there is a bigger problem here... ChatMessage.token_count
only includes what the user has typed, and not the included files. I think we need to make this change first?
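The reviewer's concern can be sketched as follows. This is a hypothetical helper, not code from the PR: `approx_tokens` is a crude whitespace-split stand-in for whatever tokenizer the project actually uses, and the function names are illustrative. The point is that the stored count should cover the full payload sent to the backend model, message text plus attached file contents.

```python
def approx_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer; whitespace split only.
    return len(text.split())

def message_token_count(user_text: str, file_contents: list) -> int:
    """Count tokens for a message plus its attached files, so the
    rate limit reflects what is actually sent to the backend model."""
    return approx_tokens(user_text) + sum(approx_tokens(f) for f in file_contents)

print(message_token_count("summarise this file", ["one two three four"]))  # 7
```

If only the typed text were counted, a user attaching a large file would consume far more of the shared quota than the limiter records.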
Context
Users can break each other's sessions by consuming our token quota on backend models. We want users to only be able to consume their share of these quotas.
Changes proposed in this pull request
Add a user rate limiter which imposes a maximum tokens-per-minute limit for all users across all models.
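The proposed per-user limit could be sketched as a sliding-window limiter. This is an illustrative in-memory sketch, not the PR's implementation: the class name, the 60-second window, and the injectable `now` parameter (useful for testing) are all assumptions.

```python
import time
from collections import deque
from typing import Optional

class UserTokenRateLimiter:
    """Sliding-window limiter: each user may consume at most
    max_tokens_per_minute tokens, counted across all models."""

    def __init__(self, max_tokens_per_minute: int):
        self.max_tokens = max_tokens_per_minute
        self.window = 60.0  # seconds
        self._events = {}  # user -> deque of (timestamp, tokens)

    def _used(self, user: str, now: float) -> int:
        events = self._events.setdefault(user, deque())
        # Drop events that have aged out of the window.
        while events and now - events[0][0] >= self.window:
            events.popleft()
        return sum(tokens for _, tokens in events)

    def try_consume(self, user: str, tokens: int, now: Optional[float] = None) -> bool:
        """Record the spend and return True, or return False if it
        would push the user over their per-minute quota."""
        now = time.monotonic() if now is None else now
        if self._used(user, now) + tokens > self.max_tokens:
            return False
        self._events[user].append((now, tokens))
        return True

limiter = UserTokenRateLimiter(max_tokens_per_minute=1000)
print(limiter.try_consume("alice", 600, now=0.0))   # True
print(limiter.try_consume("alice", 600, now=1.0))   # False: 1200 > 1000
print(limiter.try_consume("bob", 600, now=1.0))     # True: separate quota
print(limiter.try_consume("alice", 600, now=61.0))  # True: old spend aged out
```

Because each user has an independent window, one user exhausting their share no longer starves everyone else, which is the failure mode described in the Context above. A production version would need shared storage (e.g. the database or a cache) rather than process-local state.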
Things to check