Rate limiter not working as expected #75

Open

jose-zenledger opened this issue Apr 26, 2022 · 4 comments

jose-zenledger commented Apr 26, 2022

Perhaps this is an implementation issue. The following file shows the per-second rate not working as I expect: if you set the rate limiter to 100 per second and issue 101 requests, it should take more than a second to finish, since only 100 of the 101 requests can run in the first second and the last one has to wait until the next second.

package main

import (
	"context"
	"os"
	"sync"
	"sync/atomic"
	"testing"
	"time"

	"github.com/go-redis/redis/v8"
	"github.com/go-redis/redis_rate/v9"
	"github.com/stretchr/testify/require"
)

type Limiter interface {
	Allow(ctx context.Context) (time.Duration, error)
}

type Rediser interface {
	Eval(ctx context.Context, script string, keys []string, args ...interface{}) *redis.Cmd
	EvalSha(ctx context.Context, sha1 string, keys []string, args ...interface{}) *redis.Cmd
	ScriptExists(ctx context.Context, hashes ...string) *redis.BoolSliceCmd
	ScriptLoad(ctx context.Context, script string) *redis.StringCmd
	Del(ctx context.Context, keys ...string) *redis.IntCmd
}

func NewRedisLimiter(r Rediser, key string, perSec int) Limiter {
	return &redisLimiter{
		limiter: redis_rate.NewLimiter(r),
		key:     key,
		perSec:  perSec,
	}
}

type redisLimiter struct {
	limiter *redis_rate.Limiter
	key     string
	perSec  int
}

func (l *redisLimiter) Allow(ctx context.Context) (time.Duration, error) {
	r, err := l.limiter.Allow(ctx, l.key, redis_rate.PerSecond(l.perSec))
	if err != nil {
		return 0, err
	}

	return r.RetryAfter, nil
}

func TestRedisLimiter_Allow(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	radd := os.Getenv("REDIS_ADDR") // set this in the env to host:port
	opts := redis.Options{Addr: radd}
	rc := redis.NewClient(&opts)
	defer rc.Close()

	perSecond := 100 // set the per second rate

	var val int64
	limiter := NewRedisLimiter(rc, "TestRedisLimiter_Allow", perSecond)
	runs := perSecond + 1 // one more than the per second rate (last request should be in the next second)
	wg := sync.WaitGroup{}
	wg.Add(runs)
	start := time.Now()
	for i := 0; i < runs; i++ {
		go func() {
			defer wg.Done()

			retryAfter, err := limiter.Allow(ctx)
			require.NoError(t, err)

			for retryAfter > 0 {
				time.Sleep(retryAfter)
				retryAfter, err = limiter.Allow(ctx)
				require.NoError(t, err)
			}

			atomic.AddInt64(&val, 1)
		}()
	}
	wg.Wait()
	elapsed := time.Since(start)
	require.GreaterOrEqual(t, elapsed, time.Second) // the 101st request should spill into the next second, so the total should exceed one second
	require.Equal(t, runs, int(val))
}

Here is a docker-compose.yml that runs the test via docker compose up test, after go mod init && go mod tidy:

version: "3.8"

services:
  test:
    image: golang:1.18
    volumes:
      - .:/app
    working_dir: /app
    environment:
      - REDIS_ADDR=redis:6379
    command: sh -c "go test -coverprofile=cover.out ./... -race && go tool cover -html=cover.out -o cover.html"
    depends_on:
      redis:
        condition: service_healthy


  redis:
    image: redis:alpine
    expose:
      - "6379"
    healthcheck:
      test: ["CMD", "redis-cli","ping"]

The only thing I can think of is that maybe the first second doesn't really count somehow? I tried 100 per second with 1000 total requests and got a little over 9 seconds, where I would expect roughly 10 seconds if all 1000 were paced at 100 per second.

jose-zenledger (Author)

With a 100 per second rate limit I get the following results:

101 requests: 36.880584ms
201 requests: 1.014007251s
301 requests: 2.021583626s
401 requests: 3.016126084s
501 requests: 4.026494002s
601 requests: 5.023141128s
701 requests: 6.014478627s
801 requests: 7.01344642s
901 requests: 8.029704919s

Seems like the algorithm is off for the first second. Any idea what's going on?

fracasula

> Seems like the algorithm is off for the first second. Any idea what's going on?

Try with burst 1. #58 (comment)
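
In case it helps anyone else reading this: redis_rate.PerSecond(n) appears to set Burst equal to Rate, so an idle key can absorb the first 100 requests immediately. That would explain both the ~37ms for 101 requests and the ~9 seconds for 1000 (after the initial burst of 100, the remaining 900 are paced at 100 per second). A sketch of the Allow method from the report above with the suggested burst of 1; the explicit Limit values are my reading of the defaults, adjust as needed:

// Same wrapper as in the original report, but with an explicit Limit and
// Burst set to 1, so requests are spaced at one emission interval
// (Period / Rate) instead of being absorbed by the initial burst.
func (l *redisLimiter) Allow(ctx context.Context) (time.Duration, error) {
	limit := redis_rate.Limit{
		Rate:   l.perSec,
		Burst:  1,
		Period: time.Second,
	}

	r, err := l.limiter.Allow(ctx, l.key, limit)
	if err != nil {
		return 0, err
	}

	return r.RetryAfter, nil
}

With Burst set to 1 the 101-request test above should indeed take more than a second, at the cost of never allowing short spikes.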

To-echo commented May 9, 2023

@jose-zenledger I ran into the same problem. How did you end up solving it? Thanks

saeedvaziry

I have the same issue
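
For anyone else landing here, a self-contained sketch that reproduces the burst behaviour discussed above. The localhost address and key name are placeholders, and the explicit Limit mirrors what PerSecond(100) appears to produce; treat it as an illustration, not a confirmed description of the library internals:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/go-redis/redis/v8"
	"github.com/go-redis/redis_rate/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // assumed local Redis
	defer rdb.Close()

	limiter := redis_rate.NewLimiter(rdb)

	// As far as I can tell, redis_rate.PerSecond(100) behaves like this
	// explicit Limit, where Burst equals Rate:
	limit := redis_rate.Limit{
		Rate:   100,
		Burst:  100,
		Period: time.Second,
	}

	// An idle key absorbs the first 100 requests immediately; only the 101st
	// is denied, and its RetryAfter is about 10ms (one emission interval),
	// not a full second.
	for i := 1; i <= 101; i++ {
		res, err := limiter.Allow(ctx, "burst-demo", limit)
		if err != nil {
			panic(err)
		}
		if res.Allowed == 0 {
			fmt.Printf("request %d denied, retry after %v\n", i, res.RetryAfter)
		}
	}
}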
