
[for discussion] Integrate a load test into the local dev tools and CI #1609

Closed
wants to merge 6 commits

Conversation

@monacoremo (Member) commented Oct 5, 2020

This is an experiment in integrating a load test into the local dev tools and CI.

To run the load test locally, run postgrest-loadtest in nix-shell. It will run for 60s by default.

This is based on https://locust.io/, but I'm happy to try out other frameworks if you have suggestions!

The locustfile could be extended with more complex reads, writes etc., which can then be selected via tags to simulate different workloads.

The Postgres database, PostgREST server, Nginx reverse proxy and the load generator all run on the same machine with this setup, which makes it easy to run quickly both locally and in CI, but it impairs comparability.

Todos:

  • output a human-readable report (perhaps with GitHub checks integration?)
  • validate that the results are sufficiently stable across runs
  • compare performance (e.g. to current master) and potentially fail if a certain threshold is passed?
  • make postgrest-loadtest an option in nix-shell in order to maintain fast nix-shell startup without cachix
  • add more cases to be tested (GET with embed, POST, etc.)
  • ...
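
For illustration, here is a rough sketch of how a workload could be selected via tags from the command line. The flags are standard locust CLI options, but the locustfile path, host and tag name are just placeholders, not something that exists in this PR yet:

# non-interactive run of only the tasks tagged "read" (all names below are placeholders)
locust -f test/loadtest/locustfile.py \
  --headless -u 10 -r 2 --run-time 60s \
  --tags read \
  --host http://localhost:3000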

@wolfgangwalther (Member)

I love the idea. That will really be helpful. I wonder how comparable the load test results will be in CI from run to run - but that's something we should be able to find out pretty quickly.

If it were possible to somehow run both the current master build and the PR build at the same time, that could increase comparability a lot and also allow automatic comparisons that fail the test above/below a certain threshold?

I have no idea about the whole CI pipeline that is set up (and especially the nix stuff), but there seems to be a bit of caching involved everywhere. Is caching possible across runs? And can you set up your own caches? If yes to both, something like the following could maybe work:

  • run all tests and build
  • move new build to cache-pr
  • only on the master branch: move build to cache-master as well
  • in the loadtest part: load builds from cache-pr and cache-master and run both to compare

So basically the current build of master would always be kept in a separate cache, to be able to run it against the current PR build. Even if CircleCI performance differed between runs, running both builds at the same time should eliminate that. See the sketch below.
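
Purely as a sketch of how that could look with cachix (the cache names are made up, and this assumes we can create and push to our own caches):

# on every PR build: push the build results to a PR cache
# ("postgrest-pr" and "postgrest-master" are hypothetical cache names)
nix-build | cachix push postgrest-pr

# only on the master branch: push the same results to the master cache as well
nix-build | cachix push postgrest-master

# in the loadtest step: pull both builds from their caches and run them side by side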

Maybe I'm completely off here and this doesn't work at all... just wanted to throw that out!

@wolfgangwalther (Member)

Oh, and I'm not sure whether you know about the stuff Steve is doing, see #1600 (comment)?

Maybe those can play together?

@monacoremo (Member Author) commented Oct 5, 2020

@wolfgangwalther Agreed, it would be great to get a direct performance comparison! E.g. in nixpkgs, the build bot Ofborg generates a report that is accessible directly in GitHub: https://github.com/NixOS/nixpkgs/pull/99650/checks?check_run_id=1211150408

No idea how they do this though, I will need to look into it.

By the way, results from this first run in CI (a bit hidden in the output right now, need to see how to make it more visible):

 Name                                                          # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
--------------------------------------------------------------------------------------------------------------------------------------------
 GET /                                                            621     0(0.00%)  |     423      76    1768     390  |   10.39    0.00
 GET /articles                                                   6304     0(0.00%)  |     295       1    1872     260  |  105.48    0.00
 GET /orders?select=name                                         6373     0(0.00%)  |     295       1    1747     260  |  106.63    0.00
--------------------------------------------------------------------------------------------------------------------------------------------
 Aggregated                                                     13298     0(0.00%)  |     301       1    1872     270  |  222.50    0.00

Response time percentiles (approximated)
 Type     Name                                                              50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
 GET      /                                                                 390    490    560    600    740    860   1100   1100   1800   1800   1800    621
 GET      /articles                                                         260    370    440    480    600    710    860    960   1300   1900   1900   6304
 GET      /orders?select=name                                               260    370    440    490    610    700    830    920   1300   1700   1700   6373
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
 None     Aggregated                                                        270    380    450    490    610    710    860    960   1300   1900   1900  13298

--> 222 req/s and a response time of 1.9s at the 99.99th percentile

I'll fix the styling failure and we'll see what comes out in a second run :-)

@wolfgangwalther happy to guide you through the Nix and CI setup if you're interested - it would also be good to get your pointers on where we can add to the documentation of that setup!

@monacoremo (Member Author) commented Oct 5, 2020

Second run:

 Name                                                          # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
--------------------------------------------------------------------------------------------------------------------------------------------
 GET /                                                            847     0(0.00%)  |     290      69    1175     260  |   14.17    0.00
 GET /articles                                                   8244     0(0.00%)  |     201       1    1405     180  |  137.89    0.00
 GET /orders?select=name                                         8108     0(0.00%)  |     200       1    1222     180  |  135.61    0.00
--------------------------------------------------------------------------------------------------------------------------------------------
 Aggregated                                                     17199     0(0.00%)  |     205       1    1405     190  |  287.67    0.00

Response time percentiles (approximated)
 Type     Name                                                              50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
 GET      /                                                                 260    340    390    420    510    580    700    760   1200   1200   1200    847
 GET      /articles                                                         180    260    300    340    420    490    580    660    890   1400   1400   8244
 GET      /orders?select=name                                               180    250    300    330    420    490    580    690    910   1200   1200   8108
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
 None     Aggregated                                                        190    260    310    340    420    500    590    680    920   1300   1400  17199

I rebased on the Postgres 13 PR that got merged earlier, so let me re-run once more!

Edit:
Third run:

Name                                                          # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
--------------------------------------------------------------------------------------------------------------------------------------------
 GET /                                                            865     0(0.00%)  |     261      58     853     240  |   14.47    0.00
 GET /articles                                                   8753     0(0.00%)  |     181       1     922     160  |  146.44    0.00
 GET /orders?select=name                                         8822     0(0.00%)  |     179       1    1271     160  |  147.59    0.00
--------------------------------------------------------------------------------------------------------------------------------------------
 Aggregated                                                     18440     0(0.00%)  |     184       1    1271     160  |  308.51    0.00

Response time percentiles (approximated)
 Type     Name                                                              50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
 GET      /                                                                 240    300    350    380    470    540    590    680    850    850    850    865
 GET      /articles                                                         160    230    280    300    380    450    520    600    780    920    920   8753
 GET      /orders?select=name                                               160    230    270    300    380    450    520    590    780   1300   1300   8822

--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
 None     Aggregated                                                        160    230    280    310    390    460    530    600    780   1100   1300  18440

Not very consistent so far; we might need to run the test for longer to even things out in CI.

@steve-chavez (Member)

Also love the idea!

Oh, and I'm not sure whether you know about the stuff Steve is doing, see #1600 (comment)?
Maybe those can play together?

The benchmark I'm doing is mostly about how many req/s we can support on different EC2 instances. It shouldn't conflict with this PR. I use k6 for my load tests, but it's cool to use locust on our CI :).

compare performance (e.g. to current master) and potentially fail if a certain threshold is passed?

This should definitely be the goal! Failing on the threshold would let us know of drops in performance when changes happen.

Including a couple more scenarios on this PR would be ideal:

The above are done on a different db (Chinook). Using any of the tables from our fixtures for those would be cool.

nix/loadtest.nix (review thread, outdated, resolved)
@monacoremo force-pushed the loadtest branch 3 times, most recently from b8690b0 to 8eef2c6 on October 18, 2020
      when: always
  - run:
      name: Build all derivations from default.nix
      command: nix-build
Member

The nix-build is now run after all the tests in the pipeline. How are the tests now run against the new build? It looks like they are run against the cached version from cachix, which would always be the last build from master? What am I missing?

Member Author

Good point @wolfgangwalther, I'll need to add a comment to document this. This setup works correctly and makes the tests run a bit earlier in the job, failing faster if needed.

nix-env -iA in the earlier step also uses nix-build under the hood. At that point, only the things that are really needed to run the tests are built, or pulled from the cache if available. nix-env -iA will never install anything stale from the cache.

The nix-build call at this step then just builds whatever has not yet been built or pulled from the cache.
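
Roughly, the job does something like this (the attribute name is a placeholder, not the actual one from default.nix):

# earlier step: build or pull from the cache only what the test scripts need,
# and install those scripts ("tests" is a placeholder attribute name)
nix-env -f default.nix -iA tests

# ... the test steps run here, against the freshly built code ...

# this step: build everything else from default.nix that is not yet
# in the local store or available from the cache
nix-build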

Member

Ah, cool. I just had a comment on that previous step, saying that it should maybe be renamed then. I deleted it again after I realized there are actually 2 steps that have nix-env -iA in them. So a comment plus a rename of one of the steps would surely help, I guess.

I understand that we could basically run the tests and nix-build in parallel after we are done with "Install testing scripts"? Have you tried doing that with CircleCI's "workspaces" feature?
It looks like you could split this into 3 jobs, where the test and build jobs run in parallel after the first one. I guess you could just copy the whole /nix folder between those jobs via persist_to_workspace / attach_workspace.

See here for reference:
https://circleci.com/blog/persisting-data-in-workflows-when-to-use-caching-artifacts-and-workspaces/
https://circleci.com/blog/deep-diving-into-circleci-workspaces/

One benefit of that would be that those jobs would both show up separately in the check list on GitHub. I think this could then be extended to all the tests against different PG versions - to have them show up one by one on the check list. Only if that's wanted, of course.

Member Author

Using workspaces is a cool idea! We'll need to test how this performs, as the /nix/ folder can easily be a few GB in size, with a few GHC versions for the static builds etc. Not sure whether 'saving' and 'loading' the workspace could become a bottleneck (it would be with the CircleCI caching feature, but maybe workspaces work differently).

Member

as the /nix/ folder can easily be a few GB in size, with a few GHC versions for the static builds etc

Shouldn't those be fetched only at the nix-build stage?

But yeah, performance needs to be tested. It's very likely that it won't actually perform better.

@wolfgangwalther (Member)

Another question:

Looking at the report at https://app.circleci.com/pipelines/github/PostgREST/postgrest/426/workflows/6fb87532-cdd6-4936-a3ca-bfb181d088b3/jobs/6779.

The step "Run the test specs against PostgreSQL 9.5" took > 2 minutes. Looking at the output it shows that the lib and test-suite are rebuilt completely. However I understood, that this should have happenend in the step before ("Install testing scripts") already, because of nix-env -iA?

Is the same thing happening here twice? Or is nix-env -iA actually not building anything, but the test command builds it on-demand?

@monacoremo (Member Author)

Is the same thing happening here twice? Or is nix-env -iA actually not building anything, but the test command builds it on-demand?

I will also need to clarify/document that somewhere :-) Nix builds the whole environment required for running the tests (including all library dependencies, GHC, cabal-install and Postgres), but the tests themselves are run using cabal-install directly.

The postgrest-test-* scripts that we define through Nix are intended for incremental testing/development; they don't use Nix to build/run the tests themselves. Building/running the tests with Nix would mean that we couldn't use the caching and incremental builds that cabal-install provides, which would be quite painful. So we use Nix to reproducibly define and build absolutely everything needed for running the tests, but then run the tests using cabal v2-test. We ripped out the caching for this last step as it was causing issues (a cabal bug?), which is why you are seeing that 2 min build time currently.
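
As a sketch, the split looks roughly like this (the exact commands the postgrest-test-* scripts run may differ):

# Nix provides the reproducible environment: GHC, cabal-install, libraries, Postgres, ...
nix-shell

# inside the shell, the tests are run with cabal-install directly,
# so rebuilds are incremental:
cabal v2-build
cabal v2-test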

@wolfgangwalther (Member)

Thanks, I think I understand it a whole lot better now.

@wolfgangwalther (Member)

@monacoremo
I just started nix-shell --arg loadtest true on your branch and then switched to current master to run the load tests there. I assume this should work because the nix-shell environment is loaded into memory by this time and doesn't care that the .nix files on disk have changed, right?

I was surprised to see that when running postgrest-loadtest the executable was never re-built. I think this should be triggered automatically, just like with the spec tests. Maybe it's just because all the builds for those commits have already been cached? It doesn't look like that from the output, though. I just noticed that postgrest-test-io behaves exactly the same - it starts the tests immediately. However, when I do nix-build after switching branches, this does load new builds from the cache.

I'm trying nix-build && postgrest-loadtest right now and I am getting results that look quite similar across runs... unfortunately, they also look similar across commits where they should not:

  • 350 req/s on current master
  • 350 req/s on 698fac8 (before the prepared statements commit)
    (both values are ± 2 req/s across different runs on the same branch, so quite stable)

I expect the performance to be significantly lower without prepared statements, as that's what Steve's tests have shown.

I guess the results from nix-build are not used for the loadtest. I tried with a manual cabal v2-build as well, but that didn't work either (I didn't really expect it to).

Is there anything that I need to do differently?

Ah. I did something differently now: I stashed your changes and applied them to the master branch, to be able to launch nix-shell --arg loadtest on the current branch. This seems to work; now I get 450 req/s for current master and only 350 req/s for the other commit mentioned above.

So this means that the executable used for the load/memory/io tests is actually built when launching nix-shell. That seems to limit the use of postgrest-loadtest and the other commands inside nix-shell a lot, because I can't test my current changes.

Any way we can change this?

@wolfgangwalther (Member) commented Nov 25, 2020

It looked like the numbers were quite consistent at the beginning - they ranged around ± 5 req/s. I expected to get better numbers when running longer, so I took a couple of 30-minute runs. The variance did not change. That's not what I would have expected, and it calls the reliability (repeatability) of those tests into question.

I wonder whether we can implement proper A/B testing, where we always compare two postgrest executables with each other. Ideally in an alternating setting, something like 5s A, 5s B, ..., repeated 10 times or so. That should be a lot more independent of the current machine load and give a delta of req/s as a result. This delta should be much easier to draw conclusions from.
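
Purely as a sketch of that idea (the binary paths and the run_loadtest helper are hypothetical - nothing like this exists yet):

# alternate short runs against two builds A and B and collect the results;
# "run_loadtest <binary> <seconds>" is a made-up helper for whatever the
# dev tools end up providing
for i in $(seq 1 10); do
  run_loadtest ./result-a/bin/postgrest 5 >> results-a.txt
  run_loadtest ./result-b/bin/postgrest 5 >> results-b.txt
done
# afterwards, compare the two result sets (e.g. their medians) to get the delta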

The Postgres database, PostgREST server, Nginx reverse proxy and the load generator all run on the same machine with this setup, which makes it easy to run quickly both locally and in CI, but it impairs comparability.

Another question: Why is there nginx running at all? Can the clients not make the requests directly to PostgREST? I understand that nginx (or another reverse proxy) is probably part of most setups - but we can't do much about its performance, at least in this repo. Including it feels like taking resources (and focus) off of testing PostgREST itself.

@monacoremo (Member Author)

I haven't looked at this for a while; it's next up on my list.

There are a few interesting options here for maximum performance tests, like wrk. I got 3000rps on my local machine with it. Thinking about including a balanced load test with locust, and a max unbalanced load test with another tool.

@monacoremo (Member Author)

Nginx helped make PostgREST more stable under very highly concurrent loads, if I remember correctly - I will test that again.

We will definitely be able to run the incremental build, now that we've figured out cabal v2-exec.
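
Something along these lines should let the load test hit the working-tree build instead of the executable baked in at nix-shell startup (the config file path is just an example):

# build the current working tree incrementally, then run that binary
cabal v2-build exe:postgrest
cabal v2-exec -- postgrest ./test/postgrest.conf   # example config path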

A/B testing could be possible based on our nightly releases - let's think about this.

@wolfgangwalther (Member)

I did a few tests as well last week, but haven't been able to write them up yet. Some interesting numbers for sure; not exactly sure what to make of them.

@wolfgangwalther (Member)

Another question: Why is there nginx running at all? Can the clients not make the requests directly to PostgREST? I understand that nginx (or another reverse proxy) is probably part of most setups - but we can't do much about its performance, at least in this repo. Including it feels like taking resources (and focus) off of testing PostgREST itself.

I tested it without the nginx stuff and it's working fine. Performance is actually the same, so it does not seem to hurt. The log output is a lot better (cleaner), because all the error logging from nginx is gone. For the simple case, I don't think we need nginx.


I also experimented a bit with different numbers of users. Comparing last week's master branch and the commit right before prepared statements were introduced, with 5s per step:

Data set (req/s; decimal comma)
users    master    pre-prepared
0 0,0 0,0
5 34,0 35,1
10 68,0 66,4
15 103,0 98,5
20 133,3 131,3
25 166,3 162,0
30 196,1 193,4
35 232,6 213,1
40 248,8 239,0
45 257,4 295,1
50 300,6 346,7
55 336,4 341,9
60 289,8 313,6
65 317,6 358,9
70 393,9 367,9
75 362,3 338,0
80 410,4 296,4
85 406,3 298,0
90 399,0 309,9
95 421,4 352,7
100 404,8 339,9
105 423,5 310,4
110 395,9 333,6
115 334,3 288,1
120 380,5 286,0
125 353,5 322,7
130 343,8 320,0
135 259,9 294,1
140 316,9 269,0
145 348,9 289,7
150 354,6 293,4
155 315,4 256,3
160 352,6 241,7
165 335,9 273,3
170 330,1 280,9
175 333,4 267,3
180 346,0 247,0
185 254,3 287,6
190 289,9 259,1
195 275,0 290,9
Scatter plot (red = master, blue = pre-prepared) [image]

Up to 30 users the numbers are exactly the same. Although there is a bit more variation, the same trend still continues up to 55 users. From 60 users on, the variation starts to increase massively. It makes no sense that performance first decreases and then increases again with more users - so this is random variation for sure. I observed this variation with much longer run times as well. The trend is clear, though: the performance improvement with prepared statements is clearly visible across runs. However:

  • with lower numbers of users, the variation is low, but both branches show no difference
  • with higher numbers of users, the variation is high, but both branches perform differently

From the plot, one can tell that this makes the current locust test basically useless, because even with a huge performance difference you could end up randomly in a spot like the one shown at users=135 in the plot, which would indicate that prepared statements perform worse than before.

I'm pretty sure the reason is that with lower numbers of users locust is not able to saturate PostgREST, so we have idle times and all requests can be served. So the number of requests is basically the number of requests that locust can send, not the number that PostgREST can handle. Clearly locust requires a lot of resources, and when we increase the number of users, locust and PostgREST have to share those, since we're running everything on the same machine. The most likely cause is just random scheduling differences between runs, I assume. We can't really fight those. Plus, when we want to run those tests in CI, we only have 2 cores available (I looked that up; I think it was 2 - very few for sure). I tried a few other things as well to reduce variation - with no success.

There's one other thought I had: why are we testing with a multi-user setup at all? We are already forcing PostgREST to use only 1 database connection with db-pool=1, and that makes sense, so the whole locust machinery of creating multiple users just adds a huge overhead. If we were to test in a multi-threaded situation, we would also test our underlying http framework big-time, as that provides the multi-threadedness, right? But we don't need to do that.

For our dev and CI tools use case, where we want to know how our "request to query" code performs, it would be best to just use a single-threaded test runner that performs 1 request, waits for the response and then sends the next request. This would need to be implemented most efficiently - so no Python :D. We can then just hammer PostgREST with one query for 5s, then the next query for 5s, etc., to get an idea of different code paths.
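
As a minimal sketch of such a runner (the URL and duration are placeholders):

# fire sequential requests at one endpoint for a fixed time - one request
# at a time, no concurrency - and count how many completed
end=$((SECONDS + 5))
count=0
while [ "$SECONDS" -lt "$end" ]; do
  curl -s -o /dev/null "http://localhost:3000/orders?select=name"
  count=$((count + 1))
done
echo "requests in 5s: $count"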


We could do this with a simple while loop around curl, as sketched above, I guess - but @monacoremo you posted a nice list:

There are a few interesting options here for maximum performance tests, like wrk. I got 3000rps on my local machine with it. Thinking about including a balanced load test with locust, and a max unbalanced load test with another tool.

I think what I asked for above is what you mean by an "unbalanced" load test, right?

I will now narrow down the list of benchmark tools in the linked repo to those that might make sense for us. I am only looking for:

Only very few of the tools support http/2, unix sockets and all request methods. Some came close, but only one of the tools looks really promising for solving all of that: https://github.com/tsenart/vegeta. Some highlights of that:

-unix-socket string
Connect over a unix socket. This overrides the host address in target URLs

This seems to be - if at all supported - trouble everywhere else. Overriding the host address in target URLs is exactly what is needed, though - no fiddling with URIs to support sockets...

-cpus int
Number of CPUs to use (defaults to the number of CPUs you have)

That will be a good one for limiting us to 1 CPU in CI.

http format
The http format almost resembles the plain-text HTTP message format defined in RFC 2616 but it doesn't support in-line HTTP bodies, only references to files that are loaded and used as request bodies (as exemplified below).
Although targets in this format can be produced by other programs, it was originally meant to be used by people writing targets by hand for simple use cases.

If I understand correctly, we can just write our test cases as plain-text files in HTTP message format. This is awesome, because it requires exactly no boilerplate code at all.
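
For example, a targets file with a read and a write case could look roughly like this (the endpoints, body file and socket path are made-up examples, just to show the format):

# targets in vegeta's plain-text http format; the POST body is referenced from a file
cat > loadtest.http <<'EOF'
GET http://postgrest/orders?select=name

POST http://postgrest/articles
Content-Type: application/json
@./articles-body.json
EOF

# run it over a unix socket and print a report
vegeta attack -unix-socket /tmp/postgrest.socket \
  -targets loadtest.http -rate 0 -duration 30s \
  | vegeta report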

vegeta is also available in nixpkgs already: https://github.com/NixOS/nixpkgs/blob/278d8641c8da8cea2fa506e7a887d887ba38b14c/pkgs/tools/networking/vegeta/default.nix

I will try to give vegeta a test run.

@steve-chavez (Member)

That's a nice find, @wolfgangwalther. I'd also read about locust being slow before, but I didn't think it'd make that much of a difference.

I will try to give vegeta a test run

If vegeta has issues, I can vouch for k6. For my local db-pool=1 tests, I used it with a single virtual user, which, for the unprepared/prepared comparison, tended to give 100 req/s more in favor of prepared statements (I don't remember the numbers exactly, but it was something like 450 req/s vs 550 req/s).

@wolfgangwalther (Member)

Although vegeta is much nicer to handle, the numbers still jump up and down for me. That is, req/s across the whole run. The more test runs I made, the clearer it became to me that average req/s is not the right metric for our dev tools and CI.

We have a lot of unrelated factors that contribute to the performance of a single request (other processes running, CPU scheduling, even automatically adjusted CPU frequency and things like that) that create a lot of noise. Every once in a while there will be an outlier with huge response times - and that will kill the whole average. I found that the median response time (reported as 50th percentile) was already much more reliable.

After upgrading nix and vegeta to the latest version, I also had access to the minimum response time. This is much more reliable (see #1600 (comment) for some numbers). Minimum latency is a measurement of peak performance, when all the other conditions are in our favor. As such it is the most direct way to measure the performance of the actual code. The number itself might not have much meaning in a real-world use case, because there those other factors are important - but we don't want to measure the performance of the overall system, just the code we've written, ideally only the changes we made.

In my tests, I used this PR and made a couple of changes:

  • removed nginx
  • updated nix
  • changed postgrest.conf to create /tmp/postgrest.socket (should probably create a tmp file with mktemp for this)
  • used the following command to run vegeta:
${vegeta}/bin/vegeta -cpus 1 \
  attack -max-workers 1 -workers 1 -unix-socket /tmp/postgrest.socket \
  -targets ./test/loadtest.http -rate 0 -duration 30s "$@" \
  | ${vegeta}/bin/vegeta report
  • tests were just one request so far, so ./test/loadtest.http looks like this:
GET http://postgrest/orders?select=name

I didn't test again with locust, but I would assume if we could find some kind of peak performance metric there, this would also be more reliable. However, I think the upside for vegeta as our test-runner is quite high, see my comment above about criteria.

If vegeta has issues, I can vouch for k6. For my local db-pool=1 tests, I used it with a single virtual user, which, for the unprepared/prepared comparison, tended to give 100 req/s more in favor of prepared statements (I don't remember the numbers exactly, but it was something like 450 req/s vs 550 req/s).

I couldn't find any unix socket support mentioned in the docs. But apart from that, it looks like it supports everything we need here as well. The "1 virtual user" approach would probably work, too. However, the virtual user concept plus the requests being written in JavaScript surely add overhead.

With vegeta I am getting ~ 2400 req/s with a single process and no concurrency. For peak performance measurements this is good, because the more requests we can make, the higher the chance to find that "true minimum". Also, to be fair, I did not run k6 on my machine and don't know how well it performs here.

@wolfgangwalther (Member) commented Dec 6, 2020

A/B testing could be possible based on our nightly releases - let's think about this.

Nightly builds could be a problem, because that would be the static build - while for the actual test we want to switch to the incremental build, right?

Maybe we can do this on a per-commit basis. So postgrest-loadtest <sha1> would compare the current working directory against that commit.

This could be done with something like the following (inspired by here), in parts pseudo-code as comments:

sha1="${1:-master}"   # default: compare against current master

# create a temp directory
tmpdir=$(mktemp -d /tmp/postgrest-XXXXXX)

git archive "$sha1" | tar -x -C "$tmpdir"

# copy the existing build artifacts to benefit from incremental builds for small differences
cp -r dist-newstyle "$tmpdir"

# build in temp directory
(cd "$tmpdir" && cabal v2-build)
# launch postgrest in temp directory

# build in working directory
cabal v2-build
# launch postgrest in working directory

# hit both instances hard

Default for sha1 would be "master". So all PRs would automatically compare against current master.

@steve-chavez (Member)

I found that the median response time (reported as 50th percentile) was already much more reliable.

That's really interesting, @wolfgangwalther. I've just found that the median is pretty stable in my tests on #1600 (comment). I'll share the k6 output of 5 runs here, see the http_req_duration med:

med=1.41ms, 578.878085/s
    data_received..............: 5.3 MB 176 kB/s
    data_sent..................: 1.9 MB 65 kB/s
    failed requests............: 0.00%  ✓ 0   ✗ 17433
    http_req_blocked...........: avg=5.57µs  min=3.21µs   med=5.09µs  max=516.57µs p(90)=6.57µs   p(95)=7.61µs
    http_req_connecting........: avg=23ns    min=0s       med=0s      max=415.78µs p(90)=0s       p(95)=0s
    http_req_duration..........: avg=1.52ms  min=957.23µs med=1.41ms  max=145.69ms p(90)=1.66ms   p(95)=1.76ms
    http_req_receiving.........: avg=98.42µs min=49.37µs  med=97.74µs max=704.7µs  p(90)=125.98µs p(95)=136.43µs
    http_req_sending...........: avg=31.93µs min=14.22µs  med=28.37µs max=527.87µs p(90)=50.95µs  p(95)=58.36µs
    http_req_tls_handshaking...: avg=0s      min=0s       med=0s      max=0s       p(90)=0s       p(95)=0s
    http_req_waiting...........: avg=1.39ms  min=838.26µs med=1.28ms  max=145.33ms p(90)=1.53ms   p(95)=1.61ms
    http_reqs..................: 17433  578.878085/s
    iteration_duration.........: avg=1.7ms   min=1.1ms    med=1.59ms  max=146.79ms p(90)=1.86ms   p(95)=1.95ms
    iterations.................: 17433  578.878085/s
    vus........................: 1      min=1 max=1
    vus_max....................: 1      min=1 max=1                                                    
med=1.41ms, 582.899933/s
    data_received..............: 5.4 MB 178 kB/s
    data_sent..................: 2.0 MB 65 kB/s
    failed requests............: 0.00%  ✓ 0   ✗ 17555
    http_req_blocked...........: avg=5.4µs   min=3.29µs   med=4.89µs  max=503.17µs p(90)=6.38µs  p(95)=7.39µs
    http_req_connecting........: avg=23ns    min=0s       med=0s      max=404.73µs p(90)=0s      p(95)=0s
    http_req_duration..........: avg=1.51ms  min=971.63µs med=1.41ms  max=81.52ms  p(90)=1.66ms  p(95)=1.75ms
    http_req_receiving.........: avg=98.34µs min=49.09µs  med=97.35µs max=669.27µs p(90)=126.5µs p(95)=136.29µs
    http_req_sending...........: avg=31.99µs min=15.15µs  med=28.66µs max=538.74µs p(90)=51.16µs p(95)=57.94µs
    http_req_tls_handshaking...: avg=0s      min=0s       med=0s      max=0s       p(90)=0s      p(95)=0s
    http_req_waiting...........: avg=1.38ms  min=856.02µs med=1.28ms  max=81.39ms  p(90)=1.52ms  p(95)=1.61ms
    http_reqs..................: 17555  582.899933/s
    iteration_duration.........: avg=1.69ms  min=1.1ms    med=1.6ms   max=81.72ms  p(90)=1.85ms  p(95)=1.95ms
    iterations.................: 17555  582.899933/s
    vus........................: 1      min=1 max=1
    vus_max....................: 1      min=1 max=1                                                   
med=1.41ms, 570.758333/s
    data_received..............: 5.2 MB 174 kB/s
    data_sent..................: 1.9 MB 64 kB/s
    failed requests............: 0.00%  ✓ 0   ✗ 17189
    http_req_blocked...........: avg=5.63µs   min=3.19µs   med=5.18µs   max=328.34µs p(90)=6.81µs   p(95)=7.77µs
    http_req_connecting........: avg=12ns     min=0s       med=0s       max=216.94µs p(90)=0s       p(95)=0s
    http_req_duration..........: avg=1.54ms   min=982.29µs med=1.41ms   max=184.67ms p(90)=1.7ms    p(95)=1.86ms
    http_req_receiving.........: avg=101.12µs min=50.62µs  med=100.81µs max=702.54µs p(90)=129.31µs p(95)=140.84µs
    http_req_sending...........: avg=32.45µs  min=14.74µs  med=29.39µs  max=632.58µs p(90)=50.31µs  p(95)=58.57µs
    http_req_tls_handshaking...: avg=0s       min=0s       med=0s       max=0s       p(90)=0s       p(95)=0s
    http_req_waiting...........: avg=1.41ms   min=884.56µs med=1.28ms   max=184.53ms p(90)=1.55ms   p(95)=1.71ms
    http_reqs..................: 17189  570.758333/s
    iteration_duration.........: avg=1.73ms   min=1.11ms   med=1.6ms    max=184.84ms p(90)=1.9ms    p(95)=2.07ms
    iterations.................: 17189  570.758333/s
    vus........................: 1      min=1 max=1
    vus_max....................: 1      min=1 max=1                                                 
med=1.4ms, 583.66937/s
    data_received..............: 5.4 MB 178 kB/s
    data_sent..................: 2.0 MB 65 kB/s
    failed requests............: 0.00%  ✓ 0   ✗ 17577
    http_req_blocked...........: avg=5.56µs  min=3.18µs   med=5.14µs  max=347.51µs p(90)=6.79µs   p(95)=7.74µs
    http_req_connecting........: avg=14ns    min=0s       med=0s      max=261.8µs  p(90)=0s       p(95)=0s
    http_req_duration..........: avg=1.51ms  min=941.61µs med=1.4ms   max=153.5ms  p(90)=1.64ms   p(95)=1.73ms
    http_req_receiving.........: avg=98.86µs min=47.42µs  med=98.56µs max=633.2µs  p(90)=126.63µs p(95)=136.6µs
    http_req_sending...........: avg=32.62µs min=15.17µs  med=29.45µs max=507.07µs p(90)=52.19µs  p(95)=58.61µs
    http_req_tls_handshaking...: avg=0s      min=0s       med=0s      max=0s       p(90)=0s       p(95)=0s
    http_req_waiting...........: avg=1.38ms  min=833.4µs  med=1.26ms  max=153.3ms  p(90)=1.5ms    p(95)=1.59ms
    http_reqs..................: 17577  583.66937/s
    iteration_duration.........: avg=1.69ms  min=1.07ms   med=1.58ms  max=153.7ms  p(90)=1.83ms   p(95)=1.93ms
    iterations.................: 17577  583.66937/s
    vus........................: 1      min=1 max=1
    vus_max....................: 1      min=1 max=1                                           
med=1.42ms, 573.985382/s
    data_received..............: 5.3 MB 175 kB/s
    data_sent..................: 1.9 MB 64 kB/s
    failed requests............: 0.00%  ✓ 0   ✗ 17285
    http_req_blocked...........: avg=5.62µs  min=3.31µs   med=5.17µs  max=399.4µs  p(90)=6.57µs   p(95)=7.66µs
    http_req_connecting........: avg=14ns    min=0s       med=0s      max=251.85µs p(90)=0s       p(95)=0s
    http_req_duration..........: avg=1.54ms  min=951.16µs med=1.42ms  max=286.92ms p(90)=1.67ms   p(95)=1.76ms
    http_req_receiving.........: avg=99.29µs min=49.7µs   med=98.4µs  max=444.76µs p(90)=127.47µs p(95)=137.89µs
    http_req_sending...........: avg=32.45µs min=15.21µs  med=29.05µs max=579.37µs p(90)=51.42µs  p(95)=59µs
    http_req_tls_handshaking...: avg=0s      min=0s       med=0s      max=0s       p(90)=0s       p(95)=0s
    http_req_waiting...........: avg=1.4ms   min=840.34µs med=1.28ms  max=286.77ms p(90)=1.53ms   p(95)=1.62ms
    http_reqs..................: 17285  573.985382/s
    iteration_duration.........: avg=1.72ms  min=1.08ms   med=1.6ms   max=287.18ms p(90)=1.86ms   p(95)=1.96ms
    iterations.................: 17285  573.985382/s
    vus........................: 1      min=1 max=1
    vus_max....................: 1      min=1 max=1                                           

(For my tests, the minimum response time, http_req_duration min, doesn't seem to be as stable.)

Prior to the prepared statements change I'm getting a stable med=2 ms.

@wolfgangwalther changed the base branch from master to main on December 31, 2020
@monacoremo closed this on Apr 12, 2021
@monacoremo deleted the loadtest branch on July 27, 2021