The earlier responses seem pretty snappy, but they are not returning much data. In a real-world system there will be far more. Let's mock some with the `faker` gem.
### Add fake data for testing

Add the `faker` gem to your Gemfile:

```ruby
gem 'faker', group: [:development, :test]
```
And add some seed data using the seeds file:

```ruby
# This file should contain all the record creation needed to seed the database with its default values.
# The data can then be loaded with the rails db:seed command (or created alongside the database with db:setup).
#
# Examples:
#
# movies = Movie.create([{ name: 'Star Wars' }, { name: 'Lord of the Rings' }])
```
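The comments above are just Rails' generated boilerplate. A sketch of seeding a large batch of fake contacts with Faker follows; the `Contact` attribute names used here are assumptions, so adjust them to match your model:

```ruby
require 'faker'

# Create 20,000 fake contacts so there is enough data to exercise
# the caching and pagination features discussed below.
20_000.times do
  Contact.create!(
    name_first: Faker::Name.first_name,  # attribute names assumed
    name_last:  Faker::Name.last_name,
    email:      Faker::Internet.email
  )
end
```

Load the data with `rails db:seed`.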
There are some things we can do to work around this. First we should add a config file to our initializers: create a file named `jsonapi_resources.rb` in the `config/initializers` directory and add this:
```ruby
JSONAPI.configure do |config|
  # Config settings will go here
end
```
#### Caching

We can enable caching so the next request will not require the system to process all 20K records again.

We first need to turn on caching for the Rails portion of the application:

```bash
rails dev:cache
```
To enable caching of JSONAPI responses we need to specify which cache to use (and, in v0.10.x and later, that we want all resources cached by default). Add the following to the initializer you created earlier:

```ruby
JSONAPI.configure do |config|
  config.resource_cache = Rails.cache

  # The following option works in versions v0.10 and later
  # config.default_caching = true
end
```
If you are using a version earlier than v0.10.x, you need to enable caching for each resource type you want the system to cache. Add the following line to the `contacts` resource:

```ruby
class ContactResource < JSONAPI::Resource
  caching

  # ...
end
```
If we restart the application and make the same request, it will still take about the same amount of time (actually slightly longer, as the resources are being added to the cache). If we then repeat the request, the time should drop significantly: from ~8s to ~1.6s on my system for the same 20K contacts.

We might be able to live with the performance of the cached results, but we should plan for the worst case, so we need another solution to keep our responses snappy.
#### Pagination
Instead of returning the full result set when the user asks for it, we can break it into smaller pages of data. That way the server never needs to serialize every resource in the system at once.
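Conceptually, a paged paginator just maps a page number and page size onto a limit/offset window over the full result set. A minimal Ruby sketch of that arithmetic (illustrative only, not jsonapi-resources' internal implementation):

```ruby
# Translate a page request into a limit/offset window.
# Illustrative sketch -- jsonapi-resources handles this internally.
def page_window(page_number, page_size, total_count)
  offset = (page_number - 1) * page_size
  {
    offset: offset,
    # Never ask the database for more rows than remain in the set
    limit: [page_size, [total_count - offset, 0].max].min,
    last_page: (total_count / page_size.to_f).ceil
  }
end

page_window(1, 50, 20_000)   # => {offset: 0, limit: 50, last_page: 400}
page_window(400, 50, 20_000) # => {offset: 19950, limit: 50, last_page: 400}
```

With 20K contacts and 50 per page, the server only ever serializes 50 records per request instead of all 20,000.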
We can add pagination with a config option in the initializer. Add the following to `config/initializers/jsonapi_resources.rb`:
```ruby
JSONAPI.configure do |config|
  config.resource_cache = Rails.cache
  # config.default_caching = true

  # Options are :none, :offset, :paged, or a custom paginator name
  config.default_paginator = :paged # default is :none

  # Example sizes; defaults are 10 and 20 respectively
  config.default_page_size = 50
  config.maximum_page_size = 500
end
```
Now we only get the first 50 contacts back, and the request is much faster (about 80ms). The response also includes a top-level `links` key with links for retrieving the remaining resources in the set.
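With the `:paged` paginator those links look roughly like this (host and page numbers are illustrative, assuming 20K contacts and a page size of 50):

```json
{
  "links": {
    "first": "http://localhost:3000/contacts?page%5Bnumber%5D=1&page%5Bsize%5D=50",
    "next": "http://localhost:3000/contacts?page%5Bnumber%5D=2&page%5Bsize%5D=50",
    "last": "http://localhost:3000/contacts?page%5Bnumber%5D=400&page%5Bsize%5D=50"
  }
}
```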
This allows your client to iterate over the `next` links to fetch the full result set without putting extreme pressure on your server.
The `default_page_size` setting is used if the request does not specify a size, and `maximum_page_size` limits the size a client may request.
*Note:* The default page sizes are very conservative. There is significant overhead in making many small requests, so tuning the page sizes should be considered essential.
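Clients can override the defaults per request via the `page` query parameters (URL-encoded as `page%5Bnumber%5D` / `page%5Bsize%5D`; the host here assumes the local dev server from this tutorial):

```shell
# Request the third page of 100 contacts; sizes above maximum_page_size are not allowed
curl "http://localhost:3000/contacts?page%5Bnumber%5D=3&page%5Bsize%5D=100"
```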