Performance with many operations #15
Almost 5 months have passed without any reply. Is this project not maintained anymore?
It is, it's just lower priority than the 20 other projects I maintain.
I understand your situation. I reacted based on your "Looking for contributors!" appeal on the homepage. I thought you would be interested in our help with developing this project. Maybe I hoped for a quick evaluation or a few hints based on your knowledge of the js-data ecosystem. Well, if you are not interested, then we will develop this feature on our own.
@radarfox are you still working on this? I would be interested in this, and in revamping the localforage adapter to work with js-data v3, so we could join forces.
Hello Jason,
we are planning to migrate our app's storage adapter from "js-data-localstorage" to "js-data-localforage" because of some bugs in the localStorage implementation on some browsers we would like to support. But we have run into some serious performance issues.
Both adapters, in their current versions, save these storage items for a resource:
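Roughly, the layout looks like the sketch below (the key names are only illustrative, not the adapters' actual format): one "ids map" item per resource, plus one storage item per record.

```js
// Illustrative sketch of the current "many items" layout (hypothetical keys).
// One "ids map" item for the resource:
localStorage.setItem('myApp.user', JSON.stringify({ 1: 1, 2: 1, 3: 1 }));
// ...plus one item per record:
localStorage.setItem('myApp.user.1', JSON.stringify({ id: 1, name: 'Alice' }));
localStorage.setItem('myApp.user.2', JSON.stringify({ id: 2, name: 'Bob' }));
localStorage.setItem('myApp.user.3', JSON.stringify({ id: 3, name: 'Carol' }));
```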
Our average storage now contains around 6000 items with around 4 MB of data in total, and the number of items is likely to keep growing over time. We create/destroy all of that data at once as the user logs in/out of our app (with `createMany` and `destroyMany`). So I did some storage performance testing on the target Android 5 tablet (Samsung Galaxy Tab 4 10.1), and here are the results:
Tests on other devices, like iOS and desktop browsers, came out basically the same. The time the operations take rises drastically with the number of items, not with their data size.
The conclusion is that the current approach with many storage items is not acceptable under this kind of storage utilization. None of our clients will wait 2 extra minutes just for offline support. The problem exists in both the localstorage and localforage adapters, but it affects the localforage adapter in IndexedDB or WebSQL mode much more.
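The per-item overhead is easy to see even outside the adapters; here is a rough browser-console sketch (not our actual test code, and with smaller numbers than our 6000-item case to stay inside the localStorage quota) comparing many small items against one combined item of similar total size:

```js
// Time writing n separate localStorage items.
function timeManyItems(n) {
  const record = JSON.stringify({ id: 0, name: 'test', payload: 'x'.repeat(200) });
  const t0 = performance.now();
  for (let i = 0; i < n; i++) {
    localStorage.setItem('bench.item.' + i, record);
  }
  return performance.now() - t0;
}

// Time writing the same amount of data as one combined item.
function timeSingleItem(n) {
  const records = {};
  for (let i = 0; i < n; i++) {
    records[i] = { id: i, name: 'test', payload: 'x'.repeat(200) };
  }
  const t0 = performance.now();
  localStorage.setItem('bench.all', JSON.stringify(records));
  return performance.now() - t0;
}

console.log('many items: ', timeManyItems(2000).toFixed(1), 'ms');
console.log('single item:', timeSingleItem(2000).toFixed(1), 'ms');

// Clean up the benchmark keys.
for (let i = 0; i < 2000; i++) localStorage.removeItem('bench.item.' + i);
localStorage.removeItem('bench.all');
```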
So what should we do to get rid of this problem?
I'm thinking of rewriting the whole adapter functionality so that there is always 1 storage item per resource. This item would basically contain the same data as the current "ids map" item, with each id's value substituted by the model data. Here is an example:
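Roughly (the key name is illustrative, not the adapter's real format):

```js
// Proposed single-item-per-resource format: same shape as the "ids map",
// but each id's flag is replaced by the full record.
localStorage.setItem('myApp.user', JSON.stringify({
  1: { id: 1, name: 'Alice' },
  2: { id: 2, name: 'Bob' },
  3: { id: 3, name: 'Carol' }
}));
```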
I'm aware that performance of single-model changes will suffer, because of the need to JSON stringify/parse the whole resource's data into the single storage item. But the extra time should be much lower and acceptable, as it removes the 2 minutes of waiting on the "xxxMany" operations.
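For illustration, a single-record update would then look roughly like this (the helper and key name are hypothetical, not adapter API):

```js
// Updating one record means parsing and re-stringifying the whole resource.
function updateRecord(resourceKey, id, changes) {
  const all = JSON.parse(localStorage.getItem(resourceKey) || '{}');
  all[id] = Object.assign({}, all[id], changes);
  localStorage.setItem(resourceKey, JSON.stringify(all));
}

updateRecord('myApp.user', 2, { name: 'Robert' });
```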
Maybe the best solution would be to add something like a `storageStrategy` configuration, so you can choose for each resource whether adapter operations use a single storage item or many.

As I'm writing this post, I have another idea for the storage organization. Maybe it should work as a "hybrid" and allow both the single and the many approach at the same time; for example, for a situation where I had `storageStrategy: 'many'` on an existing resource, then switch to `storageStrategy: 'single'`, and don't want to lose the old data. It might look something like the sketch below.
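Here is a rough sketch of what the hybrid read path could look like (the key names and the `storageStrategy` idea are hypothetical, not an existing adapter API):

```js
// Hybrid layout: the resource index holds either a marker (record lives in
// its own item, the 'many' strategy) or the full record inline (the 'single'
// strategy), so data written under the old strategy stays readable, e.g.:
//   'myApp.user' -> { 1: 1,                        // 'many': see 'myApp.user.1'
//                     2: { id: 2, name: 'Bob' } }  // 'single': stored inline

function findRecord(resourceKey, id) {
  const index = JSON.parse(localStorage.getItem(resourceKey) || '{}');
  const entry = index[id];
  if (entry === undefined || entry === null) return null;
  if (typeof entry === 'object') return entry;  // inline record ('single')
  // Otherwise the entry is just a presence flag; load the per-record item.
  return JSON.parse(localStorage.getItem(resourceKey + '.' + id));
}

console.log(findRecord('myApp.user', 2));
```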
So what do you think of all that? I hope I haven't flooded you with information, but I was trying to explain all the important ideas and causes.

This issue is very important for our application, and we hope you can help us solve it. It would be nice if you could contribute some code, or at least let us implement it and then merge it into your adapter's production version, so we don't lose further features and fixes. Implementing this (or some other working) solution is unavoidable for us.
Thanks for your interest and for developing these superb plugins ;)