We have some problems with the current state of the synchronisation model. We can now pull the data site by site, but if we loop over each site to pull the content and there is too much content to pull, the system creates duplicates. This happens when the pull task runs twice at the same time.
To prevent this, we need to add a new layer on top of the queue table: the tasks table.
When we add/delete/edit an item on the website, we add it to the queue as usual, but the pull process never grabs items from the queue directly.
Instead, we add a new table "bea_csf_tasks" with the following columns (see the schema sketch below the table):
| Column | Description |
| --- | --- |
| id | just for the index (primary key) |
| date_added | timestamp of when the item was added to the tasks table |
| date_changed | timestamp of when the item was last changed |
| status | current state (todo, running, finished, error) |
| blog_id | blog_id the task should run for |
| ids | IDs of the queue rows to process |
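As a rough illustration, here is a minimal sketch of how that table could be created on plugin activation. Only the column names come from the list above; the types, the `$wpdb` prefix and the extra `blog_status` index are assumptions.

```php
<?php
// Minimal sketch of the proposed bea_csf_tasks table; column names come from
// the list above, the types, prefix and extra index are assumptions.
function bea_csf_create_tasks_table() {
	global $wpdb;

	$table_name      = $wpdb->prefix . 'bea_csf_tasks'; // prefixing is an assumption
	$charset_collate = $wpdb->get_charset_collate();

	$sql = "CREATE TABLE {$table_name} (
		id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
		date_added DATETIME NOT NULL,
		date_changed DATETIME NOT NULL,
		status VARCHAR(20) NOT NULL DEFAULT 'todo',
		blog_id BIGINT(20) UNSIGNED NOT NULL,
		ids LONGTEXT NOT NULL,
		PRIMARY KEY  (id),
		KEY blog_status (blog_id, status)
	) {$charset_collate};";

	require_once ABSPATH . 'wp-admin/includes/upgrade.php';
	dbDelta( $sql );
}
```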
The pull process will now grab the latest row matching its own blog (blog_id = current blog_id) that is not currently being processed (status = todo), and immediately change its state (status = running).
When it is done, it changes the state to finished (status = finished).
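A minimal sketch of that claim/finish flow, assuming the schema above; the function names are hypothetical, and the conditional UPDATE on `status = 'todo'` is what keeps a double run from picking up the same task:

```php
<?php
// Hypothetical claim/finish helpers for the pull process.
function bea_csf_claim_next_task( $blog_id ) {
	global $wpdb;
	$table = $wpdb->prefix . 'bea_csf_tasks'; // assumed table name

	// Latest "todo" task for this blog.
	$task = $wpdb->get_row( $wpdb->prepare(
		"SELECT * FROM {$table} WHERE blog_id = %d AND status = 'todo' ORDER BY id DESC LIMIT 1",
		$blog_id
	) );
	if ( ! $task ) {
		return null;
	}

	// Conditional UPDATE: only succeeds if the row is still "todo", so a
	// concurrent pull run cannot claim the same task twice.
	$claimed = $wpdb->query( $wpdb->prepare(
		"UPDATE {$table} SET status = 'running', date_changed = %s WHERE id = %d AND status = 'todo'",
		current_time( 'mysql' ),
		$task->id
	) );

	return $claimed ? $task : null;
}

function bea_csf_finish_task( $task_id ) {
	global $wpdb;
	$wpdb->update(
		$wpdb->prefix . 'bea_csf_tasks',
		array( 'status' => 'finished', 'date_changed' => current_time( 'mysql' ) ),
		array( 'id' => (int) $task_id )
	);
}
```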
On the PHP side we barely have to change anything in the core: on every action launched during the process we save the IDs added to the queue table, then at "wp_shutdown" we create a new worker line.
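A sketch of what that collector could look like, assuming the standard WordPress `shutdown` hook; the class and method names are hypothetical, and which blog_id to store in the task line is an assumption:

```php
<?php
// Hypothetical collector: gathers queue IDs during the request and writes a
// single "todo" task line at shutdown.
class BEA_CSF_Task_Collector {
	private static $queue_ids = array();

	public static function register() {
		add_action( 'shutdown', array( __CLASS__, 'create_task' ) );
	}

	// To be called by each existing add/edit/delete action right after it
	// inserts a row into the queue table.
	public static function remember( $queue_id ) {
		self::$queue_ids[] = (int) $queue_id;
	}

	public static function create_task() {
		if ( empty( self::$queue_ids ) ) {
			return;
		}

		global $wpdb;
		$now = current_time( 'mysql' );
		$wpdb->insert(
			$wpdb->prefix . 'bea_csf_tasks', // assumed table name
			array(
				'date_added'   => $now,
				'date_changed' => $now,
				'status'       => 'todo',
				'blog_id'      => get_current_blog_id(), // assumption: task targets the current blog
				'ids'          => implode( ',', self::$queue_ids ),
			)
		);
	}
}
BEA_CSF_Task_Collector::register();
```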
This is inspired by https://github.com/humanmade/Cavalcade and https://github.com/humanmade/Cavalcade-Runner (if we want a real daemon worker runner). But this can be hard to implement.
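For reference, a very rough sketch of what such a runner could look like as a WP-CLI command; this is only an illustration reusing the hypothetical helpers above, not how Cavalcade-Runner actually works:

```php
<?php
// Illustrative WP-CLI runner loop built on the hypothetical helpers above;
// bea_csf_process_queue_ids() is hypothetical and would pull the queue rows
// listed in the claimed task.
if ( defined( 'WP_CLI' ) && WP_CLI ) {
	WP_CLI::add_command( 'bea-csf run-tasks', function () {
		while ( true ) {
			$task = bea_csf_claim_next_task( get_current_blog_id() );
			if ( $task ) {
				bea_csf_process_queue_ids( explode( ',', $task->ids ) );
				bea_csf_finish_task( $task->id );
			} else {
				sleep( 5 ); // nothing to do, poll again later
			}
		}
	} );
}
```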