There needs to be some means of recovering from an update conflict in the event that two concurrent edits are made to the same record during a network fork.
Some work has already been done at a protocol level to afford all necessary metadata to handle this, but the implementation remains unfinished. #40 is also related, as author metadata needs to be included with records in order for revision conflicts to be understandable and manageable by humans.
Core logic for this would be best placed in the `hdk_records` crate. Specifically:

- `get_latest_header_hash` should be deprecated in favour of alternative logic which returns an error if there are multiple live updates for the same entry. The error must include structured data containing the `HeaderHash`es of the conflicting revisions, such that clients can use that information to recover.
- `update_record` should also be changed to return an error if the specified revision is not the latest live `HeaderHash` for the associated `Entry`. This would prevent edits to old revisions under normal network conditions.
- Some means of recovering from a conflict must also be implemented. This may be a case of the author(s) of a conflicting version requesting an update of the record to the same content as the other divergent branches (in which case Holochain's DHT structure might automatically re-merge without additional custom logic); or, if that is not possible, it might require implementing a special API call to mark one branch as superseded by another. If the latter, this might imply new mutations being added to the ValueFlows GraphQL spec.
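A minimal sketch of what the error-bearing lookup and the `update_record` guard could look like. All type and function names here are hypothetical stand-ins: `HeaderHash` mimics the HDK type, and `live_heads` would in practice be derived from DHT update metadata rather than passed in as a slice.

```rust
// Hypothetical sketch only: `HeaderHash` is a stand-in for the HDK type,
// and `live_heads` would come from DHT update metadata in practice.

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct HeaderHash(pub String);

/// Structured error data, so clients can recover from conflicts.
#[derive(Debug, PartialEq)]
pub enum RevisionError {
    /// Multiple live updates exist for the same entry.
    Conflict { heads: Vec<HeaderHash> },
    /// The specified revision is not the latest live head.
    NotLatest { latest: HeaderHash },
}

/// Replacement for `get_latest_header_hash`: succeeds only when exactly
/// one live head exists, otherwise reports every conflicting revision.
pub fn get_live_head(live_heads: &[HeaderHash]) -> Result<HeaderHash, RevisionError> {
    match live_heads {
        [only] => Ok(only.clone()),
        _ => Err(RevisionError::Conflict { heads: live_heads.to_vec() }),
    }
}

/// Guard for `update_record`: rejects edits based on stale revisions.
pub fn check_updatable(
    specified: &HeaderHash,
    live_heads: &[HeaderHash],
) -> Result<(), RevisionError> {
    let latest = get_live_head(live_heads)?;
    if &latest == specified {
        Ok(())
    } else {
        Err(RevisionError::NotLatest { latest })
    }
}
```

Carrying the conflicting hashes inside the error variant (rather than just a message string) is what lets clients present both branches to a human for recovery.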
Once Holochain core has implemented support for this, we will probably need to address `EntryDhtStatus::Conflict` and potentially change the way `update_record` functions to use new host API calls.
A few moving parts to be implemented & enabled, in addition to the core logic:

- Plumb errors through to the GraphQL client, such that records in conflict return an error scoped to that specific record; any other non-conflicting records in the same request should still be returned successfully, and the error must not cause the whole query to fail.
- Add an integration test to prove that the conflict detection behaviour of `update_entry` works under normal conditions, and that old revisions can't be edited.
- Add a test for network forks, asserting correct error responses in the case of a conflict (after the network partition is resolved) and showcasing how to recover from one.
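As a rough illustration of what the fork test would assert (plain Rust with toy data, no HDK or test harness; a real integration test would drive two conductors through a partition), the shape of the update tree before and after a fork can be modelled like this:

```rust
use std::collections::HashSet;

/// Toy model of an update tree: `(revision, revision_it_updates)` pairs.
/// A live "head" is any revision that no other revision updates.
/// Hypothetical stand-in for real DHT update metadata.
fn live_heads(tree: &[(&str, Option<&str>)]) -> Vec<String> {
    let updated: HashSet<&str> = tree.iter().filter_map(|(_, parent)| *parent).collect();
    let mut heads: Vec<String> = tree
        .iter()
        .map(|(rev, _)| *rev)
        .filter(|rev| !updated.contains(rev))
        .map(String::from)
        .collect();
    heads.sort();
    heads
}

fn main() {
    // Normal conditions: sequential edits leave exactly one live head,
    // so an update against "r2" should succeed.
    let chain = [("r0", None), ("r1", Some("r0")), ("r2", Some("r1"))];
    assert_eq!(live_heads(&chain), vec!["r2"]);

    // Network fork: two agents each updated "r1" during a partition.
    // After the partition heals there are two live heads, and reads
    // should fail with a conflict error listing both of them.
    let fork = [
        ("r0", None),
        ("r1", Some("r0")),
        ("r2a", Some("r1")), // agent A's concurrent edit
        ("r2b", Some("r1")), // agent B's concurrent edit
    ];
    assert_eq!(live_heads(&fork), vec!["r2a", "r2b"]);
}
```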
The original code was adapted from h-be/Acorn or hdk_crud, but didn't suit this context:

- In Acorn, `HeaderHash` is used as the record "id", whereas in holo-rea `EntryHash` is used as the "id".
- `hdk_crud` creates a flat update tree, where all updates reference the original header.
- holo-rea is supposed to create a branching update tree, which this now allows for.

However, this implementation is still very naive: it assumes that the update tree mostly just nests children in one sequence, more like an array than a tree. This won't hold true in practice, as we're dealing in offline-friendly distributed systems. Solving this properly relates to issue #196 (conflict resolution).
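The naivety described above can be shown with a small self-contained sketch (plain Rust, hypothetical names): a traversal that always follows the first update it sees silently picks one branch of a fork, while a branch-aware traversal surfaces every live leaf.

```rust
use std::collections::HashMap;

/// Maps each revision to the revisions that directly update it.
type UpdateTree<'a> = HashMap<&'a str, Vec<&'a str>>;

/// Naive walk, mirroring the array-like assumption: follow the first
/// update at each step and ignore any sibling branches.
fn naive_latest<'a>(tree: &UpdateTree<'a>, root: &'a str) -> &'a str {
    let mut current = root;
    while let Some(next) = tree.get(current).and_then(|children| children.first().copied()) {
        current = next;
    }
    current
}

/// Branch-aware walk: collect every live leaf of the update tree.
fn all_leaves<'a>(tree: &UpdateTree<'a>, root: &'a str) -> Vec<&'a str> {
    match tree.get(root) {
        None => vec![root],
        Some(children) if children.is_empty() => vec![root],
        Some(children) => {
            let mut leaves: Vec<&str> = Vec::new();
            for &child in children {
                leaves.extend(all_leaves(tree, child));
            }
            leaves.sort();
            leaves
        }
    }
}
```

With a fork at `r1` (updated by both `r2a` and `r2b`), `naive_latest` returns only `r2a`, losing `r2b` entirely, while `all_leaves` returns both heads.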