Add support for optimistic concurrency #8

Open

scale-tone opened this issue May 11, 2015 · 5 comments
@scale-tone
Owner

DataContext.SubmitChanges() should fail if there's a newer version of an entity in DynamoDB and/or in the cache.

@andycmaj

This looks like it could be the answer to #21, and something I was looking for as well...

Would you accept a PR for this? And if so, have you thought yet about how you would like to implement it?

@andycmaj

Hmmm, it doesn't look like this will be possible with your current update implementation. At least, not if you want to leverage the DynamoDB SDK's built-in support for document versioning.

BatchWrites don't support DynamoDBVersion attributes in batched entities.

Would it be a good idea to internally fork between batched ops and DataContext.Save/Delete depending on whether the document type in question is versioned?

@scale-tone
Owner Author

Hi Andy!

You're absolutely welcome to send a PR for that, I'd really appreciate your help!

For the implementation, I'd propose to stay consistent with the AWS Object Persistence Model. We could just start respecting the DynamoDBVersion attribute: if the entity type has a property marked with that attribute, we do a conditional update; if no such property exists in the type, we fall back to the existing "last writer wins" implementation.
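
A minimal sketch of that detection step, assuming we check the entity type via reflection (GetVersionProperty and MyEntity are hypothetical names, not part of the library):

    using System;
    using System.Linq;
    using System.Reflection;
    using Amazon.DynamoDBv2.DataModel;

    // Branching idea: conditional update for versioned types, existing
    // "last writer wins" batched path otherwise.
    if (GetVersionProperty(typeof(MyEntity)) != null)
    {
        Console.WriteLine("versioned: do a conditional PutItem (see the Expected sketch below)");
    }
    else
    {
        Console.WriteLine("not versioned: fall back to the existing BatchWrite path");
    }

    // Hypothetical helper: find the property marked with [DynamoDBVersion],
    // or null if the entity type is not versioned.
    static PropertyInfo GetVersionProperty(Type entityType) =>
        entityType
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .FirstOrDefault(p => p.GetCustomAttribute<DynamoDBVersionAttribute>() != null);

    // Hypothetical versioned entity, using the AWS Object Persistence Model attributes.
    class MyEntity
    {
        [DynamoDBHashKey]
        public string Id { get; set; }

        [DynamoDBVersion]
        public int? VersionNumber { get; set; }
    }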

Yes, BatchWrites do not seem to support conditional expressions or anything like that. And yes, it makes perfect sense to do multiple parallel PutItem operations with properly configured PutItemOperationConfig.Expected values instead.

What I wouldn't recommend is trying to use DynamoDBContext.Save() to implement this. First, because it would be hard to do (note: in ExecuteUpdateBatchAsync() you only have Documents, not entities). And second, because we generally strive not to build on top of AWS Object Persistence: we can re-use its attributes (to look consistent), but not utilize its functionality.

@andycmaj

So, something like this for the use of PutItem?

    table.PutItem(document, new PutItemOperationConfig
    {
        Expected = new Document(new Dictionary<string, DynamoDBEntry>
        {
            // In reality, we'd get "VersionNumber" from a previous check for any [Versioned]
            // members of the original entity (before Document conversion?)
            { "VersionNumber", document["VersionNumber"] }
        })
    });

questions:

  1. Why not use the UpdateItem API?
  2. "do multiple parallel PutItem operations": this seems like it needs unpacking... do you mean just an unlimited number of PutItemAsync calls in parallel, with Task.WaitAll on the results? Seems like performance and error handling would both be very tough there, no?

@scale-tone
Owner Author

scale-tone commented Mar 13, 2017

Why not use UpdateItem api?

Well, just to be consistent with what the BatchWrite operation currently does. It would be confusing if in one case items were replaced and in the other they were amended.

just an unlimited amount of PutItemAsync calls in parallel and Task.WaitAll the results?

Something like that, except not Task.WaitAll(), but Task.WhenAll().
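
A minimal sketch of that, assuming the Document Model's async API (ParallelPutSketch, PutAllAsync and GetExpectedVersion are hypothetical names, and "VersionNumber" is the assumed version attribute from the example above):

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;
    using Amazon.DynamoDBv2.DocumentModel;

    public static class ParallelPutSketch
    {
        public static async Task PutAllAsync(Table table, IEnumerable<Document> documents)
        {
            // One conditional PutItem per document, all started in parallel.
            var puts = documents.Select(doc =>
                table.PutItemAsync(doc, new PutItemOperationConfig
                {
                    Expected = GetExpectedVersion(doc)
                }));

            // Unlike Task.WaitAll, Task.WhenAll is awaitable and doesn't block a
            // thread; if any PUT fails, this await rethrows the first exception.
            await Task.WhenAll(puts);
        }

        // Hypothetical helper: build the version-check condition for one document.
        static Document GetExpectedVersion(Document doc) =>
            new Document(new Dictionary<string, DynamoDBEntry>
            {
                { "VersionNumber", doc["VersionNumber"] } // assumed attribute name
            });
    }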

Seems like performance and error handling would both be very tough there, no?

Well, the performance would definitely be lower compared to what we have with BatchWrite (my experiments show it would be 5-10 times slower). But there are no other options, as we just can't use BatchWrite. Maybe fine-tuning the task scheduler for those tasks would help (not in the first version, obviously).

Error handling should be OK. If any PUT fails, there'll be an exception, and the cache will be cleared. Yes, some items might still be written to DynamoDB successfully, but that's the same situation as we have with BatchWrite now (upon getting an exception, the consumer is supposed to implement retries or whatever).
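
For illustration, the caller side of the earlier PutAllAsync sketch might look like this inside an async method (cache is a hypothetical reference to the cache implementation; ConditionalCheckFailedException is the SDK exception raised when the version check fails):

    try
    {
        await ParallelPutSketch.PutAllAsync(table, documents);
    }
    catch (Amazon.DynamoDBv2.Model.ConditionalCheckFailedException)
    {
        // A newer version was written concurrently: drop the cached entries
        // and let the consumer retry, same contract as with BatchWrite today.
        cache.Clear(); // hypothetical cache API
    }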

danieljurek pushed a commit to danieljurek/linq2dynamodb that referenced this issue Mar 24, 2017
* No more bulk operations for CUD, using async tasks instead
* Removing entities also respects version field
* Update caching behavior to handle partial failure in optimistic locking scenario