
Data corruption when uploading blob via NewAppendBlobClient => AppendBlock API #24027

Open

vespian opened this issue Jan 30, 2025 · 17 comments

Labels: Client (This issue points to a problem in the data-plane of the library.) · customer-reported (Issues that are reported by GitHub users external to the Azure organization.) · needs-team-attention (Workflow: This issue needs attention from Azure service team or SDK team) · needs-team-triage (Workflow: This issue needs the team to triage.) · Storage (Storage Service (Queues, Blobs, Files))

@vespian

vespian commented Jan 30, 2025

Bug Report

We recently noticed that the data we upload to Azure Blob Storage via the AppendBlock API is not the same as the data we later read back, whenever the API is throttling our requests.

The problem is relatively easy to replicate and can be summarised by the logs below:

[screenshot of the logs]

When uploading a large blob (32MiB) in chunks (4MiB) in a loop, one of the chunk uploads fails with `500 Operation timed out`, but the `AppendBlock` call does not return an error, so the upload continues. Later, when fetching the uploaded blob for inspection using `GetBlob`, we get a `200` status but an `InternalError` status text and a corrupted file. The corruption boils down to the chunk that failed to upload being missing.
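To make the failure mode concrete, here is a minimal sketch of the upload pattern, assuming an existing appendblob.Client (the `uploadInChunks` helper is hypothetical; the actual implementation is in the driver code linked below):

```go
package upload

import (
	"bytes"
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/appendblob"
)

const chunkSize = 4 * 1024 * 1024 // 4MiB chunks of the 32MiB blob

// uploadInChunks appends the blob chunk by chunk. When the service
// throttles one AppendBlock with "500 Operation timed out", the SDK's
// default retry policy retries it transparently, so the loop never sees
// an error even though the stored blob may end up corrupted.
func uploadInChunks(ctx context.Context, client *appendblob.Client, data []byte) {
	for off := 0; off < len(data); off += chunkSize {
		end := off + chunkSize
		if end > len(data) {
			end = len(data)
		}
		body := streaming.NopCloser(bytes.NewReader(data[off:end]))
		if _, err := client.AppendBlock(ctx, body, nil); err != nil {
			log.Fatalf("AppendBlock failed at offset %d: %v", off, err)
		}
	}
}
```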

The code we are using is open source and can be found here: https://gitlab.com/gitlab-org/container-registry/-/blob/a7581d9e743bba63e8318f304a799a09ae9e4580/registry/storage/driver/azure/v2/azure.go#L645

One of the tests that fails due to this issue: https://gitlab.com/gitlab-org/container-registry/-/blob/a7581d9e743bba63e8318f304a799a09ae9e4580/registry/storage/driver/testsuites/testsuites.go?page=2#L1428-1449

The internal issue where we track this problem, and where you can potentially find more details, is here: https://gitlab.com/gitlab-org/container-registry/-/issues/1470

  • import path of package in question: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob
  • SDK version: v1.6.0
  • output of `go version`: go 1.23.2
@github-actions github-actions bot added the Client, customer-reported, needs-team-attention, question, Service Attention, and Storage labels on Jan 30, 2025

Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @xgithubtriage.

@jhendrixMSFT jhendrixMSFT removed the question and Service Attention labels on Jan 30, 2025
@github-actions github-actions bot added the needs-team-triage label on Jan 30, 2025
@jhendrixMSFT
Member

jhendrixMSFT commented Jan 30, 2025

By default, the retry policy will retry requests that fail with a 500 (see the docs here). It looks like you're using the default behavior, but let me know if I got that wrong.

If you have client-side logging enabled you should see the failed request and subsequent retry. So, assuming that the default behavior is in play, it would explain why you see a 500 server-side but not client side (assuming that the retried request succeeded). I looked through some of the links (thanks for providing detailed information, it's super helpful) but didn't see any client-side logs (let us know if I missed it). Would it be possible to reproduce the failure with client-side logging enabled?
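In case it helps, here's a minimal sketch of enabling client-side logging via azcore's log listener (one possible wiring, printing to stdout):

```go
package main

import (
	"fmt"

	azlog "github.com/Azure/azure-sdk-for-go/sdk/azcore/log"
)

func main() {
	// Emit retry-policy decisions and HTTP responses; this is enough to
	// see the failed 500 and the subsequent retry performed by the SDK.
	azlog.SetEvents(azlog.EventRetryPolicy, azlog.EventResponse)
	azlog.SetListener(func(ev azlog.Event, msg string) {
		fmt.Printf("[%s] %s\n", ev, msg)
	})

	// ... construct the azblob client and run the upload as usual ...
}
```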

@jhendrixMSFT
Member

In the meantime, we'll also continue to investigate on our side.

@vespian
Author

vespian commented Jan 30, 2025

@jhendrixMSFT Thank you for your reply and time.

Yes, we are using the default retry policy. I will work on reproducing the problem with client-side logging enabled tomorrow and try to provide you with more information.

@jhendrixMSFT
Member

jhendrixMSFT commented Jan 30, 2025

We have a theory, but it doesn't quite align with your notes. According to the REST docs, a 500 with error code OperationTimedOut means that "The operation may or may not have succeeded on the server side." So, we're wondering if the failed attempt actually succeeded and our retry policy retried the request (due to the 500), resulting in a duplicate block. But you state a block is missing, so it doesn't quite add up. Regardless, I don't think we should retry on an OperationTimedOut and am following up on that.

@vespian
Author

vespian commented Jan 30, 2025

The block was missing or out of order - I am 100% sure it is one of these two options. If it had been duplicated, I would have gotten a negative offset when searching for the block.

In order to understand what exactly gets corrupted, I started sending blobs of data generated using Golang's ChaCha8 pseudo-random generator and made sure to log the seed used in the test output.

I then compared the first 20-something bytes of the first 512-byte block that the tests reported as corrupted against the whole pseudo-random stream that was sent to the backend (i.e. recreated locally using the seed the tests logged), and those bytes were found 4MiB later in the stream, as if the backend had skipped the block.

I do not know if the block was uploaded out of order though - the tests stopped comparing the stream and aborted after finding the first discrepancy.

@vespian
Author

vespian commented Jan 30, 2025

In case you would like to double-check my investigation:

    testsuites.go:107: using rng seed [232 124 25 71 34 224 30 24 232 124 25 71 34 224 30 24 232 124 25 71 34 224 30 24 232 124 25 71 34 224 30 24] for blobbers

and the output of one of the failing tests from the issue I linked to:

       	Test:       	TestAzureDriverSuite/TestConcurrentFileStreams
    blobber.go:125: 
        	Error Trace:	/builds/gitlab-org/container-registry/testutil/blobber.go:125
        	            				/builds/gitlab-org/container-registry/registry/storage/driver/testsuites/testsuites.go:2297
        	            				/builds/gitlab-org/container-registry/registry/storage/driver/testsuites/testsuites.go:1440
        	            				/usr/local/go/src/runtime/asm_amd64.s:1700
        	Error:      	Not equal: 
        	            	expected: []byte{0x14, 0x69, 0xdf, 0x43, 0x79, 0x7f, 0x4d, 0x5e, 0x19, 0x6f, 0xdb, 0x28, 0x8a, 0xa0, 0x98, 0x1, 0xbc, 0x35, 0xc0, 0x9f, 0x9d, 0xb0, 0x1, 0xa4, 0x5, 0x95, 0x13, 0x12, 0x9b, 0x75, 0xd9, 0x69, 0x69, 0x4f, 0x23, 0x51, 0xf5, 0xcc, 0x7a, 0x53, 0x8d, 0xe0, 0x14, 0x23, 0x94, 0x8c, 0xff, 0x2, 0xe2, 0x34, 0x14, 0x44, 0xe9, 0xd3, 0xd5, 0xdb, 0xd7, 0x1e, 0x1c, 0xf5, 0xfe, 0xb0, 0xf9, 0x20, 0xf1, 0xf1, 0x64, 0x12, 0x9a, 0x8f, 0x2c, 0xe0, 0x3c, 0x2c, 0xd6, 0x93, 0x61, 0x24, 0xb3, 0xf7, 0x1, 0x2c, 0x82, 0x7d, 0x90, 0xf0, 0x7a, 0xee, 0xda, 0x9a, 0xf, 0xc3, 0x36, 0x38, 0xd8, 0xef, 0xa8, 0x36, 0xa2, 0xc9, 0x3a, 0x1e, 0x6, 0x2b, 0x24, 0xd3, 0x6b, 0x46, 0xbc, 0x42, 0x8d, 0xb9, 0xa4, 0xb9, 0xa3, 0x1f, 0x99, 0x58, 0x89, 0xd1, 0x2, 0x49, 0xdc, 0x4c, 0x2e, 0x7f, 0xde, 0xe3, 0x68, 0x79, 0x3f, 0xeb, 0xea, 0x91, 0x73, 0xa2, 0x18, 0xaa, 0x9, 0x32, 0x11, 0xef, 0x4, 0xa, 0x4c, 0xaa, 0xce, 0xe4, 0x8e, 0x85, 0x87, 0x88, 0x8d, 0x46, 0x96, 0x16, 0xd4, 0x67, 0xc2, 0x4f, 0xf0, 0xb, 0xaa, 0x4d, 0x48, 0xfd, 0x4a, 0x6a, 0xe6, 0xbc, 0xaf, 0xf0, 0x7e, 0x5f, 0x71, 0x6f, 0xe5, 0x9e, 0x83, 0x37, 0x52, 0xec, 0x36, 0xbe, 0xab, 0x71, 0xbd, 0x88, 0x47, 0x6b, 0x6, 0x78, 0x96, 0x52, 0xe1, 0x68, 0x7e, 0xac, 0x6f, 0x3c, 0x4b, 0x83, 0x8d, 0xb, 0xd5, 0x17, 0x3a, 0x2, 0x33, 0x54, 0x3d, 0x8a, 0x4c, 0xff, 0x1d, 0x6e, 0xe3, 0xe5, 0x50, 0x4, 0xb2, 0x79, 0x39, 0xe9, 0xce, 0x2f, 0x6d, 0xfc, 0x63, 0xa6, 0x57, 0x61, 0x9d, 0x4d, 0x78, 0xb2, 0x3e, 0xbb, 0x1b, 0x6f, 0x0, 0x14, 0xfd, 0xb3, 0x8c, 0x71, 0xe8, 0x58, 0xec, 0x65, 0xd3, 0x46, 0x7e, 0xe2, 0x89, 0xc5, 0xe7, 0x1b, 0x4e, 0xb5, 0x7d, 0x4, 0xa9, 0xcd, 0x5a, 0xda, 0xf4, 0x2c, 0xad, 0x10, 0x49, 0x4d, 0xa, 0x1a, 0xd8, 0x3c, 0xba, 0x6c, 0xdc, 0xbd, 0x29, 0x7e, 0x6b, 0x5, 0xc8, 0xd2, 0x47, 0xc4, 0x51, 0xf7, 0x3e, 0x7a, 0xc6, 0xe5, 0x29, 0xdd, 0x69, 0xe8, 0x47, 0xe5, 0x8b, 0x83, 0xd1, 0x71, 0x1e, 0xed, 0x6d, 0x62, 0x87, 0x46, 0xbf, 0x95, 0xb4, 0xab, 0xcb, 0xac, 0x60, 0xac, 0x6e, 0x18, 0x35, 0xfa, 0x3f, 0xbe, 0x78, 0x3e, 0x91, 0xb0, 0x84, 0x78, 0xa9, 0xb5, 0x8e, 0x9c, 0x29, 0x4a, 0xe2, 0xb0, 0x98, 0x5d, 0xe7, 0x4b, 0x72, 0x5f, 0x22, 0x1c, 0xcd, 0xbb, 0xa5, 0x81, 0x51, 0xc7, 0xae, 0xb0, 0xd9, 0x64, 0xc4, 0xc6, 0xe9, 0x59, 0x5f, 0x3b, 0xc1, 0xbd, 0x63, 0x18, 0x92, 0x28, 0x13, 0x4f, 0xb5, 0xf0, 0x7d, 0xf7, 0xc2, 0xc7, 0x83, 0x16, 0x54, 0x5d, 0x4b, 0x2d, 0x19, 0x6d, 0x1c, 0x20, 0x5b, 0x2b, 0xbb, 0x5f, 0xf6, 0x26, 0x2a, 0x76, 0x5b, 0x2f, 0xc, 0xb4, 0x44, 0xf9, 0xa5, 0xc0, 0xf2, 0xd6, 0xfe, 0x7b, 0xdf, 0xaf, 0x1b, 0x52, 0x6, 0x2d, 0x46, 0xa1, 0x76, 0x72, 0x85, 0xd1, 0x11, 0x5f, 0x83, 0x4a, 0xa, 0xd5, 0x68, 0x62, 0xf3, 0x7d, 0xd, 0xca, 0x8c, 0x50, 0x8, 0x8b, 0x59, 0x24, 0x18, 0x5, 0x39, 0xe3, 0xb7, 0xa9, 0xfa, 0xe5, 0x9d, 0x3f, 0x2d, 0x77, 0xab, 0x1, 0x5, 0xa1, 0xe3, 0xba, 0xdb, 0x8e, 0x84, 0x90, 0xb0, 0xaa, 0x23, 0xb0, 0xee, 0xd, 0x25, 0xde, 0x39, 0x3f, 0xdd, 0x95, 0x4f, 0x9, 0x64, 0xd, 0x36, 0xe4, 0x29, 0x45, 0x22, 0xdb, 0x66, 0x66, 0x99, 0xc8, 0x52, 0x24, 0x3e, 0x58, 0x10, 0x9, 0x78, 0x68, 0xa7, 0xd7, 0xd, 0xc6, 0x3c, 0x15, 0x18, 0x52, 0xe6, 0x93, 0x5b, 0x3c, 0x3, 0x40, 0xfe, 0x9e, 0xc8, 0xa0, 0xdb, 0x79}
        	            	actual  : []byte{0x4e, 0x8a, 0x87, 0x62, 0xf, 0x93, 0x85, 0x91, 0xf3, 0xf1, 0x2f, 0xb8, 0x66, 0x4f, 0x8d, 0xb, 0x48, 0x72, 0x80, 0xbe, 0x30, 0x9e, 0xdf, 0x9c, 0xd2, 0xf4, 0x87, 0xd0, 0x98, 0xf1, 0x7d, 0x9d, 0x75, 0x3d, 0x8a, 0x83, 0x37, 0x7, 0x3f, 0xda, 0x89, 0xd2, 0x0, 0x81, 0x37, 0xd5, 0x2, 0x42, 0xf2, 0x5e, 0xdf, 0x25, 0xdd, 0x48, 0x1c, 0x65, 0x27, 0xb2, 0x2, 0xf3, 0x36, 0x3a, 0x1e, 0xe, 0xeb, 0x54, 0x5b, 0x87, 0x6e, 0x97, 0xcc, 0x8c, 0x54, 0xf6, 0x93, 0xc, 0xe3, 0x41, 0x36, 0x17, 0x63, 0xf3, 0x61, 0x1e, 0xbc, 0x8d, 0x66, 0xc9, 0x65, 0xb7, 0x2a, 0xe0, 0x3b, 0x9f, 0x1b, 0x1a, 0x5e, 0x8a, 0xb0, 0xa1, 0xc8, 0x64, 0x69, 0xa2, 0xaf, 0xd6, 0xff, 0xf8, 0xc3, 0xab, 0x8, 0x8, 0xdf, 0xa1, 0xad, 0x99, 0x3e, 0x85, 0x91, 0x24, 0x4e, 0xb3, 0xf4, 0x15, 0x27, 0x8d, 0x66, 0x11, 0x3c, 0x88, 0xaf, 0xa3, 0xd3, 0x81, 0xd7, 0x6c, 0xb1, 0xa3, 0x5d, 0x41, 0x1a, 0x26, 0x28, 0xb9, 0x6a, 0x4b, 0x55, 0x5, 0x23, 0x67, 0x88, 0x5b, 0xde, 0xeb, 0x28, 0x82, 0xa, 0x3b, 0xc8, 0x4b, 0x45, 0x41, 0xbd, 0xa6, 0x46, 0xb3, 0x1b, 0x10, 0xb7, 0xca, 0xfb, 0x0, 0x73, 0xf2, 0x1e, 0x44, 0x87, 0x4f, 0xfc, 0x39, 0xc7, 0x85, 0xcc, 0x78, 0xdf, 0x9f, 0xb3, 0xa0, 0x5d, 0xda, 0xfe, 0xb8, 0x16, 0xc9, 0xef, 0x61, 0xab, 0x2c, 0x87, 0x7, 0x19, 0xae, 0x89, 0x3f, 0xc8, 0x81, 0xf2, 0x6f, 0x80, 0x10, 0x73, 0x68, 0x48, 0x7, 0xab, 0x84, 0x3b, 0x48, 0x33, 0xc2, 0xeb, 0x10, 0x71, 0x88, 0xcb, 0x63, 0xcd, 0x16, 0x18, 0x52, 0xab, 0xab, 0x5, 0x93, 0x9d, 0x11, 0x83, 0x75, 0x39, 0x9f, 0x92, 0x5f, 0x4, 0xa, 0xf7, 0x70, 0xcf, 0x70, 0x77, 0x5c, 0x31, 0xfd, 0x38, 0x9e, 0x2, 0x54, 0x2, 0x2d, 0x9f, 0xdb, 0xdb, 0xaa, 0x84, 0xa4, 0xab, 0xa8, 0xb7, 0x32, 0x65, 0x71, 0xe0, 0x4b, 0x1, 0xf1, 0x66, 0x2a, 0xa7, 0x2a, 0xbd, 0xf5, 0xb0, 0x78, 0xe6, 0xa, 0x9b, 0x5a, 0xbb, 0x91, 0xf9, 0x9, 0x61, 0x12, 0x58, 0x54, 0xb6, 0xbb, 0x16, 0x2e, 0xfd, 0xc1, 0x61, 0xe7, 0xd4, 0xfd, 0xbd, 0x1, 0x10, 0x42, 0xd1, 0x3a, 0x57, 0x63, 0xaf, 0x4e, 0x70, 0x70, 0x12, 0x34, 0xad, 0xb0, 0xee, 0x91, 0x45, 0x40, 0xa7, 0x75, 0x65, 0x20, 0xb4, 0x30, 0xe, 0x1d, 0xc7, 0x43, 0x68, 0x9a, 0x5e, 0xa0, 0x57, 0x97, 0x6, 0x84, 0x38, 0xfb, 0x80, 0xcd, 0xa2, 0x30, 0x2b, 0xce, 0x22, 0x22, 0x76, 0xa0, 0x9, 0xf, 0x5e, 0xfa, 0xc1, 0xaa, 0x5f, 0xe, 0x97, 0x51, 0x19, 0xbd, 0xea, 0xe2, 0x60, 0x16, 0x15, 0xcc, 0x8a, 0x79, 0xdb, 0x43, 0x1e, 0xe3, 0x66, 0xb9, 0x48, 0xec, 0x2, 0xa2, 0xde, 0xf4, 0xea, 0x18, 0xf4, 0xf6, 0x6d, 0xc0, 0x1e, 0x14, 0xa1, 0x6c, 0x8b, 0x24, 0x9a, 0xd4, 0xf6, 0x18, 0x36, 0xbb, 0x5b, 0xff, 0xf3, 0x89, 0xcf, 0xcb, 0x9d, 0xbc, 0x74, 0x51, 0x7f, 0x64, 0xcb, 0x9d, 0x1, 0x9e, 0x2d, 0x4a, 0x4, 0x23, 0x34, 0x67, 0xa6, 0x95, 0x1c, 0xfe, 0x60, 0x9c, 0xd1, 0xc1, 0xb3, 0xe9, 0x3e, 0x95, 0xb8, 0x9b, 0xc3, 0x8e, 0x4e, 0xd8, 0x9e, 0x54, 0x4e, 0xb5, 0x6a, 0xe5, 0x45, 0x63, 0xca, 0x1f, 0xce, 0x85, 0x17, 0xdc, 0x63, 0x8d, 0xcf, 0xf5, 0xe, 0x19, 0x40, 0x51, 0x21, 0xfb, 0xc1, 0x9f, 0x76, 0xe5, 0x46, 0x29, 0xc6, 0xb1, 0x1b, 0xd1, 0x53, 0xb5, 0xae, 0x97, 0x4f, 0x31, 0xac, 0xd8, 0xc4, 0xd9, 0xe7, 0x50, 0xae, 0xc0, 0xe, 0xed, 0xe9, 0x3c, 0xe2, 0xbd, 0x40, 0x38, 0xb5, 0xf9, 0xd, 0x70, 0x96, 0x2e, 0xf0, 0x5f, 0xc2, 0x8a, 0x70, 0x21}

which translates to one of the paragraphs from that issue:

* corruption found at offset 49152, len: 512, left bytes: 8388096 => 24MiB boundary
  * expected bytes start with []byte{0x14, 0x69, 0xdf, 0x43, 0x79, 0x7f, 0x4d, 0x5e, 0x19, 0x6f, 0xdb,...
  * offset 40960 => 20MiB boundary
  * got bytes start with []byte{0x4e, 0x8a, 0x87, 0x62, 0xf, 0x93, 0x85, 0x91, 0xf3, 0xf1, 0x2f, 0xb8, 0x66, 0x4f,

@vespian
Author

vespian commented Jan 30, 2025

I double-checked my findings with the simple script below, and this may indeed be a duplicate block after all - apologies, I was counting in the wrong direction :(

It is pretty late for me, I should probably call it a day for today :)

```
$ go run ./main.go
Expected sequence found at offset: 24
Got sequence found at offset: 20
```

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"math/rand/v2"
)

const dataSize = 32 * 1024 * 1024 // 32 MiB

func main() {
	// Define the seed
	seed := [32]byte{
		232, 124, 25, 71, 34, 224, 30, 24, 232, 124, 25, 71, 34, 224, 30, 24,
		232, 124, 25, 71, 34, 224, 30, 24, 232, 124, 25, 71, 34, 224, 30, 24,
	}

	// Create a new ChaCha8 random number generator with the seed
	rng := rand.NewChaCha8(seed)

	// Generate 32MiB of random data
	data := make([]byte, dataSize)
	_, err := rng.Read(data)
	if err != nil {
		log.Fatalf("Failed to generate random data: %v", err)
	}

	// Sequences to search for
	expected := []byte{0x14, 0x69, 0xdf, 0x43, 0x79, 0x7f, 0x4d, 0x5e, 0x19, 0x6f, 0xdb}
	got := []byte{0x4e, 0x8a, 0x87, 0x62, 0x0f, 0x93, 0x85, 0x91, 0xf3, 0xf1, 0x2f, 0xb8, 0x66, 0x4f}

	// Search for the sequences
	expectedOffset := bytes.Index(data, expected)
	gotOffset := bytes.Index(data, got)

	// Print the results (offsets reported in MiB)
	fmt.Printf("Expected sequence found at offset: %d\n", expectedOffset/1024/1024)
	fmt.Printf("Got sequence found at offset: %d\n", gotOffset/1024/1024)
}
```


@jhendrixMSFT
Member

Thanks so much for the help. Indeed, the consensus on our end is to not retry on OperationTimedOut. It does mean there would need to be some client-side recovery code to properly handle this case (we can help with that if you like).

@vespian
Author

vespian commented Jan 30, 2025

sgtm.

@vespian
Author

vespian commented Jan 31, 2025

> Would it be possible to reproduce the failure with client-side logging enabled?

Do we still need it?

@jhendrixMSFT
Member

No, we have enough info to move forward on this.

@vespian
Author

vespian commented Jan 31, 2025

> It does mean there would be some client-side recovery code to properly handle this case (we can help with that if you like).

I started working on it and created a draft MR. Do you think you could help me, or point me to relevant documentation/code, regarding how to handle the OperationTimedOut case, namely the part where things might have succeeded despite the error returned:

> The operation may or may not have succeeded on the server side. Please query the server state before retrying the operation.
  • when creating a new append blob using NewAppendBlobClient(blobName).Create, can I assume that a (Blob|Container|Resource)AlreadyExists error is caused by a previous retry that actually succeeded, and just carry on? For example here
  • same situation with StartCopyFromURL here - AFAIK the Azure SDK is thread-safe, so even if multiple copy operations started in the background, only one of them will succeed. Right?
  • in the case of the AppendBlock in the Write call that led to this issue, is the proper approach to:
    • disable retries for the 500 status
    • when a 500 status occurred (how do I check for it? I do not see anything in AppendBlobClientAppendBlockResponse), do a GetProperties call and check the content length
    • if the last block has not been uploaded compared to the loop-counter bytes, retry the upload
    • is that correct?

@jhendrixMSFT
Member

For the proper detect-and-recover sequence, I'll defer to @gapra-msft, as this was also observed in AzCopy (the way they fixed it there is Azure/azure-storage-azcopy#2430, which might be of interest). The way she explained how it works in AzCopy is as follows.

  • use an AppendPositionAccessConditions in AppendBlockOptions. This way, if the block is retried on a 500/OperationTimedOut but the first attempt actually succeeded on the service side, the retry will fail with a 412 (precondition failed), which will not trigger another retry.
  • in case of a 412, download the range of the last append to see if the block contains the expected data (the MD5 checksum can help if your blocks are 4MB or less)
  • if it was uploaded, continue with the next block; otherwise, upload the block again before moving to the next block

I believe that using an AppendPositionAccessConditions will not require disabling retries on 500 (Gauri can confirm); you'll just need to handle the 412 error case. When that happens, AppendBlock will return an *azcore.ResponseError and you can check ResponseError.StatusCode for the 412.
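A rough sketch of that flow, assuming an existing appendblob.Client (`verifyLastAppend` is a hypothetical helper, and the storage team should confirm the details):

```go
package upload

import (
	"bytes"
	"context"
	"errors"
	"fmt"
	"io"
	"net/http"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/appendblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
)

// appendChunk appends chunk at offset, guarded by an append-position
// condition: if a retried request already succeeded server-side, the
// service rejects the duplicate with a 412 instead of appending twice.
func appendChunk(ctx context.Context, client *appendblob.Client, chunk []byte, offset int64) error {
	_, err := client.AppendBlock(ctx, streaming.NopCloser(bytes.NewReader(chunk)),
		&appendblob.AppendBlockOptions{
			AppendPositionAccessConditions: &appendblob.AppendPositionAccessConditions{
				AppendPosition: to.Ptr(offset),
			},
		})
	var respErr *azcore.ResponseError
	if errors.As(err, &respErr) && respErr.StatusCode == http.StatusPreconditionFailed {
		// The "failed" first attempt may have landed after all: compare
		// the range before deciding whether to re-upload.
		return verifyLastAppend(ctx, client, chunk, offset)
	}
	return err
}

// verifyLastAppend downloads [offset, offset+len(chunk)) and checks that
// it matches what we tried to append (hypothetical helper).
func verifyLastAppend(ctx context.Context, client *appendblob.Client, chunk []byte, offset int64) error {
	resp, err := client.DownloadStream(ctx, &blob.DownloadStreamOptions{
		Range: blob.HTTPRange{Offset: offset, Count: int64(len(chunk))},
	})
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	got, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if !bytes.Equal(got, chunk) {
		return fmt.Errorf("range at offset %d does not match; re-upload the block", offset)
	}
	return nil // the first attempt succeeded; continue with the next block
}
```

If I'm reading the SDK right, bloberror.HasCode(err, bloberror.AppendPositionConditionNotMet) would be an equivalent, tidier check for the 412 case.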

@vespian
Author

vespian commented Jan 31, 2025

Thank you, that makes a lot of sense for AppendBlock.

Do you have any advice on how to handle NewAppendBlobClient.Create and perhaps StartCopyFromURL, as I mentioned in my previous comment?

@jhendrixMSFT
Member

For Create I believe that the default behavior is that if the append blob already exists it will clear it. This is according to the REST docs. So, I believe you'll need to use an ETag in appendblob.CreateOptions.AccessConditions to prevent that. This is getting beyond my area of expertise, but I think you'll want to set the IfNoneMatch to an azcore.ETagAny which should mean "create this append blob if there's no match with any ETag" (somebody on the storage team can correct me if I'm wrong).

I think the same pattern applies to StartCopyFromURL but again I'll defer to the storage folks to confirm.
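A minimal sketch of that ETag guard, under the same caveat that the storage team should confirm it:

```go
package upload

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/appendblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
)

// createIfAbsent creates the append blob only if no blob with that name
// exists yet, so a retried Create cannot clear data already appended.
func createIfAbsent(ctx context.Context, client *appendblob.Client) error {
	_, err := client.Create(ctx, &appendblob.CreateOptions{
		AccessConditions: &blob.AccessConditions{
			ModifiedAccessConditions: &blob.ModifiedAccessConditions{
				// If-None-Match: * => fail (rather than truncate) when the
				// blob already exists.
				IfNoneMatch: to.Ptr(azcore.ETagAny),
			},
		},
	})
	return err
}
```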

@vespian
Author

vespian commented Jan 31, 2025

Thank you very much!
