UltraSane 4 days ago
S3 has supported SHA-256 as a checksum algorithm since 2022. You can calculate the hash locally and specify it in the PutObject call. S3 will calculate the hash itself, compare it with the one in the PutObject call, and reject the Put if they differ. The hash and algorithm are then stored in the object's metadata. You can then simply use the SHA-256 hash as the object's key. https://aws.amazon.com/blogs/aws/new-additional-checksum-alg...
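A minimal sketch of that flow with boto3 (the bucket name is a placeholder; note S3 wants the checksum base64-encoded, while the hex digest works fine as the key):

    import base64
    import hashlib

    import boto3

    s3 = boto3.client("s3")

    data = b"example object body"

    # S3 expects the SHA-256 checksum base64-encoded, not hex.
    digest = hashlib.sha256(data).digest()
    checksum_b64 = base64.b64encode(digest).decode()

    # Content-addressed: the hex digest doubles as the object key.
    s3.put_object(
        Bucket="my-bucket",           # placeholder bucket name
        Key=digest.hex(),
        Body=data,
        ChecksumSHA256=checksum_b64,  # S3 recomputes and rejects on mismatch
    )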
thayne 4 days ago
Unfortunately, for a multipart upload it isn't a hash of the whole object; it's a hash of the hashes of each part, which is a lot less useful, especially if you don't know how the file was partitioned during upload. And even when it is a whole-file hash, it isn't used for the ETag, so it can't be used for conditional PUTs. I had a use case where this looked really promising, then I ran into the multipart upload limitations and ended up using my own custom metadata for the sha256sum.
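Roughly what that workaround looks like, as a sketch (the metadata key name here is just an assumption, and this assumes the object fits in memory so you can hash it in one pass):

    import hashlib

    import boto3

    s3 = boto3.client("s3")

    def put_with_sha256(bucket: str, key: str, data: bytes) -> str:
        # Store the full-object hash ourselves, since the multipart
        # ChecksumSHA256 is a hash-of-hashes, not a hash of the body.
        sha256_hex = hashlib.sha256(data).hexdigest()
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=data,
            Metadata={"sha256sum": sha256_hex},  # served as x-amz-meta-sha256sum
        )
        return sha256_hex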