Best practice for locking an S3 object?

I have an S3 bucket containing quite a few S3 objects that multiple EC2 instances can pull from (when scaling horizontally). Each EC2 instance will pull an object one at a time, process it, and move it to another bucket.

Currently, to make sure the same object isn't processed by multiple EC2 instances, my Java app renames it by adding a "locked" extension to its S3 object key. The problem is that "renaming" is actually a "move" (a copy followed by a delete), so large files in the S3 bucket can take up to several minutes to finish renaming, which makes the locking process ineffective.
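
Roughly, the "rename" looks like this (a simplified sketch with placeholder bucket and key names, using the AWS SDK for Java v1; error handling omitted):

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;

    public class LockByRename {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            String bucket = "incoming-bucket";        // placeholder
            String key = "some/large/object.dat";     // placeholder

            // S3 has no native rename: "renaming" is a server-side copy
            // to the new key followed by a delete of the old key, and for
            // large objects the copy alone can take minutes.
            s3.copyObject(bucket, key, bucket, key + ".locked");
            s3.deleteObject(bucket, key);
        }
    }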

Does anyone have a best practice for accomplishing what I'm trying to do?

I considered using SQS, but that "solution" has its own set of problems (order not guaranteed, possibility of messages delivered more than once, and more than one EC2 getting the same message).

I'm wondering if setting a "locked" header would be a quicker "locking" process.


order not guaranteed, possibility of messages delivered more than once, and more than one EC2 getting the same message

The odds of actually getting the same message more than once are low. It's merely "possible," but not very likely. If it's essentially only an annoyance when, on isolated occasions, you happen to process a file more than once, then SQS seems like an entirely reasonable option.
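
For illustration, a minimal SQS-driven worker loop might look like the sketch below (AWS SDK for Java v1; the queue URL and processObject are placeholders, and the message body is assumed to carry the S3 key). A received message stays invisible to other workers for the visibility timeout, and is deleted only after processing succeeds:

    import com.amazonaws.services.sqs.AmazonSQS;
    import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
    import com.amazonaws.services.sqs.model.Message;
    import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

    public class SqsWorker {
        public static void main(String[] args) {
            AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
            String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/objects-to-process"; // placeholder

            while (true) {
                // Long poll for one message; it becomes invisible to other
                // workers for the visibility timeout (here 15 minutes).
                ReceiveMessageRequest req = new ReceiveMessageRequest(queueUrl)
                        .withMaxNumberOfMessages(1)
                        .withWaitTimeSeconds(20)
                        .withVisibilityTimeout(900);
                for (Message msg : sqs.receiveMessage(req).getMessages()) {
                    String objectKey = msg.getBody();   // assumes the body is the S3 key
                    processObject(objectKey);           // placeholder: process and move the object
                    // Delete only after successful processing; if this worker dies,
                    // the message reappears and another worker picks it up.
                    sqs.deleteMessage(queueUrl, msg.getReceiptHandle());
                }
            }
        }

        static void processObject(String key) { /* ... */ }
    }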

Otherwise, you'll need an external mechanism.

Setting a "locked" header on the object has a problem of its own -- when you overwrite an object with a copy of itself (that's what happens when you change the metadata -- a new copy of the object is created, with the same key) then you are subject to the slings and arrows of eventual consistency.

Q: What data consistency model does Amazon S3 employ?

Amazon S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES.

https://aws.amazon.com/s3/faqs/

Updating metadata is an "overwrite PUT." Your new header may not immediately be visible, and if two or more workers set their own unique header (e.g. x-amz-meta-locked: i-12345678) it's entirely possible for a scenario like the following to play out (W1, W2 = Worker #1 and #2):

W1: HEAD object (no lock header seen)
W2: HEAD object (no lock header seen)
W1: set header
W2: set header
W1: HEAD object (sees its own lock header)
W2: HEAD object (sees its own lock header)

The same or a similar failure can occur with several different permutations of timing.

Objects can't be effectively locked in an eventual consistency environment like this.
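
To make the failure concrete, a check-then-set attempt of the kind described would look roughly like this (a sketch with the AWS SDK for Java v1; bucket, key, and worker-ID values are placeholders). Each call is a separate request against an eventually consistent store, so two workers can both pass the check and both believe they hold the lock:

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.model.CopyObjectRequest;
    import com.amazonaws.services.s3.model.ObjectMetadata;

    public class HeaderLockAttempt {
        // Returns true if this worker *thinks* it acquired the lock -- which,
        // as described above, two workers can conclude at the same time.
        static boolean tryLock(AmazonS3 s3, String bucket, String key, String workerId) {
            // Step 1: HEAD the object and look for an existing lock header.
            ObjectMetadata current = s3.getObjectMetadata(bucket, key);
            if (current.getUserMetadata().containsKey("locked")) {
                return false;                        // someone else appears to hold it
            }
            // Step 2: "set the header" -- really an overwrite PUT that copies
            // the object onto itself with replaced metadata.
            ObjectMetadata replacement = new ObjectMetadata();
            replacement.addUserMetadata("locked", workerId);   // sent as x-amz-meta-locked
            s3.copyObject(new CopyObjectRequest(bucket, key, bucket, key)
                    .withNewObjectMetadata(replacement));
            // Step 3: HEAD again and check the header is ours -- but the copy is
            // eventually consistent, so this read may not yet reflect the other
            // worker's write (or even our own).
            ObjectMetadata after = s3.getObjectMetadata(bucket, key);
            return workerId.equals(after.getUserMetadata().get("locked"));
        }
    }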

S3 Object Lock lets you store objects in Amazon S3 using a write-once-read-many (WORM) model. Used in combination with versioning, which protects objects from being overwritten, S3 Object Lock enables you to store your S3 objects in an immutable form, providing an additional layer of protection against object changes and deletion. The Object Lock feature can also help you meet regulatory requirements within your organization when it comes to data protection.


Object tags can assist here, since changing a tag doesn't create a new copy of the object. A tag is a key/value pair associated with the object, i.e. you need to use object-level tagging.
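
A minimal sketch of reading and setting a "locked" tag (AWS SDK for Java v1; bucket, key, and tag values are placeholders):

    import java.util.Arrays;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.GetObjectTaggingRequest;
    import com.amazonaws.services.s3.model.ObjectTagging;
    import com.amazonaws.services.s3.model.SetObjectTaggingRequest;
    import com.amazonaws.services.s3.model.Tag;

    public class TagLock {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            String bucket = "incoming-bucket";       // placeholder
            String key = "some/large/object.dat";    // placeholder

            // Read the current tags and see whether a "locked" tag is present.
            boolean locked = s3.getObjectTagging(new GetObjectTaggingRequest(bucket, key))
                    .getTagSet().stream()
                    .anyMatch(t -> t.getKey().equals("locked"));

            if (!locked) {
                // Setting a tag does not rewrite the object data, unlike a
                // metadata change, so it is fast even for large objects.
                s3.setObjectTagging(new SetObjectTaggingRequest(bucket, key,
                        new ObjectTagging(Arrays.asList(new Tag("locked", "i-12345678")))));
            }
        }
    }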

Enabling the Object Lock feature on your Amazon S3 buckets prevents the objects they store from being deleted. To use S3 Object Lock, follow these basic steps: create a new bucket with Object Lock enabled; (optionally) configure a default retention period for objects placed in the bucket; place the objects that you want to lock in the bucket; and apply a retention period, a legal hold, or both, to those objects.
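
A rough sketch of those steps with the AWS SDK for Java v1 (bucket and key names are placeholders, and this assumes an SDK version that includes the Object Lock APIs):

    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.CreateBucketRequest;
    import com.amazonaws.services.s3.model.ObjectLockLegalHold;
    import com.amazonaws.services.s3.model.ObjectLockLegalHoldStatus;
    import com.amazonaws.services.s3.model.SetObjectLegalHoldRequest;

    public class ObjectLockExample {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
            String bucket = "worm-bucket";            // placeholder
            String key = "some/object.dat";           // placeholder

            // 1. Object Lock must be enabled when the bucket is created
            //    (versioning is turned on automatically).
            s3.createBucket(new CreateBucketRequest(bucket)
                    .withObjectLockEnabledForBucket(true));

            // 2. After uploading the object, place a legal hold on it so it
            //    cannot be deleted until the hold is removed.
            s3.setObjectLegalHold(new SetObjectLegalHoldRequest()
                    .withBucketName(bucket)
                    .withKey(key)
                    .withLegalHold(new ObjectLockLegalHold()
                            .withStatus(ObjectLockLegalHoldStatus.ON)));
        }
    }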


Have you considered using a FIFO queue for your use case? Instead of best-effort ordering, a FIFO queue maintains the order of messages from when they are sent to the queue until they are polled. You can also ensure your objects are processed only once, since deduplication allows for exactly-once processing.
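
A minimal sketch of the FIFO approach (AWS SDK for Java v1; the queue name, message group ID, and the use of the object key as the deduplication ID are assumptions for illustration):

    import java.util.HashMap;
    import java.util.Map;
    import com.amazonaws.services.sqs.AmazonSQS;
    import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
    import com.amazonaws.services.sqs.model.CreateQueueRequest;
    import com.amazonaws.services.sqs.model.SendMessageRequest;

    public class FifoQueueExample {
        public static void main(String[] args) {
            AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

            // FIFO queue names must end in ".fifo".
            Map<String, String> attrs = new HashMap<>();
            attrs.put("FifoQueue", "true");
            String queueUrl = sqs.createQueue(new CreateQueueRequest("objects-to-process.fifo")
                    .withAttributes(attrs)).getQueueUrl();

            // One message per S3 object. Using the object key as the
            // deduplication ID means the same key enqueued twice within the
            // 5-minute deduplication window is delivered only once.
            String objectKey = "some/large/object.dat";   // placeholder
            sqs.sendMessage(new SendMessageRequest(queueUrl, objectKey)
                    .withMessageGroupId("s3-objects")
                    .withMessageDeduplicationId(objectKey));
        }
    }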
