A Summary of S3
Amazon S3 is the core object storage service on AWS, allowing you to store an unlimited amount of data with very high durability. It can be used to back up and archive data, host static websites, serve content for mobile and cloud applications, support disaster recovery, and store data for big data analytics.
Amazon S3 can be integrated with many other AWS cloud services, including AWS IAM, AWS KMS, Amazon EC2, EBS, EMR, DynamoDB, Redshift, SQS, Lambda and CloudFront.
Object storage differs from traditional block storage and file storage. Block storage manages data at a device level as addressable blocks, while file storage manages data at the operating system level as files and folders. Object storage manages data as objects that contain both data and metadata, manipulated by an API.
S3 buckets are containers for objects stored in Amazon S3. Bucket names must be globally unique; no two customers anywhere in the world can own a bucket with the same name. Each bucket is created in a specific region, and data does not leave that region unless explicitly copied by the user.
Objects are stored in buckets; an object can be up to 5 TB in size and can contain any kind of data. Objects contain both data and metadata and are identified by keys. Each Amazon S3 object can be addressed by a unique URL formed from the web services endpoint, the bucket name, and the object key.
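As a minimal sketch, that URL can be composed from those three parts. The region, bucket name, and key below are made-up examples; S3 also supports an older path-style address in which the bucket follows the endpoint rather than prefixing it.

```python
from urllib.parse import quote

def s3_object_url(region, bucket, key):
    """Compose a virtual-hosted-style URL for an S3 object.

    The endpoint pattern (bucket.s3.<region>.amazonaws.com) is the
    standard public one; the bucket and key here are invented examples.
    """
    # quote() percent-encodes unsafe characters but leaves "/" intact,
    # so hierarchical key names stay readable in the URL.
    return f"https://{bucket}.s3.{region}.amazonaws.com/{quote(key)}"

url = s3_object_url("us-east-1", "my-example-bucket", "photos/2016/cat.jpg")
print(url)
# https://my-example-bucket.s3.us-east-1.amazonaws.com/photos/2016/cat.jpg
```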
Amazon S3 has a minimalistic API (create or delete a bucket; read, write, or delete objects; list keys in a bucket) and uses a REST interface based on standard HTTP verbs: GET, PUT, POST, and DELETE. You can also use SDK wrapper libraries, the AWS CLI, and the AWS Management Console to work with Amazon S3.
Amazon S3 is highly durable and highly available, designed for 11 nines (99.999999999 percent) durability of objects over a given year and four nines (99.99 percent) availability.
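To make the durability figure concrete, here is a back-of-envelope calculation; the 10,000,000-object fleet is a hypothetical, not a figure from this text.

```python
# "11 nines" of durability: probability a given object survives one year.
durability = 0.99999999999
annual_loss_rate = 1 - durability      # probability of losing a given object in a year

objects = 10_000_000                   # hypothetical fleet size
expected_losses_per_year = objects * annual_loss_rate
years_per_single_loss = 1 / expected_losses_per_year

# With 10 million objects, you would expect to lose roughly one object
# every 10,000 years on average.
print(expected_losses_per_year, years_per_single_loss)
```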
Amazon S3 is eventually consistent, but offers read-after-write consistency for new object PUTs. Amazon S3 objects are private by default, accessible only to the owner. Objects can be marked publicly readable to make them accessible on the web. Controlled access may be granted to others using ACLs, AWS IAM policies, and Amazon S3 bucket policies.
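For illustration, a bucket policy that makes every object in a bucket publicly readable might look like the following; the bucket name `my-example-bucket` is an invented placeholder, not a real resource.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-example-bucket/*"
    }
  ]
}
```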
Static websites can be hosted in an Amazon S3 bucket. Prefixes and delimiters may be used in key names to organize and navigate data hierarchically much like a traditional file system.
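A small sketch of how prefix-and-delimiter listing groups keys into pseudo-folders; the key names are invented, and the function only mimics the grouping behavior of S3's List Objects operation rather than calling the service.

```python
def list_keys(keys, prefix="", delimiter="/"):
    """Mimic how S3 list requests group keys by prefix and delimiter.

    Returns (contents, common_prefixes): the keys directly "inside"
    the prefix, and the pseudo-folders one level below it.
    """
    contents, common = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the next delimiter is a common prefix
            # (a "folder" when browsing hierarchically).
            common.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            contents.append(key)
    return contents, sorted(common)

keys = [
    "logs/2016/01/app.log",
    "logs/2016/02/app.log",
    "logs/readme.txt",
    "index.html",
]
print(list_keys(keys, prefix="logs/"))
# (['logs/readme.txt'], ['logs/2016/'])
```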
Amazon S3 offers several storage classes suited to different use cases: Standard is designed for general-purpose data needing high performance and low latency. Standard-IA is for less
frequently accessed data. Reduced Redundancy Storage (RRS) offers lower redundancy at lower cost for easily reproduced data. Amazon Glacier offers low-cost durable storage for archives and long-term backups that are rarely accessed and can tolerate a three- to five-hour retrieval time. Object lifecycle management policies can be used to automatically move data between storage classes based on the age of the data.
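As a hedged illustration, a lifecycle configuration of the shape S3 accepts might transition objects under an assumed `logs/` prefix to Standard-IA after 30 days and to Glacier after 90, then expire them after a year; the rule ID and prefix are made up.

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```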
Amazon S3 data can be encrypted using server-side or client-side encryption, and encryption keys can be managed with AWS KMS. Versioning and MFA Delete can be used to protect against accidental deletion. Cross-region replication can be used to automatically copy new objects from a source bucket
in one region to a target bucket in another region. Pre-signed URLs grant time-limited permission to download objects and can be used to protect media and other web content from unauthorized “web scraping.” Multipart upload can be used to upload large objects, and Range GETs can be used to download portions of an Amazon S3 object or Amazon Glacier archive.
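The byte ranges used when downloading an object in pieces with Range GETs can be computed with a short sketch; the 25 MB object size and 10 MB part size below are arbitrary examples.

```python
def part_ranges(total_size, part_size):
    """Split an object of total_size bytes into (start, end) byte
    ranges, end inclusive, as used in an HTTP Range header
    ("bytes=start-end")."""
    ranges = []
    start = 0
    while start < total_size:
        end = min(start + part_size, total_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# A hypothetical 25 MB object fetched in 10 MB ranges:
MB = 1024 * 1024
for start, end in part_ranges(25 * MB, 10 * MB):
    print(f"Range: bytes={start}-{end}")
```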
Server access logs can be enabled on a bucket to track requestor, object, action, and response.
Amazon S3 event notifications can be used to send an Amazon SQS or Amazon SNS message or to trigger an AWS Lambda function when an object is created or deleted.
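For illustration, a notification configuration of the shape S3 accepts, invoking a Lambda function whenever a `.jpg` object is created; the account ID and function ARN are invented placeholders.

```json
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "thumbnail-on-upload",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:MakeThumbnail",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": { "FilterRules": [{ "Name": "suffix", "Value": ".jpg" }] }
      }
    }
  ]
}
```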
Amazon Glacier can be used as a standalone service or as a storage class in Amazon S3. Amazon Glacier stores data in archives, which are contained in vaults. You can have up to 1,000 vaults per account per region, and each vault can store an unlimited number of archives. Amazon Glacier vaults can be locked for compliance purposes.