First announced at re:Invent 2018 and in preview since then, AWS Outposts is now generally available in the US, all EU countries, Switzerland, Norway, Australia, Japan, and South Korea.
It is a fully managed service from AWS that enables customers to host AWS infrastructure and AWS services in a private data centre (a co-location facility or an on-premises data centre). Once the appliance is delivered, customers only need to provide power and network connectivity (preferably via AWS Direct Connect); AWS manages the monitoring, maintenance and upgrading of the appliance. Key use cases for AWS Outposts include workloads that need low-latency access to on-premises systems, local data processing, and local data residency.
The Outposts rack is connected to, and is essentially part of, a specific AWS region. The region treats a collection of up to 16 racks (the current maximum) at a single location as a unified capacity pool, which can be associated with subnets of one or more virtual private clouds (VPCs) in the parent region.
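As a rough sketch of that VPC association, the parameters below (hypothetical IDs and ARN) show what a request to create a subnet on an Outpost could look like, assuming the boto3 EC2 client's `create_subnet` call with its `OutpostArn` parameter:

```python
# Minimal sketch (hypothetical IDs and ARN): request parameters for creating
# a subnet that lives on an Outpost rather than in the region itself.
# Instances launched into this subnet run on the on-premises rack.
subnet_params = {
    "VpcId": "vpc-0abc1234567890def",   # a VPC in the parent region
    "CidrBlock": "10.0.128.0/24",       # range carved from the VPC CIDR
    "AvailabilityZone": "eu-west-1a",   # the AZ the Outpost is anchored to
    "OutpostArn": "arn:aws:outposts:eu-west-1:111122223333:"
                  "outpost/op-0123456789abcdef0",
}

# With credentials configured, the call itself would be:
#   import boto3
#   boto3.client("ec2").create_subnet(**subnet_params)
```

Aside from the extra `OutpostArn`, the subnet behaves like any other subnet in the VPC.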
The services supported on Outposts include: Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Amazon ECS, Amazon Elastic Kubernetes Service, and Amazon EMR, with Amazon RDS for PostgreSQL and Amazon RDS for MySQL currently in preview. Applications running on Outposts can also make use of services in the parent AWS region, including: Amazon Simple Storage Service (S3), Amazon DynamoDB, Auto Scaling, AWS CloudFormation, Amazon CloudWatch, AWS CloudTrail, AWS Config, and Elastic Load Balancing.
Outposts supports the following instance types: compute optimised (C5, C5d), memory optimised (R5, R5d), general purpose (M5, M5d), accelerated computing (G4) and storage optimised (I3en). Customers can also combine a mixture of these instance types in a single order. For storage, Outposts supports EBS general purpose SSD (gp2) volumes, with a minimum capacity of 2.7 TB per rack.
Customers can request AWS Outposts via the AWS Console in any of the available regions highlighted above. Each available Resource ID (otherwise known as a SKU) consists of a set number of supported instance types or a mixture of instance types. For example, see below two general purpose units available in the Europe region. The Resource ID OR-R2XVM9Q consists of 2 m5.24xlarge, 2 c5.24xlarge, and 2 r5.24xlarge instances with 11+ TB of EBS storage. With this setup, customers can either launch two 24xlarge instances per instance type, or a larger number of smaller instance sizes (e.g. xlarge, 2xlarge) within the same instance family.
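The trade-off between one large instance and several smaller ones of the same family comes down to vCPU arithmetic. The sketch below uses the published vCPU counts for the m5/c5/r5 families; the actual carve-up is governed by AWS's Outposts slotting rules, so this is only illustrative:

```python
# vCPU counts for a subset of m5/c5/r5 sizes (from AWS's published specs).
VCPUS = {"xlarge": 4, "2xlarge": 8, "4xlarge": 16, "12xlarge": 48, "24xlarge": 96}

def instances_per_slot(slot_size: str, target_size: str) -> int:
    """How many instances of target_size fit in one slot of slot_size."""
    return VCPUS[slot_size] // VCPUS[target_size]

print(instances_per_slot("24xlarge", "xlarge"))   # 24
print(instances_per_slot("24xlarge", "2xlarge"))  # 12
```

So a single m5.24xlarge slot could instead host up to 24 m5.xlarge instances, subject to AWS's placement constraints.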
Refer to the pricing page for the list price of each configuration SKU. Customers can pay for the hardware in three ways: all upfront, partial upfront, or no upfront.
Image building plays a key role in ensuring application environments can be easily reproduced, either in a new region or for dev/test purposes. For example, an application that runs on Apache Web Server on a Linux OS can be packaged as an image in such a way that the application can be reproduced by launching an EC2 instance or on-premises VM from the image.
Today, customers usually build a golden image for this purpose; however, it is typically created, updated and maintained manually, with little or no automation. In many cases this is time-consuming and, given the manual touchpoints, susceptible to errors.
The new EC2 Image Builder is a service that enables customers to build and maintain secure OS images for Amazon Linux 2 and Windows Server. With an automated build pipeline, new images can be created that can be used with Amazon Elastic Compute Cloud (EC2) and on-premises virtualisation environments. Furthermore, the pipelines enable the image to be tested, hardened and distributed in a secure manner.
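Build and test steps in an Image Builder pipeline are defined as component documents. The YAML below is a hypothetical component (names and commands are illustrative) sketching how installing and verifying Apache on Amazon Linux 2 might be expressed in that format:

```yaml
name: InstallApache
description: Hypothetical component that installs and verifies Apache httpd
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: InstallHttpd
        action: ExecuteBash
        inputs:
          commands:
            - sudo yum install -y httpd
            - sudo systemctl enable httpd
  - name: test
    steps:
      - name: CheckHttpd
        action: ExecuteBash
        inputs:
          commands:
            - rpm -q httpd
```

The pipeline runs the build phase when baking the image and the test phase against a test instance launched from the candidate image.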
Managing access at scale for S3 buckets requires creating and monitoring the IAM policies, S3 bucket policies and S3 access control lists that apply to a given user or role for an S3 bucket. In particular, a bucket policy can span a number of use cases, making it difficult to segregate and derive the level of access granted to a given user or role. For shared data sets that require different levels of permission (for example, restricted read, unrestricted read and unrestricted write), creating and assigning the required permissions to individual users or roles, and managing those permissions over the long term, can easily become unmanageable.
Furthermore, understanding and gaining visibility into the exposure of an S3 bucket, and the policy granting that exposure, previously required a deep analysis of all the applicable policies, making it difficult to identify S3 buckets with public access or access from other AWS accounts (including third-party AWS accounts).
The new S3 Access Points feature simplifies managing data access at scale for shared data sets on Amazon S3 that require different levels of permission. Each access point has a name and a network origin (virtual private cloud or internet), along with settings to define the level of public access and the access point policy. Hundreds of access points can be created for a given bucket to control different levels of permission, for example restricting access to a VPC or allowing public access from the internet. The feature is available for each bucket in the S3 Management Console.
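As an illustration, the access point policy below (hypothetical account ID, access point name, user and prefix) sketches how a restricted-read permission could be scoped to a single access point rather than embedded in an ever-growing bucket policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyViaAccessPoint",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/analytics" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:eu-west-1:111122223333:accesspoint/shared-reads/object/reports/*"
    }
  ]
}
```

Each consumer of the shared data set can get its own access point with a policy like this, leaving the underlying bucket policy untouched.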
As permissions change during the lifetime of an S3 bucket, the new Access Analyser for Amazon S3 provides detailed visibility of buckets with public access or access from a third-party account. In addition, it details the policy (for example, a bucket policy or access control list) that is currently granting the permission, and the access level (read, write, list, etc.). To get started with Access Analyser for S3, enable the AWS Identity and Access Management (IAM) Access Analyser, which automatically enables Access Analyser for Amazon S3 in the S3 Management Console.
Furthermore, the AWS Identity and Access Management (IAM) Access Analyser was also announced, making it easier to check and validate the current state of policies attached to resources and IAM roles. Supported resources include Amazon S3 buckets, AWS KMS keys, Amazon SQS queues, AWS IAM roles and AWS Lambda functions. Customers who are unsure who has been granted access to these resources, and at what level, can quickly turn on this feature from the AWS Management Console at no additional cost.
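Besides the console, the analyser can also be created programmatically. The sketch below (hypothetical analyzer name) shows the parameters an account-level analyzer could take, assuming the boto3 `accessanalyzer` client's `create_analyzer` call:

```python
# Hypothetical sketch: parameters for an account-level IAM Access Analyser,
# which in turn enables Access Analyser for S3 in the S3 Management Console.
analyzer_params = {
    "analyzerName": "account-analyzer",  # hypothetical name
    "type": "ACCOUNT",                   # analyse resources within this account
}

# With credentials configured, the call itself would be:
#   import boto3
#   boto3.client("accessanalyzer").create_analyzer(**analyzer_params)
```

Findings then surface for any supported resource that is reachable from outside the account.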
Read the next set of announcements in Part 2, coming soon.