| May 20, 2013 |
We are excited to announce that AWS has been granted two Agency
Authorities to Operate (ATOs) under the Federal Risk and Authorization
Management Program (FedRAMP) by the U.S. Department of Health and Human
Services (HHS). FedRAMP is a mandatory U.S. government-wide program that
provides a standardized approach to security assessment, authorization,
and monitoring for cloud products and services.
Some of the major benefits of FedRAMP for agencies include:
- Significant savings in cost, time and resources
- Risk-based security management
- Enhanced transparency
Numerous government agencies, along with the systems integrators and
other vendors that serve them, already use a wide range of AWS
services. Now all U.S.
government agencies can leverage the AWS HHS ATO packages in the FedRAMP
repository to evaluate AWS, provide their own authorizations to use
AWS, and transition workloads into the AWS environment.
You can learn more by reading the AWS FedRAMP FAQs. |
| May 20, 2013 |
We are delighted to announce that AWS GovCloud (US)
has received an Agency Authority to Operate (ATO) from the U.S.
Department of Health and Human Services (HHS) in compliance with the
Federal Risk and Authorization Management Program (FedRAMP). FedRAMP
is a U.S. government-wide program that provides a standardized approach
to security assessment, authorization, and continuous monitoring for
cloud products and services.
Leveraging the HHS authorization, U.S. government agencies can evaluate
AWS GovCloud (US) for their applications and workloads, complete their
own authorizations to use AWS, and deploy systems into the AWS
environment.
Agencies can immediately request access to the "Amazon Web Services - AWS GovCloud (US) Region" FedRAMP package by submitting a FedRAMP Package Access Request Form using package ID "AGENCYAMAZONGC".
Join us for our weekly AWS GovCloud (US) Region Office Hours on May 21st, 1:00 – 3:00 PM EST and the Intro to AWS GovCloud (US) Region webinar on June 12th, 1:30 – 2:30 PM EST to learn more about AWS FedRAMP Compliance.
Please visit our AWS GovCloud (US) home page and contact us to get started today!
|
| May 20, 2013 |
Amazon Elastic Load Balancing (ELB) now supports additional HTTP
methods specified in requests from client applications. Previously, ELB
restricted the set of supported HTTP methods to those commonly used by
conventional web applications.
With an increasing number of applications requiring support for new
HTTP extensions, customers have indicated that they would like more
control over the HTTP methods used by their applications. ELB will now
accept all HTTP methods sent to your applications. Some examples of
methods you can use include “PATCH” for Ruby on Rails 4+ applications,
and “REPORT” or “MKCALENDAR” for CalDAV applications.
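As a quick sketch of what this unlocks, here is a minimal local stand-in for an application behind a load balancer, handling a PATCH request with Python's standard library. The path and payload are ours for illustration; no ELB-specific change is needed on the client side.

```python
# Demonstrates issuing one of the newly allowed HTTP methods (PATCH).
# A local HTTPServer stands in for an application behind a load balancer.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

class Handler(BaseHTTPRequestHandler):
    def do_PATCH(self):  # extension methods dispatch like any other verb
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"patched:" + body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("PATCH", "/resource", body=b'{"title": "new"}')
response = conn.getresponse()
body_out = response.read()
print(response.status, body_out)
server.shutdown()
```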
To learn more, visit the Elastic Load Balancing Developer Guide. |
| May 16, 2013 |
We’re excited to announce seven new enhancements to Amazon Elastic Transcoder
that make it easier for you to encode and deliver your content to a
wider set of video devices and players. Amazon Elastic Transcoder is a
web service that converts your video files into versions that will play
back on devices like smartphones, tablets, PCs and web browsers.
Starting today you can use Amazon Elastic Transcoder to output content in three new ways:
- HTTP Live Streaming (HLS) support lets you create videos that
play on compatible players for Apple iOS, Android devices, set-top
boxes and web browsers. With HLS support, you can now easily deliver
your content without a streaming server – just point your users to the
video in Amazon S3 or Amazon CloudFront.
- WebM support lets you transcode content into VP8 video and
Vorbis audio for playback in browsers, like Firefox, that do not
natively support H.264 and AAC.
- MPEG-2 TS output container support lets you output transport streams that are commonly used in broadcast systems.
We’ve also added four features that make it even easier to use Amazon Elastic Transcoder:
- Multiple outputs per job make it easy to create different
renditions of the same content. Instead of having to create one
transcoding job per rendition, you can now create a single job to
produce multiple renditions. For example, with a single job you can
create H.264, HLS and WebM versions of the same video for delivery to
multiple platforms.
- Automatic video bit rate optimization takes the guesswork out
of choosing the right bit rate for your video content. With this
feature, Amazon Elastic Transcoder will automatically adjust the bit
rate in order to optimize the visual quality of your transcoded output.
- Enhanced aspect ratio and sizing policies make it easier to
resize your content to your output frame size. You can use these new
settings in transcoding presets to precisely control scaling, cropping,
matting and stretching options to get the output that you expect
regardless of how the input is formatted.
- Integration with Amazon S3 permissions and storage options
lets you set permissions on your output files from within Amazon Elastic
Transcoder. Your files are then created with the right permissions
in-place, ready for delivery to end-users.
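To illustrate the multiple-outputs idea, here is a sketch of assembling one CreateJob-style request that carries several renditions. The pipeline ID, preset IDs, and key names below are hypothetical placeholders, not real Elastic Transcoder identifiers; consult the API reference for the exact request shape.

```python
# Build a single job request with one output entry per rendition,
# instead of one transcoding job per rendition.
def build_job(pipeline_id, input_key, renditions):
    """Assemble a CreateJob-style request dict with multiple outputs."""
    return {
        "PipelineId": pipeline_id,
        "Input": {"Key": input_key},
        "Outputs": [
            {
                "Key": r["key"],
                "PresetId": r["preset"],
                # HLS outputs are segmented; other containers are not
                **({"SegmentDuration": "10"} if r.get("hls") else {}),
            }
            for r in renditions
        ],
    }

job = build_job(
    "pipeline-1234",            # hypothetical pipeline ID
    "raw/movie.mp4",
    [
        {"key": "out/movie-h264.mp4", "preset": "preset-h264"},
        {"key": "out/movie.webm", "preset": "preset-webm"},
        {"key": "out/hls/movie", "preset": "preset-hls", "hls": True},
    ],
)
print(len(job["Outputs"]))  # three renditions from one job
```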
You can learn more about Amazon Elastic Transcoder and these new features by visiting the detail page. To see these new features in action, don’t forget to register for the “What’s New with Amazon Elastic Transcoder” webinar on May 29, 2013 at 10am Pacific time. |
| May 15, 2013 |
We are excited to announce the availability of Parallel Scan,
a new feature that allows you to access your Amazon DynamoDB data even
more quickly than before. In addition, we have made it up to four times
cheaper to read large amounts of data out of Amazon DynamoDB. This also
reduces the cost of copying your data from Amazon DynamoDB to Amazon Redshift. You can read more about these improvements here.
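The idea behind Parallel Scan is that a table's items are divided into disjoint segments, each scanned by an independent worker; with the DynamoDB API you pass Segment and TotalSegments to each Scan call. The following is only a local sketch of that pattern, with hash-based partitioning of an in-memory dict standing in for the service's own segmentation.

```python
# Sketch of the parallel-scan pattern: N workers each scan one
# disjoint segment of the data, and the results are merged.
from concurrent.futures import ThreadPoolExecutor

TABLE = {f"user#{i}": {"id": i, "score": i * 10} for i in range(100)}
TOTAL_SEGMENTS = 4

def scan_segment(segment):
    """Return the items belonging to one segment of the scan."""
    return [item for key, item in TABLE.items()
            if hash(key) % TOTAL_SEGMENTS == segment]

with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
    results = pool.map(scan_segment, range(TOTAL_SEGMENTS))
    items = [item for chunk in results for item in chunk]

print(len(items))  # all 100 items, retrieved as 4 parallel segments
```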
With Amazon DynamoDB, customers get:
- Fast, predictable performance at any scale. Customers can typically achieve average latencies in the single-digit milliseconds for database operations.
- Durability and high-availability. DynamoDB stores data on
Solid State Drives (SSDs) and replicates it synchronously across
multiple AWS Availability Zones in an AWS Region.
- Seamless scalability. For example, you can easily grow your
DynamoDB table from 1,000 writes per second to 100,000 writes per second
using the AWS Management Console.
- Easy administration. Amazon DynamoDB is a fully-managed
service. You don’t need to worry about hardware or software
provisioning, setup and configuration, software patching, operating a
reliable, distributed database cluster, or partitioning data over
multiple instances as you scale.
Getting started with Amazon DynamoDB is easy with our free tier of service. To learn more, visit the Amazon DynamoDB Page.
|
| May 14, 2013 |
We are delighted to announce that the AWS Management Console for the AWS GovCloud (US) region now supports Amazon Simple Workflow (Amazon SWF)!
The AWS GovCloud (US) region
is designed to allow U.S. government agencies and contractors to move
more sensitive workloads into the cloud by addressing their specific
regulatory and compliance requirements. The console provides an
easy-to-use graphical interface to manage your AWS GovCloud (US)
resources.
Amazon Simple Workflow is a service to coordinate work across multiple
machines. With our APIs, ease-of-use libraries, and control engine, you
can build multi-step application components that are independent of any
single component’s state and progress. This allows you to change and
scale your business logic with greater selectivity and ease – across the
AWS Cloud or your own data centers.
With Amazon Simple Workflow, there is no need to write your own state
machine or infrastructure code. Instead, you can focus on writing the
business logic that makes your application unique. The AWS Management
Console provides an easy-to-use graphical interface to manage these
powerful capabilities of Amazon SWF.
Instructions on how to access the console are available in our Users Guide.
To learn more about the service, please visit the Amazon SWF detail page, the AWS GovCloud (US) home page and contact us to get started!
|
| May 14, 2013 |
We are excited to announce AWS OpsWorks now offers a convenient view of
the Amazon CloudWatch metrics generated by your OpsWorks instances.
Without any additional cost or setup, you can see thirteen one-minute
metrics that provide an overview of the state of your instances. All
metrics are automatically collected, grouped, and filtered. You can
start with an overview of CPU, memory and load summarized by stack and
then drill down to specific layers and instances. All metrics can be
used to create alarms via Amazon CloudWatch.
A few clicks in the AWS Management Console are all it takes to get your first application running on AWS OpsWorks. You can learn more by reading how to use the OpsWorks monitoring view or joining our AWS OpsWorks webinar on May 23, 2013 at 10:00 AM PST. |
| May 14, 2013 |
We are excited to announce developers can now add Elastic Load Balancing
(ELB) to their OpsWorks application stacks and get all the built-in
capabilities ELB is known for, including:
- Elastic Load Balancing automatically scales its request handling capacity in response to incoming application traffic.
- SSL certificates are stored using IAM credentials, allowing you to control who can see your private keys.
- Elastic Load Balancing spans multiple AZs for reliability, but provides a single DNS name for simplicity.
- Elastic Load Balancing metrics such as request count and request latency are reported by Amazon CloudWatch.
- By default, Elastic Load Balancing supports SSL termination at the
Load Balancer, including offloading SSL decryption from application
instances, centralized management of SSL certificates, and encryption to
back-end instances with optional public key authentication.
A few clicks in the AWS Management Console are all it takes to get your first application running on AWS OpsWorks. You can learn more by reading the OpsWorks documentation or joining our AWS OpsWorks webinar on May 23, 2013 at 10:00 AM PST. |
| May 08, 2013 |
We are delighted to make two important announcements today: a new AWS Direct Connect location in Seattle supporting the AWS US West (Oregon) region, and AWS Direct Connect access to the AWS GovCloud (US) region from any AWS Direct Connect location. Both are now available!
You can use AWS Direct Connect to create a dedicated network connection
from your datacenter, office, or colocation environment to AWS.
Connections are always made to a particular Direct Connect location, and
can run at either 1 Gbps or 10 Gbps.
Instructions on how to set up AWS Direct Connect are available in our AWS Direct Connect Users Guide. For AWS GovCloud (US) access, please see the AWS GovCloud (US) Users Guide.
Join us for our weekly AWS GovCloud (US) Region Office Hours on May 14th, 1:00 – 3:00 PM EST to learn more about AWS Direct Connect for the AWS GovCloud (US) region.
Sign in to your AWS Management Console to order AWS Direct Connect today!
|
| May 08, 2013 |
We are excited to announce the AWS Management Pack for Microsoft System Center. The AWS Management Pack enables you to view and monitor your AWS resources directly in the System Center Operations Manager
console. This way, you can use a single, familiar console to monitor
all your resources, whether they are on-premises or in the AWS cloud.
The AWS Management Pack gives you a consolidated view of your AWS
resources across regions and Availability Zones. It also has built-in
integration with Amazon CloudWatch so that the metrics and alarms
defined in Amazon CloudWatch surface as performance counters and alerts
in Operations Manager. With the AWS Management Pack, you can gain deep
insight into the health and performance of your applications running
on Amazon EC2 instances. The diagram view generated by the
management pack makes it easy to traverse between the application and
the infrastructure hosting it, with just a few clicks.
The AWS Management Pack is available for “System Center 2012 –
Operations Manager” and “System Center Operations Manager 2007 R2”.
To learn more or download the AWS Management Pack, visit https://aws.amazon.com/windows/system-center/.
|
| May 07, 2013 |
We are excited to announce support for up to 4,000 IOPS per Amazon EBS
Provisioned IOPS volume. This is a fourfold increase over the
performance offered when Provisioned IOPS volumes launched last year.
Provisioned IOPS volumes are designed to provide predictable, high
performance for I/O intensive workloads such as databases, distributed
file systems and other enterprise applications; all of which are
available on AWS Marketplace.
You can get up to 4,000 IOPS from one volume. For performance beyond
4,000 IOPS, you can attach and stripe multiple volumes to deliver
thousands of IOPS to your application. You can set the level of
performance you need and EBS will consistently deliver it over the
lifetime of the volume. To enable your Amazon EC2 instance to fully
utilize the IOPS provisioned on an EBS volume, we recommend launching
it as an "EBS-optimized" instance,
which delivers dedicated throughput between Amazon EC2 and Amazon EBS.
The EBS-optimized option is currently available for our m1.large,
m1.xlarge, m2.2xlarge, m2.4xlarge, m3.xlarge, m3.2xlarge and c1.xlarge
instance types.
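The striping arithmetic above can be sketched in a few lines. The 4,000 figure is the new per-volume ceiling from this announcement; the target IOPS below is our own example number.

```python
# Back-of-the-envelope: how many Provisioned IOPS volumes to stripe
# (e.g. with software RAID 0) to reach an aggregate IOPS target.
import math

PER_VOLUME_LIMIT = 4000  # new per-volume maximum

def volumes_for(target_iops):
    """Minimum number of striped volumes needed to reach target IOPS."""
    return math.ceil(target_iops / PER_VOLUME_LIMIT)

target = 10000
n = volumes_for(target)
print(n, n * PER_VOLUME_LIMIT)  # 3 volumes provide up to 12000 aggregate IOPS
```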
One way to get started with Amazon EBS Provisioned IOPS is to launch a
product from AWS Marketplace with 1-Click. AWS Marketplace is an online
store where you can find, buy, and quickly deploy software that runs on
AWS such as high-performance versions of MongoDB, NuoDB and OrangeFS. You can learn more about these products on the AWS Marketplace Provisioned IOPS information page.
Amazon EBS Provisioned IOPS volumes, EBS-optimized instances and
AWS Marketplace products are now supported in all AWS regions except
GovCloud. For more information on using Amazon EBS Provisioned IOPS
volumes, please see the Amazon EC2 Developer Guide. |
| May 06, 2013 |
We are excited to announce the General Availability (GA) release of the AWS SDK for Node.js.
This SDK enables developers to tap into the cost-effective, scalable,
and reliable AWS cloud from their Node.js applications. Since releasing
the Developer Preview of the AWS SDK for Node.js in December, we
expanded support to cover the full set of AWS services, collaborated
with the community to fine tune the SDK design patterns, and added a few
new features. The latest SDK now supports proxy servers, IAM roles on
EC2 instances, and optionally using a Stream interface on operations.
This release moves the SDK to a stable API. Read the Getting Started Guide to begin using the SDK in your Node.js project. |
| May 01, 2013 |
Amazon Elastic MapReduce (EMR)
now supports S3 Server Side Encryption. This feature is useful for
customers who need to move or process large amounts of sensitive data.
S3 Server Side Encryption (S3 SSE)
makes it easy to encrypt data stored at rest in S3. With S3 SSE, every
S3 object is encrypted with a unique key; the key itself is encrypted
with a regularly rotated master key. Decryption happens automatically
when data is retrieved. S3DistCp
is an EMR feature that uses MapReduce to efficiently move large amounts
of data from S3 into HDFS, from HDFS to S3, and between S3 buckets. EMR
S3DistCp now supports S3 SSE.
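The per-object-key design described above is the envelope pattern: each object gets its own data key, and only that key is encrypted under the master key. The toy below illustrates the pattern only; XOR with random keystreams stands in for real AES, and none of this reflects S3's actual implementation.

```python
# Toy envelope encryption: encrypt each object with a unique data key,
# and wrap (encrypt) the data key under a master key. Do NOT use XOR
# schemes like this for real encryption.
import secrets

def xor(data, key):
    return bytes(a ^ b for a, b in zip(data, key))

def keystream(key, n):
    """Repeat a key to length n (toy stand-in for a real cipher)."""
    reps = -(-n // len(key))
    return (key * reps)[:n]

master_key = secrets.token_bytes(32)

def encrypt_object(plaintext, master_key):
    data_key = secrets.token_bytes(32)                       # unique per object
    ciphertext = xor(plaintext, keystream(data_key, len(plaintext)))
    wrapped_key = xor(data_key, master_key)                  # key under master key
    return ciphertext, wrapped_key

def decrypt_object(ciphertext, wrapped_key, master_key):
    data_key = xor(wrapped_key, master_key)                  # unwrap, then decrypt
    return xor(ciphertext, keystream(data_key, len(ciphertext)))

msg = b"sensitive record"
ct, wk = encrypt_object(msg, master_key)
print(decrypt_object(ct, wk, master_key))
```

Rotating the master key only requires re-wrapping the small data keys, not re-encrypting the objects themselves, which is what makes regular master-key rotation cheap.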
To learn more about this feature, please visit EMR’s Developer Guide. |
| May 01, 2013 |
We are excited to announce the launch of our newest edge location
in Seoul, Korea to serve end users of Amazon CloudFront and Amazon
Route 53. This is our first edge location in Korea; each new edge
location helps lower latency and improve performance for your end
users. We plan to continue to add new edge locations worldwide.
If you’re already using Amazon CloudFront or Amazon Route 53, you
don't need to do anything to your applications as requests are
automatically routed to this location when appropriate.
Amazon Route 53 is a highly available and scalable Domain Name
System (DNS) web service. Amazon Route 53 is designed to be fast, easy
to use, and cost-effective. It answers DNS queries with low latency by
using a global network of DNS servers. Queries for your domain are
automatically routed to the nearest DNS server, and thus answered with
the best possible performance.
Amazon CloudFront is a content delivery network (CDN) that can be
used to deliver your entire website, including dynamic, static and
streaming content using a global network of edge locations. It
integrates with other Amazon Web Services to give developers and
businesses an easy way to distribute content to end users with low
latency, high data transfer speeds, and no required minimum commitments.
We’d also like to invite you to join our “Whole Site Delivery with Amazon CloudFront”
webinar on May 16th at 10:00AM Pacific Time. In this webinar, we’ll
demonstrate how you can use Amazon CloudFront to help architect your
site to deliver both static and dynamic content (portions of your site
that change for each end-user). You can register here for this webinar.
Like all Amazon CloudFront edge locations, our Seoul edge location
supports all Amazon CloudFront features. With the addition of this
location, Amazon CloudFront now has a total of 40 edge locations
worldwide.
To learn more, please visit the detail page for Amazon CloudFront or Amazon Route 53. |
| Apr 30, 2013 |
We are excited to announce the launch of the Amazon Web Services Global Certification Program.
AWS Certifications designate individuals who demonstrate knowledge,
skills and proficiency with AWS services. This program is built around
the three primary roles for engineering teams delivering cloud-based
solutions: Solutions Architect, SysOps Administrator, and Developer.
Role-based certification credentials can be earned on three proficiency
levels: Associate, Professional and Master.
AWS certifications are designed to certify the technical skills and
knowledge associated with best practices for building secure and
reliable cloud-based applications using AWS technology. To earn an AWS
Certification, individuals must prove their proficiency by passing an
exam. Exams are administered through Kryterion
testing centers in more than 100 countries and 750 testing locations
worldwide. Once achieved, individuals can display the AWS Certified
logo on business cards and resumes to gain visibility for their AWS
expertise while fostering credibility with employers and peers.
The first certification to be offered is the “AWS Certified Solutions
Architect – Associate Level,” which certifies skills for technical
professionals and solutions architects involved in the design and
development of applications on AWS. Additional role-based
certifications, including certifications for Systems Operations (SysOps)
Administrators and Developers, will follow later this year.
To learn more about the AWS Certification Program, visit http://aws.amazon.com/certification. |
| Apr 25, 2013 |
We are excited to announce the addition of Amazon EBS-backed Amazon EC2
instances to AWS OpsWorks, giving users more instance types to choose from
for their development needs, including the AWS Free Usage Tier-eligible micro instance.
Users now have the ability to use micro instances to reduce costs in
development environments as well as larger instance types such as the
second generation M3 family for applications that demand more
performance. See offer terms for more details and other restrictions on the AWS Free Usage Tier.
A few clicks in the AWS Management Console are all it takes to get your first application running on AWS OpsWorks. You can learn more by reading our documentation walk-through. |
| Apr 24, 2013 |
AWS Marketplace applications are now available to all customers
using the AWS Sydney Region with 1-Click Deployment for building
solutions and running their businesses. Customers can easily find,
compare, and immediately start using the software listed in AWS
Marketplace, and experience lower latency when deploying in Sydney.
Over 100 software products for production, testing and development
purposes are currently available, including MongoDB, aiCache, Citrix Netscaler, F5, MicroStrategy
and others. Sellers continually add new AWS Marketplace products for
deployment in the AWS Sydney Region. For more information or to find
software on AWS Marketplace, visit: http://aws.amazon.com/marketplace. |
| Apr 23, 2013 |
We are delighted to announce that the AWS Management Console is now available for access to the AWS GovCloud (US) region!
When you sign in using the AWS GovCloud (US) specific login page, the
console provides access to your AWS GovCloud (US) resources and is
completely isolated from other AWS regions. The console provides an
easy-to-use graphical interface to manage your AWS GovCloud (US)
resources. Consoles available today are Amazon EC2, Amazon VPC, Amazon
S3, Amazon RDS, Amazon CloudWatch, AWS Identity and Access Management
(IAM), Amazon SQS and Amazon SNS. The Amazon DynamoDB, Amazon EMR and
Amazon SWF consoles will be coming soon.
Instructions on how to access the console are available in our Users Guide.
Join us for our weekly AWS GovCloud (US) Region Office Hours
on April 30th, 1:00 – 3:00 PM EST for a live demonstration of the
console and to learn more about AWS GovCloud (US). Also, register now
for the Intro to AWS GovCloud (US) Region Webinar on May 15, 1:30 – 2:30 PM EST.
To learn more, please visit our AWS GovCloud (US) home page and contact us to get started!
|
| Apr 22, 2013 |
We are thrilled to announce that Amazon Redshift and Amazon EC2 High Storage Instances are now available in EU West (Ireland).
Amazon Redshift is a fully managed, petabyte-scale data warehouse
service that makes it simple and cost-effective to efficiently analyze
all your data using your existing business intelligence tools. The
service is optimized for analyzing data sets of several hundred
gigabytes to a petabyte or more and can provide significantly better
performance for less than one tenth the price of most data warehousing
solutions available to you today. Amazon Redshift also frees you from
all the muck associated with provisioning, monitoring, backing up,
patching, securing, and scaling your data warehouse.
High Storage Instances (hs1.8xlarge) are an Amazon EC2 instance type
that is ideal for customers whose applications require high sequential
read and write performance over very large data sets. High Storage
instances provide customers with 35 EC2 Compute Units (ECUs) of compute
capacity, 117 GiB of RAM, and 48 Terabytes of storage across 24 hard
disk drives, delivering over 2.4 Gigabytes per second of sequential I/O
performance.
Amazon Redshift and Amazon EC2 High Storage Instances are now
available in US East (N. Virginia), US West (Oregon) and EU West
(Ireland), with additional regions coming soon. To learn more visit the Amazon Redshift detail page and the Amazon EC2 instance type page. |
| Apr 18, 2013 |
We are thrilled to announce that we have expanded the query capabilities
of DynamoDB. We call the newest capability Local Secondary Indexes
(LSI). While DynamoDB already allows you to perform low-latency queries
based on your table’s primary key, even at tremendous scale, LSI will
now give you the ability to perform fast queries against other
attributes (or columns) in your table. This gives you the ability to
perform richer queries while still meeting the low-latency demands of
responsive, scalable applications.
With LSI we expand DynamoDB’s existing query capabilities with support
for more complex queries. Customers can now create indexes on
non-primary key attributes and quickly retrieve records within a hash
partition (i.e., items that share the same hash value in their primary
key).
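To make the "second view of the same partition" idea concrete, here is an in-memory sketch: items sharing a hash key are also kept in a sorted view on an alternate attribute, so range queries on that attribute stay fast within the partition. In DynamoDB the index is maintained by the service; the attribute names and data below are ours.

```python
# Sketch of a local secondary index: per hash partition, a second
# sorted view keyed on a non-primary-key attribute (score).
from collections import defaultdict
import bisect

table = defaultdict(list)   # hash_key -> items (primary view)
lsi = defaultdict(list)     # hash_key -> sorted [(score, game)] index view

def put_item(item):
    table[item["user"]].append(item)
    # the "index" is updated alongside the base table on every write
    bisect.insort(lsi[item["user"]], (item["score"], item["game"]))

for row in [
    {"user": "alice", "game": "chess", "score": 1200},
    {"user": "alice", "game": "go", "score": 900},
    {"user": "alice", "game": "poker", "score": 1500},
    {"user": "bob", "game": "chess", "score": 800},
]:
    put_item(row)

# Query: alice's games with score >= 1000, served from the index view
lo = bisect.bisect_left(lsi["alice"], (1000, ""))
high_scores = [game for score, game in lsi["alice"][lo:]]
print(high_scores)  # ['chess', 'poker']
```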
The enhanced query flexibility that local secondary indexes provide
means DynamoDB can support an even broader range of workloads. At
Amazon, we have already made DynamoDB the default
choice for every application that does not require the flexibility of
relational databases like Oracle or MySQL. Customers tell us they’re
adopting the same practice, particularly in the areas of digital
advertising, social gaming and connected device applications where high
availability, seamless scalability, predictable performance and low
latency are critical.
Today, local secondary indexes must be defined at the time you create
your DynamoDB tables. In the future, we plan to provide you with an
ability to add or drop LSI for existing tables. If you want to equip an
existing DynamoDB table with local secondary indexes immediately, you can
export the data from your existing table using Elastic MapReduce, and import it to a new table with LSI.
Local secondary indexes are available today in all regions except
GovCloud. You can get started with DynamoDB and Local Secondary Indexes
by reading the developer guide.
|
| Apr 18, 2013 |
We have good news to share. Many of you have told us that data
encryption, at rest and in transit, is very important to you as you move
mission-critical database workloads to Amazon RDS. Today, Amazon RDS is
announcing support for Oracle’s Transparent Data Encryption and Native
Network Encryption in all regions. Both of these features are components
of Oracle’s Advanced Security option for the Oracle Database 11g
Enterprise Edition. Oracle Database 11g Enterprise Edition is available
on Amazon RDS for Oracle under the Bring-Your-Own-License (BYOL) model.
There is no additional charge to use these features.
Oracle Transparent Data Encryption encrypts data before it is written
to storage, and automatically decrypts data when reading from storage.
Oracle Transparent Data Encryption enables you to encrypt table spaces
or specific table columns using industry standard encryption algorithms
such as Advanced Encryption Standard (AES) and Data Encryption Standard
(Triple DES). Oracle Native Network Encryption encrypts the data as it
moves into and out of the database. Oracle Native Network Encryption
enables you to encrypt network traffic travelling over Oracle Net
Service using industry standard encryption algorithms such as AES and
Triple DES.
To learn more about using Oracle Transparent Data Encryption and
Native Network Encryption on Amazon RDS for Oracle, please visit our User Documentation. |
| Apr 18, 2013 |
AWS customers who are new to Citrix NetScaler on AWS Marketplace
will receive $175 of EC2 Promotional Credit if they use at least 200
hours between April 15th and May 31st, 2013. Visit the Citrix NetScaler VPX Platinum Edition page on Marketplace to learn more.
|
| Apr 17, 2013 |
We are excited to announce that AWS Elastic Beanstalk
now supports AWS Identity and Access Management (IAM) roles. Elastic
Beanstalk makes it easy to deploy and run your applications on AWS, and
with IAM roles, these applications can securely access AWS services. IAM
roles manage the muck of securely distributing your AWS access keys out
to your EC2 instances that have been launched through Elastic
Beanstalk.
When creating new environments using the Elastic Beanstalk console or using the eb command line,
Elastic Beanstalk can automatically create an IAM role. You can also
create and assign existing roles to Elastic Beanstalk environments.
You can easily grant additional permissions to this role to allow
your application to access AWS services such as Amazon DynamoDB or
Amazon S3. To learn more about IAM roles and Elastic Beanstalk, visit
the AWS Elastic Beanstalk Developer Guide. |
| Apr 10, 2013 |
The AWS Storage Gateway
helps you connect your on-premises IT environment with the AWS cloud.
Today, we are excited to announce support for running the AWS Storage
Gateway on an additional virtualization platform, Microsoft Hyper-V.
You can use the AWS Storage Gateway in two configurations.
Gateway-Cached volumes provide local, low-latency access to your most
frequently used files while storing all your data in Amazon S3’s
elastic, highly durable storage infrastructure. Gateway-Stored volumes
provide scheduled off-site backups to Amazon S3 for your on-premises
data. With support for Microsoft Hyper-V in addition to
already-supported VMware ESXi, you can now run the AWS Storage Gateway
on two of the most popular virtualization platforms.
Learn more and get started by visiting the AWS Storage Gateway User Guide. |
| Apr 09, 2013 |
We are thrilled to announce that Amazon EMR is now available in the AWS GovCloud (US) Region!
Amazon Elastic MapReduce (Amazon EMR) is a web service that makes it
easy to process or analyze vast amounts of data using Hadoop in the AWS
cloud. Using Amazon EMR, you can instantly provision as much or as
little capacity as you like to perform data-intensive tasks without
having to worry about time-consuming set-up, management or tuning of
Hadoop clusters.
The AWS GovCloud (US) Region was built with Government customers in
mind. AWS GovCloud (US) is an isolated AWS Region designed to allow US
government agencies and customers to move more sensitive workloads into
the cloud by addressing their specific regulatory and compliance
requirements.
The AWS GovCloud (US) Region adheres to U.S. International Traffic in
Arms Regulations (ITAR) requirements. You can run workloads, including
all categories of Controlled Unclassified Information (CUI) and
government-oriented publicly available data, in the AWS GovCloud (US)
Region. Depending on your requirements, you can also run unclassified
workloads in the AWS GovCloud (US) region to utilize the unique
capabilities of this region.
Join us for our weekly AWS GovCloud (US) Region Office Hours on April 16th, 1:00 – 3:00 PM EST which will feature a guest speaker from the Amazon EMR team. Also, register now for the Intro to AWS GovCloud (US) Region webinar on May 15, 1:30 – 2:30 PM EST.
To learn more, please visit our AWS GovCloud (US) home page, read about our Public Sector case studies, and contact us to get started!
|
| Apr 08, 2013 |
We are excited to announce that AWS Elastic Beanstalk for .NET
now supports configuration files as well as integration with Amazon VPC
and Amazon RDS. AWS Elastic Beanstalk for .NET allows you to easily run
and manage your .NET applications on AWS using Windows Server 2008 R2
or Windows Server 2012.
Using configuration files, you can set up software on Amazon EC2
instances within your environment, without having to create a custom
AMI. For example, you can install and configure Windows services to run
on the same instances as your web application. You can also use
configuration files to provision resources such as an Amazon DynamoDB
table, an Amazon CloudWatch alarm, or an Amazon SQS queue. To learn more
about configuration files, visit the AWS Elastic Beanstalk Developer Guide.
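As a flavor of what a configuration file can do, here is a sketch of an `.ebextensions` config that writes a file and installs a Windows service on each instance. The file names, paths, and service details are hypothetical placeholders; see the Developer Guide for the supported keys and exact syntax.

```yaml
# .ebextensions/01-worker.config (illustrative names and paths)
files:
  "C:\\app\\worker\\settings.json":
    content: |
      { "queueUrl": "https://example.com/placeholder" }
commands:
  install_worker_service:
    command: sc create MyWorker binPath= "C:\app\worker\worker.exe"
```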
Elastic Beanstalk .NET environments now seamlessly integrate with
Amazon RDS and can also run inside existing VPCs. Using Amazon VPC, you
can now easily deploy private .NET web applications including intranet
web applications and web service backends. To learn more about deploying
your Elastic Beanstalk application in a VPC or connecting to RDS, visit
"Using AWS Elastic Beanstalk with Amazon VPC" and "Using Amazon RDS" in the AWS Elastic Beanstalk Developer Guide. |
| Apr 04, 2013 |
We are excited to announce a price reduction of up to 26% on Windows
On-Demand EC2 instances. Today’s price drop continues the AWS tradition
of exploring ways to reduce costs and passing on the savings to our
customers. This reduction applies to the Standard (m1), Second-Generation Standard (m3), High-Memory (m2), and High-CPU (c1) instance families. All prices are automatically effective from April 1, 2013.
As an example, a typical Microsoft Windows Application Server running on
an m1.large On-Demand instance in US East (N. Virginia) would now cost
only $0.364 per hour instead of $0.460 per hour, a 20% drop in price,
which translates into more than $2,000 in quarterly savings for running
10 such instances. The size of the reduction varies by instance family
and region. We encourage you to visit the AWS Windows page for more information about Windows pricing on AWS.
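The figures quoted above can be checked with a quick back-of-the-envelope calculation (assuming a quarter of roughly 2,190 hours):

```python
# Back-of-the-envelope check of the m1.large example above.
old_rate = 0.460                    # $/hour, m1.large Windows On-Demand (old)
new_rate = 0.364                    # $/hour, new price
hours_per_quarter = 24 * 365 / 4    # ~2190 hours in a quarter

price_drop = (old_rate - new_rate) / old_rate
savings_10_instances = (old_rate - new_rate) * hours_per_quarter * 10

print(f"{price_drop:.0%} drop")                 # 21% drop (~20% as quoted)
print(f"${savings_10_instances:,.0f}/quarter")  # $2,102/quarter
```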
|
| Apr 03, 2013 |
We are excited to announce that we have extended the access policy
language to include support for policy variables. This new feature
allows you to define general-purpose policies that include variables,
so you do not have to explicitly list all the components of the policy.
For example, you can now use variables such as ‘username’ to create
policies that lock down users’ access to a specific S3 folder determined
by their username, or allow users to manage their own access keys and
assign the policy to a group instead of assigning an individual policy
to each user. This will simplify your policy management by reducing the
number of policies necessary to grant individualized access control to
AWS resources.
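As an illustrative sketch (the bucket name is hypothetical), a single group policy using the `${aws:username}` variable can scope each user to their own S3 folder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-bucket/home/${aws:username}/*"
    }
  ]
}
```

Note the `"Version": "2012-10-17"` element, which is required for policy variables to be interpreted.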
For more information about the access control language and policy variables, please visit the Policy Variables section of the Using IAM guide. To get started, please visit the AWS Management Console.
|
| Apr 02, 2013 |
We are thrilled to announce that Amazon Redshift and Amazon EC2 High Storage Instances are now available in US West (Oregon).
Amazon Redshift is a fully managed, petabyte-scale data warehouse
service that makes it simple and cost-effective to efficiently analyze
all your data using your existing business intelligence tools. The
service is optimized for analyzing data sets of several hundred
gigabytes to a petabyte or more and can provide significantly better
performance for less than $1,000 per terabyte per year, less than one
tenth the price of most data warehousing solutions available to you
today. Amazon Redshift also frees you from all the muck associated with
provisioning, monitoring, backing up, patching, securing, and scaling
your data warehouse.
High Storage Instances (hs1.8xlarge) are an Amazon EC2 instance type
that is ideal for customers whose applications require high sequential
read and write performance over very large data sets. High Storage
instances provide customers with 35 EC2 Compute Units (ECUs) of compute
capacity, 117 GiB of RAM, and 48 Terabytes of storage across 24 hard
disk drives, delivering over 2.4 Gigabytes per second of sequential I/O
performance.
Amazon Redshift and Amazon EC2 High Storage Instances are now
available in US East (N. Virginia) and US West (Oregon), with additional
regions coming soon. To learn more visit the Amazon Redshift detail page and the Amazon EC2 instance type page. |
| Apr 02, 2013 |
We are excited to announce that we are reducing Amazon S3 request
prices in all nine of our regions. We are lowering the prices for GET
requests by 60% and the prices for PUT, LIST, COPY, and POST requests by
50%. For example, in the US Standard Region, we are reducing the price
of every 1,000 PUT requests from $0.01 to $0.005 and the price of every
10,000 GET requests from $0.01 to $0.004.
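Those per-request figures work out to the quoted percentage cuts, as a quick check shows:

```python
# Per-request prices in the US Standard Region, before and after.
put_old, put_new = 0.01 / 1000, 0.005 / 1000      # $ per PUT request
get_old, get_new = 0.01 / 10000, 0.004 / 10000    # $ per GET request

put_cut = 1 - put_new / put_old
get_cut = 1 - get_new / get_old
print(f"PUT cut: {put_cut:.0%}, GET cut: {get_cut:.0%}")  # PUT cut: 50%, GET cut: 60%
```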
We are happy to pass along these savings to you as we continue to
drive down our costs. The new lower prices for all regions can be found
on the Amazon S3 pricing page. New prices are effective April 1st and will be applied to your bill for all requests on or after this date. |
| Mar 26, 2013 |
We are excited to announce AWS CloudHSM, a new service enabling
customers to increase data security and meet compliance requirements by
using dedicated Hardware Security Module (HSM) appliances within the AWS
Cloud. The CloudHSM service allows customers to securely generate,
store, and manage the cryptographic keys used for data encryption in
such a way that the keys are accessible only by the customer.
AWS provides a variety of solutions for protecting sensitive data
within the AWS platform. But for some applications and data subject to
rigorous contractual or regulatory mandates for managing cryptographic
keys, additional protection is necessary. Until now, organizations’ only
options were to maintain data in on-premises datacenters or deploy
local HSMs to protect encrypted data in the cloud. Unfortunately, those
options either prevented customers from migrating their most sensitive
data to the cloud or significantly slowed application performance.
With AWS CloudHSM, customers maintain full ownership, control and
access to keys and sensitive data while Amazon manages the HSM
appliances in close proximity to their applications and data for maximum
performance.
For more information about AWS CloudHSM, visit http://aws.amazon.com/cloudhsm/. |
| Mar 19, 2013 |
Adobe ColdFusion 10 on AWS Marketplace is an easy and affordable
way to access powerful yet easy-to-use features for building
high-performing, enterprise-ready applications that scale dynamically to
meet your business needs. Easily create interactive web applications
leveraging unique built-in HTML5 support. 1-Click deploy running on Windows Server or Ubuntu. |
| Mar 19, 2013 |
We are excited to announce the global availability of
EBS-optimized support for four additional instance types: m3.xlarge,
m3.2xlarge, m2.2xlarge, and c1.xlarge. EBS-optimized instances deliver
dedicated throughput between Amazon EC2 and Amazon EBS, with options of
500 or 1,000 megabits per second depending on the instance type used.
EBS-optimized instances are designed for use with both Standard and
Provisioned IOPS EBS volumes. Standard volumes deliver 100 IOPS on
average with a best effort ability to burst to hundreds of IOPS, making
them well-suited for workloads with moderate and bursty I/O needs. When
attached to an EBS-optimized instance, Provisioned IOPS volumes are
designed to consistently deliver up to 2000 IOPS from a single volume,
making them ideal for I/O intensive workloads such as databases.
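For a rough sense of scale, the dedicated-throughput options convert as follows; the 16 KiB I/O size used below is our assumption (the size at which Provisioned IOPS volumes were rated), not stated in the announcement:

```python
# Dedicated EBS throughput options, converted from megabits to megabytes.
for mbps in (500, 1000):
    print(f"{mbps} Mb/s = {mbps / 8:.1f} MB/s")   # 62.5 and 125.0 MB/s

# Approximate streaming throughput of a volume at 2,000 Provisioned IOPS,
# assuming a 16 KiB I/O size (assumption, not from the announcement).
iops, io_kib = 2000, 16
print(f"{iops * io_kib / 1024:.2f} MiB/s")        # 31.25 MiB/s
```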
For more information on Amazon EBS-optimized EC2 instances, please see the Amazon EC2 detail page. |
| Mar 18, 2013 |
We are delighted to announce that Amazon Simple Workflow is now available in the AWS GovCloud (US) region!
Amazon Simple Workflow provides customers with the building blocks and
processing engine to handle the complexity of application infrastructure
programming and state machinery. This lets customers focus on the
business logic that makes them unique. By using Simple Workflow to
assemble applications with multiple interrelated steps, customers get
cluster-wide functionality to coordinate tasks, manage the state of
processes, retry on failed steps, and distribute task load.
Customers are using Simple Workflow’s building blocks to: create SaaS
analytics platforms, process big data, transcode video, manage web-based
orders, train machine learning algorithms, deploy enterprise mobile
assets, distribute mobile media, manage data center deployments,
optimize on-line advertising, and much more.
The AWS GovCloud (US) Region was built with Government customers in
mind. AWS GovCloud (US) is an isolated AWS Region designed to allow US
government agencies and customers to move more sensitive workloads into
the cloud by addressing their specific regulatory and compliance
requirements. The AWS GovCloud (US) Region adheres to U.S. International
Traffic in Arms Regulations (ITAR) requirements. You can run
workloads, including all categories of Controlled Unclassified
Information (CUI) and government-oriented publicly available data, in
the AWS GovCloud (US) Region. Depending on your requirements, you can
also run unclassified workloads in the AWS GovCloud (US) region to
utilize the unique capabilities of this region.
To learn more, please visit our AWS GovCloud (US) home page, read about our Public Sector case studies, and contact us to get started!
|
| Mar 13, 2013 |
Are you developing, testing, or migrating to SQL Server 2012? Your
task just got easier with Amazon RDS for SQL Server. We are pleased to
announce that you can upgrade existing SQL Server 2008 R2 DB Instances
to SQL Server 2012 starting today using the new Major Version Upgrade
feature. The Major Version Upgrade feature is available in all AWS
regions for all SQL Server editions, including Express, Web, Standard,
and Enterprise.
With the Major Version Upgrade feature, you can easily develop and test
your applications using the new features Microsoft has introduced as
part of SQL Server 2012. In addition, you can upgrade your existing SQL
Server 2008 R2 DB Instances and leverage new SQL Server 2012 features
with your applications. A few of these new features are highlighted
below:
- Contained database – a database that is isolated from other SQL
Server databases, including system databases like the ‘master’ database.
This simplifies the task of moving databases from one instance of SQL
Server to another by removing dependencies on other SQL Server
databases.
- Columnstore index – a new type of index for data warehouse
queries. Columnstore indexes can greatly reduce I/O and memory
utilization on large queries.
- Sequence object – an object that acts as a counter, similar to SQL Server’s identity column, but is not restricted to a single table.
- User-defined roles – a new role management system in SQL Server 2012 that allows users to create custom server roles.
Upgrading your DB Instances to SQL Server 2012 is as easy as pushing a button on the Amazon RDS Console or using the Command Line Interface.
This new feature further enhances the many benefits Amazon RDS for SQL
Server offers to Microsoft SQL Server customers including easy
deployment, push button scalability, automated back-ups, point-in-time
restore, automated software upgrades and patching, and pay-as-you-go
flexibility.
For more information on upgrading your Amazon RDS for SQL Server DB Instances, please visit our Major Version Upgrade documentation.
To learn more about Amazon RDS for SQL Server, please visit the Amazon RDS for SQL Server detail page, our documentation and our FAQs.
|
| Mar 13, 2013 |
We have three pieces of good news to share. First, you can provision up
to 3 TB in storage and up to 30,000 IOPS for your database instances
(maximum realized IOPS will vary by engine type). Second, you can
convert an existing Amazon RDS
instance that uses standard storage to use Provisioned IOPS storage.
Finally, you can scale IOPS and storage independently. All three
features are enabled for Amazon RDS for MySQL and Oracle database
engines. Here are the details:
Provision up to 3TB storage and 30,000 IOPS: You
can now provision up to 3TB storage and 30,000 IOPS per database
instance – three times the previous limit of 1 TB and 10,000 IOPS per
database instance. For a workload with 50% writes and 50% reads running
on an m2.4xlarge instance, you can realize up to 25,000 IOPS for Oracle
and 12,500 IOPS for MySQL. However, by provisioning up to 30,000 IOPS,
you may be able to achieve lower latency and higher throughput. Your
actual realized IOPS may vary from the amount you provisioned based on
your database workload, instance type, and database engine choice. Refer
to the Factors That Affect Realized IOPS section of the documentation to learn more.
Convert an existing database instance to use Provisioned IOPS storage:
You can now convert database instances using standard storage to use
Provisioned IOPS storage and get consistent throughput and low I/O
latencies. Use the "Modify" action for your database instance on the DB
Instances page of the AWS Management Console,
check the "Use Provisioned IOPS" check box, specify the number of IOPS
required for your workload, and proceed through the wizard. You can also
perform this operation via the Amazon RDS APIs and the Command Line Interface.
Scale storage and IOPS independently: You can now
scale IOPS (in increments of 1000) and storage independently. The ratio
of IOPS provisioned to the storage requested (in GB) should be between 3
and 10. For example, for a database instance with 1000 GB of storage,
you can provision from 3,000 to 10,000 IOPS. You can scale the IOPS up
or down depending on factors such as seasonal variability of traffic to
your applications.
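The two constraints above (IOPS in increments of 1,000 and an IOPS-to-storage-GB ratio between 3 and 10) can be captured in a small validation sketch; the function name is ours, not part of any AWS API:

```python
def valid_piops_config(storage_gb, iops):
    """Check an RDS Provisioned IOPS request against the rules above:
    IOPS in increments of 1,000, and an IOPS-to-storage-GB ratio of 3-10."""
    if iops % 1000 != 0:
        return False
    ratio = iops / storage_gb
    return 3 <= ratio <= 10

print(valid_piops_config(1000, 3000))   # True  (ratio 3)
print(valid_piops_config(1000, 10000))  # True  (ratio 10)
print(valid_piops_config(1000, 12000))  # False (ratio 12, above the limit)
print(valid_piops_config(1000, 3500))   # False (not a 1,000-IOPS increment)
```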
Amazon RDS Provisioned IOPS is available in all AWS Regions except
GovCloud. To learn more and get started with Amazon RDS Provisioned
IOPS, please refer to the Working with Provisioned IOPS storage section of the Amazon RDS User Guide.
You are also invited to attend the webinar Amazon RDS - Running low admin, high performance databases in the cloud
with the Amazon RDS team on March 27th, where you can learn more about
Provisioned IOPS storage and how customers are using it to successfully
run high performance applications. Register here.
|
| Mar 12, 2013 |
We are excited to share three announcements today that give you
more flexibility and lower costs when deploying Amazon ElastiCache, a
fully managed, in-memory, Memcached-compatible caching service.
- Amazon ElastiCache availability in Sydney. Starting today, customers
can use Amazon ElastiCache clusters for their scale-out caching needs
in the Asia Pacific (Sydney) Region. Sydney joins Singapore and Tokyo
as the third Region in Asia Pacific and the ninth Region worldwide.
- Global expansion of enhanced (M3) cache nodes. The enhanced (M3)
cache nodes, built on the next generation AWS infrastructure, are now
available in all regions except AWS GovCloud.
- Reduced cache node pricing. We have reduced prices of M1 cache nodes
in the US West (Northern California), EU (Ireland), and South America
(São Paulo) Regions. As a result, the m1.small cache node is now 19%
cheaper in these regions. In addition, we reduced prices for M2, M3 and
C1 cache nodes in the South America (São Paulo) region.
Amazon ElastiCache improves the performance of your web applications
by retrieving data from a fast, in-memory cache instead of relying
entirely on disk-based storage. Unlike other caching mechanisms, Amazon
ElastiCache is fully managed so you don’t have to worry about
maintaining your own caching infrastructure. In addition, it is
Memcached-compatible, so existing Memcached-enabled applications should
work with ElastiCache without any code changes.
To learn more about the service, please see our recent presentation from the first AWS re:Invent Conference, and review the Getting Started Guide. You can launch a cache cluster with a few clicks using the AWS Management Console or a few simple API calls. For more information about the new cache node types and prices, please visit the Amazon ElastiCache page. |
| Mar 12, 2013 |
We are excited to announce the immediate availability of a new
feature: Amazon Machine Image
(AMI) Copy. AMI Copy enables you to easily copy your AMIs across AWS
regions, enabling the following scenarios:
- Consistent and Simple Multi-Region Deployment – You can copy an AMI
from one region to another, enabling you to easily launch consistent
instances based on the same AMI into different regions.
- Scalability – You can more easily design and build world-scale
applications that meet the needs of your users, regardless of their
location.
- Performance – You can increase performance by distributing your
application and locating critical components of your application in
closer proximity to your users. You can also take advantage of
region-specific features such as instance types or other AWS services.
- Even Higher Availability – You can design and deploy applications
across AWS regions, to increase availability.
To use AMI Copy, simply select the AMI to be copied from within
the AWS Management Console, choose the destination region, and start the copy.
AMI Copy can also be accessed via the EC2 Command Line Interface or EC2 API as
described in the EC2 User’s Guide. Once
the copy is complete, the new AMI can be used to launch new EC2 instances in
the destination region.
There are no additional charges for using AMI Copy, but you will
be charged to transfer the AMI out
of the source region and to store the copied AMI in the destination region.
|
| Mar 11, 2013 |
We are excited to announce that AWS Elastic Beanstalk
now supports Node.js applications. Elastic Beanstalk already makes it
easier to quickly deploy and manage Java, PHP, Python, Ruby, and .NET
applications on AWS. Now, Elastic Beanstalk offers the same
functionality for Node.js applications.
Elastic Beanstalk for Node.js supports a wide number of configuration
settings to help you customize the environment for your application.
You can easily configure HTTP or TCP load balancing,
configure the version and command to launch your Node.js application,
and improve performance by offloading static content handling to Apache or Nginx.
To get started using Elastic Beanstalk for Node.js, visit the AWS Elastic Beanstalk Developer Guide. Also check out the walkthroughs for how to deploy Express and Geddy applications on Elastic Beanstalk.
Using Elastic Beanstalk, you can seamlessly integrate your
application with Amazon RDS. You can also use configuration files to
customize your Amazon EC2 instances or to provision additional AWS
resources such as Amazon DynamoDB tables and Amazon ElastiCache
clusters. To learn more about customizing your Elastic Beanstalk
environment, visit the AWS Elastic Beanstalk Developer Guide. |
| Mar 08, 2013 |
AWS customers who are new to SAP HANA One on the AWS Marketplace
will receive $120 of AWS promotional credit if they use at least 10 hours
of SAP HANA One between February 18 and March 31, 2013. Visit the SAP HANA One product page on AWS Marketplace to learn more. |
| Mar 07, 2013 |
We are excited to announce a reduction in Amazon DynamoDB pricing in all
AWS regions. The price of indexed data storage has decreased by up to
75%. The price of provisioned throughput capacity has decreased by up to
35%.
We have also introduced a new Reserved Capacity offering. Reserved
Capacity pricing offers significant savings over the normal price of
DynamoDB provisioned throughput capacity. When you buy Reserved
Capacity, you pay a one-time upfront fee and commit to paying for a
minimum usage level for the duration of the Reserved Capacity term.
Using Reserved Capacity pricing, you can save up to 53% with a 1-year
term and up to 76% with a 3-year term.
With Amazon DynamoDB, customers get:
- Fast, predictable performance at any scale. Customers can typically achieve average latencies in the single-digit milliseconds for database operations.
- Durability and high-availability. DynamoDB stores data on
Solid State Drives (SSDs) and replicates it synchronously across
multiple AWS Availability Zones in an AWS Region.
- Seamless scalability. For example, you can easily grow your
DynamoDB table from 1,000 writes per second to 100,000 writes per second
using the AWS Management Console.
- Easy administration. Amazon DynamoDB is a fully-managed
service. You don’t need to worry about hardware or software
provisioning, setup and configuration, software patching, operating a
reliable, distributed database cluster, or partitioning data over
multiple instances as you scale.
Getting started with Amazon DynamoDB is easy with our free tier of service. To learn more, visit the Amazon DynamoDB Page.
|
| Mar 05, 2013 |
We are excited to announce that starting March 2013, the AWS Free
Usage Tier will include Amazon ElastiCache nodes. With this
announcement, customers can gain hands-on experience with Amazon
ElastiCache at no cost. Customers eligible for the AWS Free Usage Tier
can now use up to 750 hours per month of a t1.micro cache node.
The expanded Free Usage Tier with Amazon ElastiCache t1.micro cache
nodes is available today in all regions, except for Asia Pacific
(Sydney) and AWS GovCloud. For more information about the AWS Free Usage
Tier, please visit the AWS Free Usage Tier page. To get started using Amazon ElastiCache, visit the Amazon ElastiCache detail page. |
| Mar 04, 2013 |
We are excited to announce a reduction in reserved instance
pricing for Amazon EC2 running Linux/UNIX, Red Hat Enterprise Linux, and
SUSE Linux Enterprise Server. Linux/UNIX Reserved Instance prices will
decrease by up to 27% for Amazon EC2 for the Standard (M1),
Second-Generation Standard (M3), High-Memory (M2), and High-CPU (C1)
instance families. New Reserved Instance prices will only apply to
Reserved Instance purchases made on or after March 5th. Today’s price
drop represents the 26th price drop for AWS, and we are delighted to
continue to pass along savings to you as we innovate and drive down our
costs.
With the new pricing, Reserved Instances will provide savings of up
to 65% compared to On-Demand instances, and you will automatically
receive additional savings on your future purchases of Reserved
Instances in that AWS Region when you have more than $250,000 in active
upfront Reserved Instances. We recommend that you take this opportunity
to review your current usage and to determine if you would like to
purchase additional Reserved Instances.
To learn more, please visit the Amazon EC2 pricing page
for the complete list of new lower prices and an overview of the volume
discount program. For more details on optimizing your AWS costs, please
visit the AWS Economics Center. And during the month of March you can take advantage of a free trial of AWS Trusted Advisor
to generate a personalized report on how you can optimize your bill by
taking advantage of the new, lower Reserved Instance prices. |
| Mar 04, 2013 |
We are pleased to announce a significantly easier way for you to monitor a number of log files generated by your Amazon RDS
DB Instances. Until now, you could monitor most of these
database logs only by querying the database. You can now view database log
files directly using the AWS Management Console or download them using
Amazon RDS APIs to diagnose, troubleshoot, and fix database
configuration or performance issues. This functionality is available for
all the database engines supported by Amazon RDS: MySQL, Oracle, and
SQL Server.
You have three different ways to access the log files using the AWS Management Console or Amazon RDS APIs:
- View: You can view the content of a log file as of a point in
time in the AWS Management Console. Just navigate to the "logs" section
corresponding to your DB Instance, select the log file you are
interested in and click "View".
- Watch: You can obtain real time updates to log files directly
using the AWS Management Console. Just navigate to the "logs" section
corresponding to your DB Instance, select the log file you are
interested in and click "Watch". You will then be able to see the last
few lines of the log file and any ongoing updates being made by the
database engine as they occur.
- Download: You can use the rds-download-db-logfile command to
download the content of your log files. Note that this functionality is
not currently available through the AWS Management Console. Visit the Downloading a Database Log File section of the Amazon RDS User Guide to learn more.
The types of log files available through this functionality, by database engine, are as follows:
- MySQL: You can monitor MySQL Error Log, Slow Query Log and
General Log directly through the AWS Management Console or Amazon RDS
APIs. While the MySQL Error Log is generated by default, you need to
configure DB Parameter Groups to enable the generation of Slow Query and
General Logs to the file system. All these log files are rotated every
hour and only the files corresponding to the last 24 hours are retained.
Visit the Working with MySQL Database Log Files section of the User Guide to learn more.
- Oracle: You can access Alert Log and Trace Files directly
through the AWS Management Console or Amazon RDS APIs. These files are
retained for seven days by default. You can configure the retention
period as per your needs. Visit the Working with Oracle Database Log Files section of the User Guide to learn more.
- SQL Server: You can access Error Log, Agent Log and Trace
Files directly through the AWS Management Console or Amazon RDS APIs.
These files are retained for seven days by default. You can configure
the retention period as per your needs. Visit the Working with SQL Server Database Log Files section of the User Guide to learn more.
|
| Feb 28, 2013 |
We have good news to share. Effective March 1, 2013, we are reducing
prices and expanding the free tier for AWS messaging services – the
Simple Queue Service (SQS) and Simple Notification Service (SNS).
1. SQS API prices will decrease by 50% to $0.50 per million API requests.
2. SNS API prices will decrease by 17% to $0.50 per million API requests.
3. The SQS and SNS free tiers will each expand to 1 million free API requests per month, up 10X from 100K requests per month.
Our customers constantly discover powerful new ways to build more
scalable, elastic and reliable applications using SQS and SNS. We
learned, for example, that some customers use SQS as a buffer ahead of their databases and other services. Other customers combine SNS + SQS to fanout/transmit identical messages to multiple queues.
Over time, we've optimized our own systems in order to lower costs, and
make SQS and SNS available for new customers and new workloads. Price
cuts coupled with SQS features such as long polling (which reduces
extraneous polling) and extensions to AmazonSQSAsyncClient (enables
easier batching of outgoing messages, and also pre-fetching of incoming
messages) give our customers a very cost-effective solution for their
messaging needs. SQS can be even more cost-effective for customers that
use batching, since SQS batch requests cost the same as single messages.
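To illustrate why batching matters at the new $0.50-per-million price (the message count below is arbitrary; SendMessageBatch accepts up to 10 messages per request):

```python
# Cost sketch at $0.50 per million API requests. A batch request is
# billed like a single request, so packing 10 messages per call cuts
# the request count (and cost) by 10x.
price_per_request = 0.50 / 1_000_000
messages = 50_000_000            # arbitrary example volume

cost_unbatched = messages * price_per_request
cost_batched = (messages / 10) * price_per_request   # 10 messages per batch

print(f"${cost_unbatched:.2f}")  # $25.00
print(f"${cost_batched:.2f}")    # $2.50
```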
We are happy to pass along these savings to you as we continue to
innovate and drive down our costs. The expanded free tier and lower
prices will be available in all regions except GovCloud.
To learn more or get started with SQS and SNS, visit http://aws.amazon.com/sqs and http://aws.amazon.com/sns.
|
| Feb 27, 2013 |
We are excited to announce that Amazon CloudSearch is now available in
the US West (Oregon), US West (N. California), and Asia Pacific
(Singapore) Regions. These new Regions join the US East (Northern
Virginia) Region and the recently added EU (Ireland) Region.
Amazon CloudSearch is a fully-managed search service in the cloud that
allows customers to easily integrate fast and highly scalable search
functionality into their applications. With a few clicks in the AWS
Management Console, developers simply create a search domain, upload the
data they want to make searchable to Amazon CloudSearch, and the
service then automatically provisions the technology resources required
and deploys a highly tuned search index.
Getting started with Amazon CloudSearch is easy with our free trial program. To learn more, see the Amazon CloudSearch Page. |
| Feb 26, 2013 |
We are excited to announce the beta release of AWS Diagnostics for
Microsoft Windows Server. AWS Diagnostics for Microsoft Windows Server
is an easy to use tool that you can run on your EC2 Windows Server
instances. It is a valuable tool not just for collecting log files
and troubleshooting issues, but also for proactively identifying
possible areas of concern.
This tool can, for example, be used to diagnose configuration mismatch
issues between the Windows Firewall and the Amazon EC2 security group
that may affect your applications. It can even examine EBS boot volumes
from other instances and collect relevant logs for troubleshooting
Windows Server from the volume.
AWS Diagnostics for Microsoft Windows Server is free for AWS customers. You can learn more about it at http://aws.amazon.com/windows/awsdiagnostics/.
Also note that this tool is a beta release, and your feedback will be
extremely useful in further improving it. You can give us
feedback here.
|
| Feb 20, 2013 |
We are excited to announce new AWS CloudFormation
deployment enhancements and expanded support for EBS-Optimized
instances. AWS CloudFormation makes it easy for you to provision and
configure a set of related resources, including installing and
configuring software on your Amazon EC2 instances.
Rolling Deployments for Auto Scaling Groups
You can now define update policies on Auto Scaling groups in
CloudFormation templates. These update policies describe how instances
in the Auto Scaling group are replaced or modified as part of a stack
update operation. Update policies give you control over the number of
instances that can be modified concurrently, the number of instances
that should remain in service, and the wait time between instance
updates. With rolling deployments, you reduce downtime when updating
your application.
To learn more about update policies for Auto Scaling, see the AWS CloudFormation User Guide.
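As a sketch, an `UpdatePolicy` attribute on an Auto Scaling group resource in a template might look like this (the resource name and values are illustrative, and the group's required `Properties` are omitted):

```json
"WebServerGroup": {
  "Type": "AWS::AutoScaling::AutoScalingGroup",
  "UpdatePolicy": {
    "AutoScalingRollingUpdate": {
      "MaxBatchSize": "1",
      "MinInstancesInService": "2",
      "PauseTime": "PT5M"
    }
  },
  "Properties": {}
}
```

Here `MaxBatchSize` caps how many instances are replaced at once, `MinInstancesInService` keeps capacity in service during the update, and `PauseTime` (an ISO 8601 duration) is the wait between batches.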
Cancel and Rollback Action for Stack Updates
AWS CloudFormation now supports the ability to cancel a stack update.
Using the cancel action, you can interrupt the stack update operation
and trigger a rollback. The cancel action can be used in concert with
update policies to automate the cancellation and rollback of a
deployment. As you apply updates to your Auto Scaling group, you can
validate your deployment and decide to cancel the operation.
To learn more about the cancel action, visit the AWS CloudFormation User Guide.
EBS-Optimized Instances for Auto Scaling Groups
Starting today, you can also provision EBS-optimized instances inside
Auto Scaling groups using CloudFormation templates. EBS-optimized
instances provide dedicated throughput to Amazon EBS, with an optimized
configuration stack designed for high EBS I/O performance.
To learn more about how you can leverage EBS-optimized instances for Auto Scaling groups, visit the AWS CloudFormation User Guide. |
| Feb 18, 2013 |
We are excited to announce AWS OpsWorks, a new application management
service for managing applications of any scale or complexity on the AWS
cloud. OpsWorks features an integrated experience for managing the
complete application lifecycle, including resource provisioning,
configuration management, application deployment, software updates,
monitoring, and access control.
Three key attributes of OpsWorks are:
- Operational Control. OpsWorks promotes conventions and sane
defaults, such as template security groups. It also supports the ability
to customize any aspect of an application’s configuration. Developers
can reproduce exact configurations on new instances and apply changes to
all instances, ensuring consistency.
- Automation. OpsWorks uses automation to simplify operations.
Users can leverage its event-driven configuration system and rich
deployment tools to efficiently manage an application over its lifetime.
OpsWorks supports customizable deployments, rollback, patch management,
auto scaling, and auto healing. Application updates can be deployed by
updating a single configuration and clicking a button, reducing the time
spent on routine tasks.
- Flexibility. OpsWorks supports a wide variety of application
architectures and any software with a scripted installation. Because
OpsWorks uses the Chef framework, developers can use existing recipes or
leverage hundreds of community-built configurations.
A few clicks in the AWS Management Console are all it takes to get
your first application running on AWS OpsWorks. You can learn more by
visiting the AWS OpsWorks detail page or joining our Introduction to AWS OpsWorks webinar on March 18, 2013 at 10:00 AM PST. |
| Feb 14, 2013 |
We are thrilled to announce the availability of Amazon Redshift,
a fully managed, petabyte-scale data warehouse service that makes it
simple and cost-effective to efficiently analyze all your data using
your existing business intelligence tools.
Amazon Redshift is optimized for analyzing data sets of several
hundred gigabytes to a petabyte or more and can provide significantly
better performance at less than one tenth the price of most data
warehousing solutions available to you today. On-demand pricing starts
at $0.85 per hour for a 2-terabyte data warehouse, scaling linearly to a
petabyte or more. Reserved instance pricing lowers the effective price
to $0.228 per hour or under $1,000 per terabyte per year. Amazon
Redshift also frees you from all the muck associated with provisioning,
monitoring, backing up, patching, securing, and scaling your data
warehouse.
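The headline numbers above are easy to sanity-check. A quick back-of-the-envelope sketch, assuming a 2-terabyte cluster billed for every hour of the year:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# On-demand: $0.85/hour for a 2 TB data warehouse
on_demand_per_tb_year = 0.85 * HOURS_PER_YEAR / 2

# Reserved: effective $0.228/hour for the same 2 TB cluster
reserved_per_tb_year = 0.228 * HOURS_PER_YEAR / 2

print(round(on_demand_per_tb_year, 2))  # 3723.0
print(round(reserved_per_tb_year, 2))   # 998.64, under $1,000/TB/year
```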
Amazon Redshift is available in US East (N. Virginia) with additional regions coming soon. To learn more, visit the Amazon Redshift detail page. |
| Feb 13, 2013 |
We have good news to share. Effective at the beginning of this
month (February 1, 2013), Amazon RDS is lowering Multi-AZ deployment
prices globally for MySQL and Oracle editions.
Amazon RDS Multi-AZ
deployments offer enhanced availability for your production database
workloads. When a database is run as a Multi-AZ deployment, Amazon RDS
operates a standby instance which maintains an up-to-date copy of the
primary database. In case of an instance, storage, or network failure,
Amazon RDS automatically initiates a failover from the primary to the
standby. This ensures minimal database availability impact to your
application.
Today, many customers run production workloads on Amazon RDS as
Multi-AZ deployments. However, we also know that there are many
customers who've wanted to run Amazon RDS Multi-AZ but haven't been
able to do so yet at the current price. So, we're excited to lower our
Amazon RDS Multi-AZ prices to make it even easier for customers to run
production databases on Amazon RDS as Multi-AZ deployments.
For your quick reference, new pricing for an M1.Small instance for
On-demand MySQL and Oracle (BYOL) Multi-AZ deployments is shown in table
1 below.
Table 1: Amazon RDS for MySQL and Oracle BYOL On-Demand Multi-AZ Deployment Prices for M1.Small DB Instance
| Region | Old Price | New Price | Savings |
| US East (Northern Virginia) | $0.180 | $0.153 | 15% |
| US West (Northern California) | $0.230 | $0.167 | 27% |
| US West (Oregon) | $0.180 | $0.153 | 15% |
| AWS GovCloud (US) | $0.240 | $0.187 | 22% |
| Europe (Ireland) | $0.230 | $0.167 | 27% |
| Asia Pacific (Singapore) | $0.230 | $0.196 | 15% |
| Asia Pacific (Tokyo) | $0.240 | $0.204 | 15% |
| Asia Pacific (Sydney) | $0.230 | $0.196 | 15% |
| South America (Sao Paulo) | $0.300 | $0.204 | 32% |
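The savings column in Table 1 follows directly from the old and new hourly rates; a one-line helper (illustrative, not an AWS API) reproduces it:

```python
def savings_pct(old_price: float, new_price: float) -> int:
    """Percent saved by the new Multi-AZ hourly rate, rounded as in Table 1."""
    return round((old_price - new_price) / old_price * 100)

print(savings_pct(0.180, 0.153))  # 15  (US East)
print(savings_pct(0.300, 0.204))  # 32  (Sao Paulo)
```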
To learn more about Amazon RDS Multi-AZ and our new prices, please visit Amazon RDS.
Sincerely,
The Amazon Relational Database Service Team |
| Feb 11, 2013 |
We are excited to announce the release of DNS Failover for Route
53, Amazon’s Domain Name System (DNS) web service. With DNS Failover,
Amazon Route 53 can help detect an outage of your website and redirect
your end users to alternate locations where your application is
operating properly. When you enable this feature, Route 53
health-checking agents will monitor each location (or "endpoint") of
your application to determine its availability. In the event an endpoint
fails, Route 53 will route traffic away from the failed endpoint and to
other, healthy endpoints. This helps add redundancy to your
applications and maintain high availability for your end users.
You can take advantage of Amazon Route 53’s traffic management
capabilities to improve the availability of your applications. For
example, if you host your website on Amazon EC2, you can now leverage a
simple backup site hosted on Amazon S3. You can also run your primary
application simultaneously in multiple AWS regions around the world,
with Route 53 automatically removing from service any region where your
application is unavailable.
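The routing behavior described above can be summarized in a few lines. This is an illustrative sketch of failover selection only; the record fields here are hypothetical and not the Route 53 API:

```python
def pick_endpoint(records):
    """Serve the PRIMARY while its health check passes; otherwise fail
    over to the first healthy SECONDARY. Returns None when every
    endpoint is unhealthy."""
    for role in ("PRIMARY", "SECONDARY"):
        for record in records:
            if record["role"] == role and record["healthy"]:
                return record["value"]
    return None

records = [
    {"role": "PRIMARY", "value": "ec2-site.example.com", "healthy": False},
    {"role": "SECONDARY", "value": "s3-backup.example.com", "healthy": True},
]
print(pick_endpoint(records))  # s3-backup.example.com
```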
Getting started is easy, and there are no upfront costs. See the Route 53 product page for full details and pricing.
To get started with DNS Failover for Route 53, read Jeff Barr’s blog post, visit the Route 53 product page, or review our walkthrough in the Amazon Route 53 Developer Guide. |
| Feb 07, 2013 |
We are excited to announce that AWS CloudFormation now supports
tags for Amazon RDS and Amazon S3 resources. AWS CloudFormation makes it
easy for you to provision and configure a set of related resources.
Tagged resources allow you to view your AWS usage based on your business
dimensions (such as cost centers, application names, or owners). With
cost allocation reports, you can also track your AWS costs by tag.
CloudFormation automatically adds preset tags to your EC2, S3, and
RDS resources. These preset tags contain the stack name and ID.
Additionally, you can specify your custom tags at the stack level and
have them propagate to all resources. Alternatively, you can specify
tags for specific resources in your template, allowing you fine-grained
control over what resources get tagged.
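The layering described above (preset stack tags, then stack-level tags propagated to every resource, then per-resource tags) can be sketched as a dictionary merge. The merge order and any keys beyond the preset stack-name and stack-id tags are illustrative assumptions:

```python
def effective_tags(stack_name, stack_id, stack_tags, resource_tags):
    """Combine tags the way the announcement describes: CloudFormation's
    preset tags, stack-level tags applied to all resources, and
    resource-specific tags for fine-grained control."""
    tags = {
        "aws:cloudformation:stack-name": stack_name,
        "aws:cloudformation:stack-id": stack_id,
    }
    tags.update(stack_tags)     # propagated to every resource in the stack
    tags.update(resource_tags)  # per-resource tags from the template
    return tags

tags = effective_tags("prod-web", "stack-id-1234",
                      {"cost-center": "1234"}, {"owner": "data-team"})
print(sorted(tags))  # preset keys plus cost-center and owner
```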
To learn more about tagging resources in CloudFormation, visit the AWS CloudFormation User Guide. |
| Feb 04, 2013 |
We are pleased to announce a new feature that enables you to receive
email and SMS notifications when certain events occur with your Amazon RDS
DB Instances, DB Parameter Groups, DB Security Groups, or DB Snapshots.
For example, you can be notified when a restore from snapshot has been
completed or when your multi-AZ DB Instance has initiated a failover.
Currently over 40 events are supported and can be subscribed to via the
new DB Event Subscription object found in the Amazon RDS Console, CLI,
and API. Event notifications for Amazon RDS are available with all
RDS-supported database engines (i.e. MySQL, Oracle, and SQL Server). In
addition, email notifications are supported in all AWS regions while SMS
notifications are currently only supported in the US East region.
For a complete list of supported events, as well as to learn more about
DB Event Subscriptions, please refer to the events section of the Amazon RDS User Guide.
For more information about using Amazon RDS, please visit our detail page, our documentation and our FAQs.
|
| Jan 31, 2013 |
We have a trio of announcements today that will help you run your applications globally at a reduced cost.
1. Global Expansion of Second Generation Standard Instances
Last year, we announced Second Generation Standard (M3) instances. M3
instances have the same CPU-to-memory ratio as First Generation
Standard (M1) instances but provide more CPU capability, and the option
of an instance type with 8 virtual cores. In this initial launch, M3
instances were only available in the Northern Virginia region, but now
you can launch instances as On Demand, Reserved or Spot instances in the
Oregon, Northern California, Ireland, Singapore, Tokyo, Sydney and
GovCloud (US) regions as well. We will launch M3 instances in the São
Paulo region in the coming weeks. For more on M3 instances, please
visit the Amazon EC2 instance type page.
2. Price reduction for Amazon EC2
We are reducing Linux On Demand prices for First Generation Standard
(M1) instances, Second Generation Standard (M3) instances, High Memory
(M2) instances and High CPU (C1) instances in all regions. All prices
are effective from February 1, 2013. These reductions vary by instance
type and region, but typically average 10-20%. For complete
pricing details, please visit the Amazon EC2 pricing page.
3. Reduced Data Transfer Pricing
We are reducing prices for data transfer between AWS locations. Our new
lower pricing applies to data transfer between all 9 global AWS regions,
and from AWS regions to all global CloudFront edge locations.
Previously, we charged normal internet bandwidth prices for this data
transfer, but we are now lowering these charges significantly, allowing
you to move data between regions even more cost-effectively for serving
customers in local geographies, for disaster recovery, and for many
other use cases. The new prices are effective February 1, 2013, and you
don’t need to do anything to take advantage of these new prices. To
learn more, please visit the Amazon S3 pricing page. |
| Jan 30, 2013 |
We are pleased to announce that Amazon Simple Workflow (SWF) is
now available in seven additional AWS regions. In addition, you can also
now use AWS Identity and Access Management (IAM) to control permissions
to SWF domains and APIs.
With domain-level permissions, you can control access to individual
SWF domains you’ve defined, such as development, QA, and
production. For example, if you have an operations team that uses a
web-based app to interact with automated workflows, you can now control
which employees can view workflows and limit which employees may stop,
start, or interrupt them. With API permissions you can control access to
individual APIs within those domains. If your organization has
workflows that operate on sensitive information, you can enforce a
policy that makes sure only automated processes with the appropriate
permissions can execute these workflows.
With the addition of seven new regions, you can help reduce latency
and improve your applications’ performance by choosing to run Amazon SWF
workflows in the AWS region that is closest to your users and
processes. To learn more about all AWS offerings and locations, visit
the AWS Global Infrastructure page.
To learn more about SWF, visit the SWF page. To learn more about extended IAM functionality with SWF, please read Jeff Barr's blog for an overview and our Developer Guide for details. To learn more about IAM, visit the IAM page.
|
| Jan 28, 2013 |
Amazon Web Services is excited to announce the beta release of Amazon Elastic Transcoder,
a new AWS service that makes it easy to convert video between different
digital media formats in the cloud. Amazon Elastic Transcoder is
designed to transcode videos from popular formats that are stored in
Amazon S3 into versions that will work on devices like smartphones,
tablets and PCs. Amazon Elastic Transcoder provides you with an easy,
cost-effective way to get high-quality video on an increasing array of
devices without having to become a video expert or use expensive
software.
As with all Amazon Web Services, you pay only for what you use, and there are no up-front expenses or long-term commitments.
We built Amazon Elastic Transcoder to be:
- Easy-to-use – Amazon Elastic Transcoder is designed to be
very simple to use. You can easily get started by using the AWS
Management Console or the API. System transcoding presets make it easy
to get transcoding settings right the first time for popular devices and
formats.
- Low cost – Amazon Elastic Transcoder has easy-to-understand
pricing: you pay according to the duration of your video. Prices start
at just $0.015/minute for Standard Definition content, and $0.030/minute
for High Definition content with no minimums or monthly commitments.
Plus, you can use Amazon Elastic Transcoder as part of AWS' free usage
tier that lets you transcode up to 20 minutes of SD video or 10 minutes
of HD video a month free of charge. To see terms and additional
information, please visit the AWS Free Usage Tier page.
- Highly scalable – Amazon Elastic Transcoder scales to meet your growing and often unpredictable video transcoding requirements.
- Secure – You store your content in your own Amazon S3 buckets
so you only give us access to what you want to transcode. Amazon
Elastic Transcoder also makes use of AWS security best practices.
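The per-minute pricing above, together with the free tier, makes monthly costs easy to estimate. Here is a small calculator based on the listed rates; the free-tier accounting is simplified to a single resolution per month:

```python
def monthly_cost(minutes, hd=False):
    """Estimate a month's Elastic Transcoder bill at launch pricing:
    $0.015/min SD or $0.030/min HD, minus the free tier of 20 SD
    (or 10 HD) minutes per month."""
    rate = 0.030 if hd else 0.015
    free = 10 if hd else 20
    billable = max(0, minutes - free)
    return round(billable * rate, 3)

print(monthly_cost(120))          # 1.5  (100 billable SD minutes)
print(monthly_cost(10, hd=True))  # 0.0  (covered by the free tier)
```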
You can get started at aws.amazon.com/elastictranscoder with a few clicks in the AWS Management Console.
You can learn more by visiting the Amazon Elastic Transcoder detail page or joining our Introduction to Amazon Elastic Transcoder webinar on February 27, 2013 at 10:00 AM PST. |
| Jan 21, 2013 |
We are excited to announce the immediate availability of High
Memory Cluster Eight Extra Large (cr1.8xlarge) instances for Amazon EC2.
This new Amazon EC2 instance type is ideal for applications that
benefit from a high level of memory capacity, computational power and
network bandwidth. High Memory Cluster instances provide customers with
2 Intel Xeon E5-2670 8-core processors, 244 GiB of RAM, 240 GB of
SSD-based instance storage, and high bandwidth networking with support
for cluster placement groups. Customers can use these instances for a
variety of memory-intensive applications including in-memory databases
and analytics, graph and stream processing, engineering design, and
scientific computing.
High Memory Cluster instances are currently available in three
availability zones in the US East (N. Virginia) region. Support for
other regions is planned for the coming months. You can learn more about
the specifications and capabilities of High Memory Cluster instances for
Amazon EC2 by visiting the Amazon EC2 instance type page. Detailed pricing information is available on the EC2 pricing page. |
| Jan 15, 2013 |
In October, we launched Gateway-Cached volumes for AWS Storage
Gateway. This capability provides you with the ability to store your
application’s primary data in Amazon S3 while retaining frequently
accessed data on-premises in the form of a Gateway-Cached volume. We’re
excited to announce that AWS Storage Gateway for EC2 is now in the AWS Marketplace.
This gives you a cloud-hosted solution that can mirror your entire
production environment in case your on-premises infrastructure goes down
or if you choose to add additional on-demand compute capacity. You can
now make an EBS snapshot copy of a Gateway-Cached volume available to an
EC2 instance of your on-premises application using AWS Storage Gateway
in EC2. Using Amazon EC2, you can configure virtual machine images of
your application servers in AWS. When you have a DR scenario or you need
additional compute capacity, you can launch your application EC2
instances and an AWS Storage Gateway in EC2. You can then restore a
snapshot from your on-premises Gateway-Cached volume to a new volume for
your AWS Storage Gateway in EC2, connect your EC2 application instances
to your restored volume through iSCSI, and your DR or on-demand
environment is up and running. You only pay for these servers when you
need them, so you can have your DR or on-demand environment at the ready
without having to pay for capacity when it’s not in use.
Learn more and get started by visiting the AWS Storage Gateway User Guide. |
| Jan 14, 2013 |
We are pleased to announce a new feature that enables you to rename
an existing database instance. You can now change the name and endpoint
of your existing DB Instances via the AWS Management Console, Amazon RDS
API, or the Amazon RDS command line toolkit. The DB Instance renaming
feature is available with all RDS-supported database engines (i.e.
MySQL, Oracle, and SQL Server) and in all AWS regions.
You can use this feature for a number of use cases:
- Assume the name and endpoint of an existing DB Instance: Amazon RDS provides multiple options for data recovery, including Point in Time Recovery, Read Replica Promotion, and Restore from DB Snapshot.
With the ability to “Rename”, you can now have these “new” DB Instances
assume the identity of an existing DB Instance, thus allowing you to
avoid updating your applications with a new endpoint.
- Create a new name and endpoint for an existing DB Instance:
As your applications grow on Amazon RDS, the role of your DB Instances
may evolve. “Rename” functionality will allow you to keep the names of
your DB Instances in sync with the new roles your instances may take on.
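The first use case is essentially a two-step identity swap. The sequence below is an illustrative plan, not an RDS API call, and the instance names and suffix are hypothetical:

```python
def identity_swap_plan(live_name, restored_name, retired_suffix="-old"):
    """Order of renames that lets a restored DB Instance assume an
    existing instance's name and endpoint."""
    return [
        # 1. Move the live instance aside, freeing its name and endpoint.
        ("rename", live_name, live_name + retired_suffix),
        # 2. Give the restored instance the original identity, so
        #    applications keep using their existing endpoint.
        ("rename", restored_name, live_name),
    ]

plan = identity_swap_plan("prod-db", "prod-db-restored")
print(plan[1])  # ('rename', 'prod-db-restored', 'prod-db')
```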
Please refer to the DB Instance section of the Amazon RDS User Guide to learn more.
For more information about using Amazon RDS, please visit our detail page, our documentation and our FAQs. |
| Jan 09, 2013 |
We're pleased to announce support for 64KB payloads in SNS. Previously,
SNS notifications were capped at 32KB. Customers tell us larger
payloads will enable new use cases that were previously difficult to
accomplish. When subscribing SQS queues to SNS topics, customers can
now take advantage of the full 64KB payloads that SQS allows.
64KB SNS payloads are available today in all regions.
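When building publishers against the new limit, it can help to validate message size client-side before calling Publish. This sketch assumes the cap applies to the UTF-8 encoded message body:

```python
MAX_SNS_PAYLOAD = 64 * 1024  # 65,536 bytes, up from 32 KB

def fits_in_sns(message: str) -> bool:
    """Client-side check against the raised SNS payload cap."""
    return len(message.encode("utf-8")) <= MAX_SNS_PAYLOAD

print(fits_in_sns("x" * 65536))  # True
print(fits_in_sns("x" * 65537))  # False
```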
Getting started with Amazon SNS and Amazon SQS is easy with our free tiers of service. To learn more, visit the Amazon SNS page and the Amazon SQS page.
|
| Jan 08, 2013 |
We are pleased to announce that starting today you can use Amazon
CloudWatch alarms to detect and shut down unused Amazon EC2 instances
automatically. Whether you are an individual developer who uses an
Amazon EC2 instance for occasional projects, or an IT professional who
manages many Amazon EC2 instances for multiple developers, you can now
use Amazon CloudWatch to avoid accumulating unnecessary usage charges.
Stop or Terminate EC2 Instances That are Unused or Underutilized
Amazon CloudWatch collects monitoring data for your AWS resources and
applications. Amazon CloudWatch alarms help you react quickly to issues
by emailing a notification to you or executing automated tasks when
data values reach a threshold you set. Starting today, you can also set
alarms that automatically stop or terminate Amazon EC2 instances that
have gone unused or underutilized for too long. For example, a student
who wants to stay within the AWS Free Usage Tier can set an alarm that
automatically stops an instance once it has been left idle for an hour.
Or, if you are a corporate IT administrator, you can create a group of
alarms that first sends an email notification to developers whose
instances have been underutilized for 8 hours, then terminates an
instance and emails both of you if utilization doesn't improve after 24
hours.
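The policies in the example above boil down to threshold checks over consecutive monitoring periods. This is a simplified sketch of that evaluation; CloudWatch's real alarm evaluation is richer, and the threshold and period counts here are illustrative:

```python
def alarm_fires(cpu_percent_samples, threshold=10.0, periods=12):
    """Fire when the most recent `periods` datapoints (e.g. twelve
    5-minute CPU averages, one hour of data) are all below `threshold`."""
    recent = cpu_percent_samples[-periods:]
    return len(recent) == periods and all(s < threshold for s in recent)

# An instance idling near 2% CPU for a full hour trips the alarm:
print(alarm_fires([2.0] * 12))         # True
print(alarm_fires([2.0] * 11 + [50]))  # False
```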
Get Started Using Automated Shutdown Alarms Today
It's easy to set Amazon CloudWatch alarms that detect and shut down idle
Amazon EC2 instances. To get started, first visit Amazon EC2 in the AWS
Management Console, select an instance, and click the 'Create Alarm'
button in the Monitoring tab that appears in the lower panel. Then,
enter an email address to notify, choose 'Stop' or 'Terminate', set a
utilization threshold that suits your needs, and you're done. If the
threshold is ever reached, Amazon CloudWatch will shut down the instance
and email you a notification.
You can also set these alarms using the Amazon CloudWatch console, AWS
SDKs, Amazon CloudWatch API, and command-line interface. For more
information, visit Create Alarms That Stop or Terminate an Instance in the Amazon CloudWatch Developer Guide. |
| Jan 04, 2013 |
We are pleased to announce the AWS Management Console now supports
tablet devices. The Management Console lets you access and manage Amazon
Web Services through a simple and intuitive web-based user interface.
You can access the Management Console using your tablet’s web browser to
view some of the changes we’re making that put your information front
and center, such as full-screen wizards and tables, and a new monitoring
view. To learn more about our tablet changes, please see our What's New guide or visit the Management Console.
We are also pleased to announce the availability of the AWS Management
Console app for Android phones. You can use this companion mobile app to
quickly and easily view and manage your existing EC2 instances and
CloudWatch alarms from your phone. To learn more or download the app,
please visit our mobile detail page.
|
| Jan 02, 2013 |
We are pleased to announce the availability of the Amazon ElastiCache Cluster Client for PHP. This Client supports Auto Discovery, a novel way to connect to your Amazon ElastiCache
cluster. Auto Discovery enables automatic discovery of cache nodes by
clients when the nodes are added to or removed from an ElastiCache
cluster.
As before, Amazon ElastiCache remains protocol-compliant with
Memcached, a widely adopted memory object caching system, so code,
applications, and popular tools that you use today with existing
Memcached environments will continue to work seamlessly with Auto
Discovery.
To get started, you will need the Amazon ElastiCache Cluster Client,
which is now available for both Java and PHP and can be downloaded from
the Amazon ElastiCache Console. |