Amazon Data Pipeline promises to ease management of data stored in multiple locations
- 30 November, 2012 00:13
Amazon Web Services today launched Data Pipeline, a new tool designed to make it easier for users to integrate data stored in multiple, disparate locations so it can be managed and analyzed.
In addition to announcing Data Pipeline, AWS unveiled two new instance types that could be ideal for processing big data and running analytics. Both announcements came on the second and final day of AWS's first-ever user conference in Las Vegas, named AWS re:Invent.
MORE RE:INVENT: 5 things to watch for at Amazon's first user conference
The launch follows other data-related news from AWS yesterday, when the company announced Redshift, a cloud-based data warehousing service. Data Pipeline is meant to take data stored in Redshift or in AWS's other storage services, such as DynamoDB, the company's NoSQL database, or its Simple Storage Service (S3), and manipulate that data for easier management and exposure to analysis tools.
Data Pipeline has a drag-and-drop graphical interface that lets users manipulate and glean insights from data stored either in AWS's cloud or on their own premises. During a demonstration, for example, officials showed how a DynamoDB database can be configured to automatically replicate information into S3 or into a business intelligence tool. "This is really meant to be a lightweight web service to integrate disparate data sets," says Matt Wood, AWS's big data guru.
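Behind the graphical interface, a pipeline like the one demonstrated is described by a JSON definition of data nodes and activities. The sketch below is illustrative only; the table name, bucket path, and schedule are hypothetical placeholders, and the object types shown are assumptions about the service's definition format rather than details taken from the demo:

```json
{
  "objects": [
    {
      "id": "DailySchedule",
      "type": "Schedule",
      "period": "1 day",
      "startDateTime": "2012-12-01T00:00:00"
    },
    {
      "id": "SourceTable",
      "type": "DynamoDBDataNode",
      "tableName": "my-dynamodb-table",
      "schedule": { "ref": "DailySchedule" }
    },
    {
      "id": "BackupLocation",
      "type": "S3DataNode",
      "directoryPath": "s3://my-bucket/dynamodb-backup/",
      "schedule": { "ref": "DailySchedule" }
    },
    {
      "id": "CopyTableToS3",
      "type": "CopyActivity",
      "input": { "ref": "SourceTable" },
      "output": { "ref": "BackupLocation" },
      "schedule": { "ref": "DailySchedule" }
    }
  ]
}
```

The idea is that each data source and destination is declared as a node, and an activity ties them together on a recurring schedule, which is what allows the DynamoDB-to-S3 replication to run automatically.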
The service rounds out AWS's storage and business intelligence options. Earlier this year AWS launched Glacier, a long-term storage service. At its user conference this week AWS announced that its S3 service now holds more than 1 trillion objects, and Redshift was the highlight of the first day of the conference. AWS has also recently released a Big Data section of its AWS Marketplace, a collection of business intelligence applications optimized to run in AWS's cloud.
In addition to the Data Pipeline news, AWS announced two new instance types for its Elastic Compute Cloud (EC2) service, aimed specifically at helping users process large amounts of data. The cluster high-memory instance type comes with 240GB of RAM and two 120GB solid-state drives. Amazon.com CTO Werner Vogels says these instances are ideal for large-scale in-memory database analytics. The second is a high-storage option, hs1.8xlarge, which comes with 117GB of RAM and 48TB of disk space. That news follows new instance types the company launched just a few weeks ago, also aimed at high-performance computing workloads.
Network World staff writer Brandon Butler covers cloud computing and social collaboration. He can be reached at BButler@nww.com and found on Twitter at @BButlerNWW.