
DSC Webinar Series: Data Transformation and Acquisition Techniques to Handle Petabytes of Data 11.11.2014


Many organizations have become aware of the importance of big data technologies such as Apache Hadoop, but are struggling to determine the right architecture for integrating it with their existing analytics and data processing infrastructure. As companies implement Hadoop, they need to learn new skills and languages, which can impact developer productivity. Oftentimes they resort to hand-coded solutions, which can be brittle and can hurt both developer productivity and the efficiency of the Hadoop cluster.

To truly tap into the business benefits of big data solutions, it’s necessary to ensure that the business and IT have simple, tools-based methods to get data in, change and transform it, and keep it continuously synchronized with their data warehouse.

In this webinar you’ll learn how the Oracle and Hortonworks solution can:

  • Accelerate developer productivity
  • Optimize data transformation workloads on Hadoop
  • Lower the cost of data storage and processing
  • Minimize risk in deploying big data projects
  • Provide proven, industrial-scale tooling for data integration projects
We will also discuss how technologies from both Oracle and Hortonworks can be used to deploy a big data reservoir, or data lake: an efficient, cost-effective way to handle petabyte-scale data staging, transformations, and aged-data requirements while reclaiming compute power and storage from your existing data warehouse.

Speakers:
Jeff Pollock, Vice President, Oracle
Tim Hall, Vice President, Hortonworks

Hosted by:
Tim Matteson, Co-Founder, Data Science Central
