Join us November 11th at 9am PT for the latest webinar in DSC's Webinar Series: Data Transformation and Acquisition Techniques to Handle Petabytes of Data
Sponsored by: Hortonworks and Oracle
Space is limited. Register here
Many organizations have become aware of the importance of big data technologies such as Apache Hadoop, but are struggling to determine the right architecture to integrate them with their existing analytics and data processing infrastructure. As companies implement Hadoop, they need to learn new skills and languages, which can impact developer productivity. Often they resort to hand-coded solutions, which can be brittle and reduce both developer productivity and the efficiency of the Hadoop cluster.
To truly tap into the business benefits of big data solutions, it's necessary to ensure that business and IT have simple, tools-based methods to ingest data, transform it, and keep it continuously synchronized with their data warehouse.
In this webinar you'll learn what the combined Oracle and Hortonworks solution can do.
We will also discuss how technologies from both Oracle and Hortonworks can be used to deploy a big data reservoir, or data lake: an efficient, cost-effective way to handle petabyte-scale data staging, transformations, and aged-data requirements while reclaiming compute power and storage from your existing data warehouse.
Jeff Pollock, Vice President, Oracle
Tim Hall, Vice President, Hortonworks
Tim Matteson, Co-Founder, Data Science Central
Title: Data Transformation and Acquisition Techniques to Handle Petabytes of Data
Date: Tuesday, November 11, 2014
Time: 9:00 AM - 10:00 AM PST
Again, space is limited, so please register early:
Reserve your Webinar seat now
After registering you will receive a confirmation email containing information about joining the Webinar.