
In-place Computing Model for Big and Complex Data

Having seen how in-place and in-memory computing differ, today we are sharing more of the fundamentals of the in-place computing model. This model was designed for "Big and Complex Data": the challenge is not just size but, even more, complexity. Many analytic cases today incorporate multiple relations of data. For instance, to solve a data mining case for an online retailer, we may need to analyze both product attributes (categories, colors, materials, ...) and customer attributes (age, gender, region, ...), and more. Such in-depth analysis relies heavily on the relational model, which a relational database handles well; the data sizes we encounter today, however, may be too heavy for traditional database technology. A NoSQL database scales, but it does not fit the relational model.
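To make the multi-relation point concrete, here is a minimal sketch in Python (using pandas, with made-up example tables, not any vendor's data) of the kind of analysis described above: joining product and customer attributes through an orders relation and then aggregating across attributes from both sides.

    # Hypothetical relations for the online-retailer example.
    import pandas as pd

    products = pd.DataFrame({
        "product_id": [1, 2, 3],
        "category":   ["shoes", "shirts", "shoes"],
        "color":      ["red", "blue", "black"],
    })
    customers = pd.DataFrame({
        "customer_id": [10, 11],
        "age":         [34, 27],
        "region":      ["east", "west"],
    })
    orders = pd.DataFrame({
        "customer_id": [10, 10, 11],
        "product_id":  [1, 3, 2],
    })

    # Join the three relations, then ask a cross-attribute question,
    # e.g. which regions buy which product categories.
    joined = orders.merge(products, on="product_id").merge(customers, on="customer_id")
    print(joined.groupby(["region", "category"]).size())

At small scale a relational database does this with a couple of joins; the argument above is that at today's data sizes the joins themselves become the bottleneck.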

The In-place Computing Model aims to fill the gap between these two systems: it supports an extended relational model while maintaining both performance and scalability. The model is unconventional in two ways. First, it moves away from data retrieval to a data-centric model, in the sense that computations take place where the data resides. Second, it organizes data objects into macro data structures that work the way macromolecules do in living cells. Together, these two principles help organize big-data complexity and contribute to a substantial performance improvement: two to three orders of magnitude compared to existing in-memory databases.
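To illustrate the first principle, here is a conceptual sketch in Python (a toy model, not BigObject's actual API) contrasting the data-retrieval style with the data-centric, in-place style: instead of shipping rows to the caller, the caller ships a computation to the store, which runs it next to the data and returns only the result.

    class InPlaceStore:
        """Toy data store that can evaluate computations where data resides."""
        def __init__(self, rows):
            self._rows = rows  # imagine these rows living on a remote node

        def retrieve_all(self):
            # Retrieval model: every row crosses the boundary to the caller.
            return list(self._rows)

        def compute(self, fn):
            # In-place model: only the (small) result crosses the boundary.
            return fn(self._rows)

    store = InPlaceStore([{"price": p} for p in (5, 8, 13)])

    # Retrieval model: fetch everything, then aggregate on the client side.
    total_retrieval = sum(r["price"] for r in store.retrieve_all())

    # In-place model: send the aggregation to the data.
    total_in_place = store.compute(lambda rows: sum(r["price"] for r in rows))

    assert total_retrieval == total_in_place == 26

The design point is the direction of movement: in the retrieval model data moves to the computation, while in the in-place model the computation moves to the data, which matters more as the data outgrows what can cheaply be copied around.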

For more details and technical insights, please visit our document here. For a free trial or more information, go here.


Tags: Big, NewSQL, NoSQL, algebra, data, in-memory, in-place, relational


Comment by Yuanjen Chen on June 17, 2014 at 8:38pm

Hi Frank, 

Thanks for the interest. The in-place computing model is a disruptive computing model that moves away from the data-retrieval model, so it is not Hadoop-dependent. We do have a product, called BigObject, that serves as an embedded computing engine; it can be integrated as a software library to enable real-time analytic functions. Our main customers will be software vendors that want to add interactive analytic features to their products. Please refer to www.macrodatalab.com for more information.

Comment by Frank Quintana on June 9, 2014 at 11:32am

Very interesting!

Who are your customers? Do you supply analytics, or is an analytic tool needed?

Are you Hadoop or Cloudera certified? Who are your competitors and partners?
