This is my first post here. I'm glad to introduce our newly launched big data analytics engine, BigObject. For the past two years we have been working on an optimal approach to handling big data for analytics, challenging existing models whose assumptions are no longer valid. For example, as data sizes grow so rapidly, is it still practical to stick to relational models while ignoring the time spent on data retrieval? How has the advent of 64-bit processors changed the way we treat data and code? The approach we adopt, in-place computing, is rooted in the idea that computation should take place where the data are stored. We believe that making data ready for the CPU to compute on, instead of moving it around, is the most efficient way.
Moreover, we manage another "V" factor that is usually ignored: valence, the degree of inter-dependency among data components. Data with high valence force shuffling among computing nodes and hence slow down computation. The in-place computing approach avoids this problem.
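To make the valence idea concrete, here is a minimal sketch (my own illustration, not BigObject code): after hash-partitioning records across nodes, the fraction of record dependencies that cross node boundaries is exactly the work that would have to be shuffled. The node count and the sample dependency list are made up for the example.

```python
# Hypothetical sketch: measure "valence" cost as the fraction of
# dependencies that land on different nodes after hash partitioning.
import hashlib

NODES = 4

def node_of(key: str) -> int:
    # Stable hash so the partitioning is reproducible across runs.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % NODES

# Each pair (a, b) is a dependency: computing on `a` needs data from `b`.
dependencies = [
    ("user1", "order7"), ("user1", "order9"),
    ("user2", "order7"), ("user3", "order3"),
]

# Every cross-node dependency is data that must be shuffled over the network.
cross_node = sum(1 for a, b in dependencies if node_of(a) != node_of(b))
print(f"{cross_node}/{len(dependencies)} dependencies cross nodes")
```

The higher that fraction, the more a distributed engine pays in network shuffling; keeping dependent data co-located with the computation sidesteps it.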
Another important point is that BigObject is dedicated to delivering affordable computing power. It fully utilizes the virtually unlimited address space of the 64-bit architecture and trades space for time. For the record, a laptop with 8 GB of memory can process 100 million records within 5 seconds.
A free trial is available for download now. Visit us at www.thebigobject.com for more details and send an inquiry for an access code!