**The BigObject® - A Computing Engine Designed for Big Data**

BigObject® presents an in-place* computing approach designed to tame the complexity of big data and support computation in real time. Its mission is to deliver affordable computing power, enabling enterprises of every scale to interpret big data. With the advances in what a commodity machine can do, what-if analysis over big data becomes practical, facilitating fact-based decision-making.

**A 1,000x acceleration is not only about time efficiency; it's about POSSIBILITY**

One of the key success factors in the era of big data is velocity: it determines how people deal with data and how comprehensive their interpretation can be. We are exposed to a variety of data sources and struggle to gain insight by connecting billions of records. That computing effort is wasted if it takes longer than people can wait; if we cannot make decisions at the most opportune time, the data itself may become obsolete.

“We could achieve far more if we could cut experiment times, say, by a factor of 100; then we could run 10 experiments per day! A single experiment now takes us 72 hours every time we change a parameter. You can imagine how genomics could leap forward if we accelerated proof of concept in DNA sequencing.”

A change in velocity changes how we behave. With a more feasible approach, decision makers can verify more what-if scenarios, leading to better predictions.

“Every Friday, when we all sit around the table to make sales forecasts, promotional strategies, and logistics plans, it is a headache. We have over 200,000 SKUs and nearly 8,000 stores in total. A slight change in the sales volume of a single product in a single store can change the conclusion entirely. We, the sales team, need to recalculate the overall amounts in each region; the product team needs to work out which promotion kits should go; and the supply chain team needs to re-route replenishment. Just a few sets of predictions take HOURS, driving everyone crazy. It would be far more pleasant if every adjustment took as long as a sip of coffee.”

This is exactly the pain we aim to relieve. Every data entry and computation becomes timelier, even real-time. People can run more trial-and-error simulations, aiding their analysis and yielding more accurate predictions and better profitability. You can then rely on real-time analytic models to support your decisions rather than on “the golden gut.”*
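As an illustrative sketch of why what-if adjustments can be near-instant (this is incremental aggregation, not BigObject®'s actual mechanism, and all names below are hypothetical): keep running totals per region and apply only the delta of each change, instead of recomputing every SKU-by-store combination from scratch.

```python
# Hypothetical sketch: incremental aggregation for what-if adjustments.
# A change to one (SKU, store) cell updates the regional total in O(1),
# rather than re-summing all 200,000 SKUs x 8,000 stores.
from collections import defaultdict

class SalesCube:
    def __init__(self):
        self.volume = {}                      # (sku, store) -> units
        self.region_total = defaultdict(int)  # region -> units

    def set_volume(self, sku, store, region, units):
        """Set a forecast value and apply only the delta to the region."""
        old = self.volume.get((sku, store), 0)
        self.volume[(sku, store)] = units
        self.region_total[region] += units - old  # delta, not full recompute

cube = SalesCube()
cube.set_volume("SKU-1", "store-A", "north", 120)
cube.set_volume("SKU-2", "store-B", "north", 80)
print(cube.region_total["north"])  # 200
cube.set_volume("SKU-1", "store-A", "north", 150)  # what-if adjustment
print(cube.region_total["north"])  # 230
```

Each adjustment touches one dictionary entry and one running total, which is why a “sip of coffee” latency is plausible even at this scale.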

**In-memory vs. in-place: driving a Porsche in an alley vs. driving a Tesla on the freeway!**

There are numerous in-memory data-discovery tools known for rapid calculations. In general, they exploit memory speed by holding the whole dataset in RAM to eliminate slow disk access. As data volume grows, however, users must either scale up with more memory or hit a bottleneck where performance degrades drastically; concurrent access by multiple users only makes this worse.

In-place technology, on the other hand, exploits the 64-bit address space, perceived as virtually infinite, to trade space for time. The major difference is that BigObject® sends the code to the data, avoiding the latency incurred when in-memory technologies move the data to the code. Furthermore, data are loaded into memory on demand without swapping or juggling; processing time grows linearly with data size, whereas traditional in-memory performance may decline sharply once the dataset outgrows RAM. With an in-place approach, there is no need to invest in additional hardware such as extra memory; a standard PC becomes a capable big-data analytic machine as long as it has a 64-bit processor. For the record, a laptop with 8 GB of memory can compute over hundreds of millions of data records within 5 seconds.
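The on-demand loading described above resembles memory-mapped I/O: a file is mapped into the 64-bit virtual address space, and the operating system pages data in only as it is touched, so resident memory stays bounded even for files larger than RAM. A minimal Python sketch of this idea, assuming a flat binary file of int64 records (an illustrative layout, not BigObject®'s actual format or API):

```python
# Sketch of the "in-place" idea via memory mapping: reserve address space
# for the whole file and let the kernel fault pages in on demand, instead
# of reading the entire dataset into RAM up front.
import mmap
import os
import struct

RECORD = struct.Struct("<q")  # one little-endian int64 per record (assumed layout)

def write_sample(path, values):
    """Write a flat binary file of int64 records."""
    with open(path, "wb") as f:
        for v in values:
            f.write(RECORD.pack(v))

def mapped_sum(path):
    """Sum all records without explicitly loading the file into memory.

    mmap maps the file into virtual address space; pages are brought in
    only as they are touched during the scan.
    """
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            total = 0
            for off in range(0, len(mm), RECORD.size):
                (v,) = RECORD.unpack_from(mm, off)
                total += v
            return total

if __name__ == "__main__":
    write_sample("records.bin", range(1000))
    print(mapped_sum("records.bin"))  # 499500
    os.remove("records.bin")
```

Because the mapping costs address space rather than physical memory, this is one concrete sense in which a 64-bit machine can “trade space for time.”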

Unlike other analytic tools, BigObject® does not rely on indexing to accelerate data discovery. Ad-hoc flexibility is therefore preserved, and instant computation can be executed on both reads and writes, a combination unseen in other analytic tools.

Looking at the history of civilization, technology has always evolved under the conditions of its moment. The assumption underlying the traditional, retrieval-heavy computing model (that resources are limited) faded away with the introduction of 64-bit architectures. The enlarged addressable space is changing the discipline of how we handle data and code, and in-place computing is the disruptive technology redefining this game. Nowadays we are overwhelmed by data; the challenge is to extract hidden clues quickly enough for timely decision-making. When speed leaps so far that answers arrive immediately, our mindset changes. We see great potential in exploring new territories of application in business as well as science, which the adoption of in-place computing models may open up, perhaps even triggering another industrial revolution.

The BigObject® package is free to download now. Please visit www.thebigobject.com for more information.

*In-place computing: an unconventional computing model introduced by BigObject®, in which computations take place where the data are stored.

*The golden gut: a term used by Thomas Davenport to describe how most people make decisions based on instinct and bold guessing.

© 2019 Data Science Central®
