We have been using tables in relational databases, mostly for transactional purposes, and that has proved effective. Given the data size and analytic purposes involved, however, the data structure may need to be redesigned for better efficiency.
To determine how to decompose the complexity of big data, we have observed how organisms function. In the physical world, the universe is organized into a hierarchy of information levels. Biologists employ macromolecules as the basic functional units to describe living cells and their behaviors, even though each macromolecule is composed of hundreds of millions of molecules. Defining a living cell at the molecular or atomic level would be extremely hard due to its complexity.
The complexity of the physical world is naturally broken down by organizing smaller components into a meaningful form (or system), accompanied by a set of meaningful functions or behaviors. For example, DNA, one type of macromolecule, can store genetic information and perform replication, transformation, and mutation. In other words, DNA serves both as an information storage unit and as a basic functional unit. So, rather than dealing with hundreds of millions of molecules, biologists deal with only a limited number of macromolecules when analyzing living cells. The complexity is thus reduced to an extent that people can easily comprehend. Similar reduction phenomena can be seen in other sciences such as astrophysics, macromolecular chemistry, and economics.
This observation inspires us to ask the second question for big data computing.
"Is it possible to determine the basic functional units (macromolecules) and behaviors of big data in order to break down the complexity?"
BigObject deploys the "macro data structure" (MDS) as its basic functional unit: the MDS is to big data what macromolecules are to living cells, allowing data to be computed and stored at a macro level. For more details, please refer to the technical whitepaper or download the free trial.
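To make the analogy concrete, the sketch below illustrates the general idea of a macro-level unit: folding many row-level records into a small aggregate structure that can answer analytic queries directly, without rescanning the rows. This is an illustrative example only; the names (`build_macro_unit`, `query_total`) and the structure are hypothetical and do not reflect BigObject's actual MDS implementation.

```python
from collections import defaultdict

# Hypothetical row-level transactions: (product, region, amount).
transactions = [
    ("apple", "east", 10.0),
    ("apple", "west", 5.0),
    ("banana", "east", 7.5),
    ("apple", "east", 2.5),
]

def build_macro_unit(rows):
    """Fold row-level records into a macro-level unit: per-(product, region)
    totals and counts, analogous to grouping molecules into a macromolecule."""
    macro = defaultdict(lambda: {"total": 0.0, "count": 0})
    for product, region, amount in rows:
        cell = macro[(product, region)]
        cell["total"] += amount
        cell["count"] += 1
    return dict(macro)

def query_total(macro, product):
    """Answer an analytic query from the macro unit alone,
    without touching the original row-level records."""
    return sum(cell["total"] for (p, _), cell in macro.items() if p == product)

macro = build_macro_unit(transactions)
print(query_total(macro, "apple"))  # 17.5
```

The point of the sketch is the shift in granularity: once the macro unit is built, analytic work operates on a handful of aggregate cells rather than on every underlying row, mirroring how biologists reason about macromolecules instead of individual molecules.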