
Big Data & Associative Technology (the future)

Hello,

Imagine being able to do anything your mind can conceive. Now you can.

Sixth Normal Form (6NF) is not merely a term; it is a goal, the "Holy Grail," if you will, of data management and, more importantly, of information management. Imagine being able to store ideas rather than disconnected bits of data.

A brief comment on the (in my view, incorrect) Wikipedia definition: "A relvar R [table] is in sixth normal form (abbreviated 6NF) if and only if it satisfies no nontrivial join dependencies at all — where, as before, a join dependency is trivial if and only if at least one of the projections (possibly U_projections) involved is taken over the set of all attributes of the relvar [table] concerned." [Date et al.]

The true realization of 6NF is an object database in which each and every piece of information is atomic in nature and can be associated with any other piece of data. No restrictions, no constraints, no tables, no rows, no views, no cubes. The correct term is "associative database," or associative information system, since it is three-dimensional by default and technically N-dimensional.
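To make the associative idea concrete, here is a minimal, hypothetical sketch in Python (my own illustration, not the product described in this post) of a triple-style store in which every association is indexed in both directions, so any value can be reached from any other without tables or predefined schemas:

```python
from collections import defaultdict

class AssociativeStore:
    """Toy associative store: each (subject, predicate, object) link
    is indexed in both directions, so navigation works from either end."""

    def __init__(self):
        self.forward = defaultdict(set)   # (subject, predicate) -> objects
        self.backward = defaultdict(set)  # (object, predicate) -> subjects

    def associate(self, subject, predicate, obj):
        # Store the association once, indexed in both directions.
        self.forward[(subject, predicate)].add(obj)
        self.backward[(obj, predicate)].add(subject)

    def filter(self, predicate, value):
        # "Filtering" instead of querying: every subject linked
        # to `value` via `predicate`.
        return self.backward[(value, predicate)]

store = AssociativeStore()
store.associate("order-17", "customer", "Alice")
store.associate("order-42", "customer", "Alice")
store.associate("order-42", "status", "shipped")

print(sorted(store.filter("customer", "Alice")))  # ['order-17', 'order-42']
```

The data here is invented for illustration; the point is only that bidirectional indexing makes every association discoverable from every data point, which is the property the post describes.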

The advantages are:
100x the speed of SQL row/table stores
1/3 the disk space
1 exabyte capacity with single-instance storage (no piece of data is ever duplicated)
Security: un-hackable, as there is nothing to hack into
No queries; we use filtering instead
No tables, and thus no indexes to maintain
Automated data aggregation from as many sources as required

and that is just the start.. ;-)

Let me know if you are interested in seeing it. As a scientist, I think you would find it fascinating. Data warehouses have met their match.

Send me your external email and I will send you more information if you like. And yes, this is going to market as we speak.
JM
917-751-3131
JM
917-751-3131

I have posted a video that explains and demos a great deal. If you want more information, I can add you to my Dropbox.com shared folder, where further material and a video are available as well.


Comment by Yogeeshachar B K on December 23, 2017 at 8:59am

Hi Jean, 

Kindly share your dropbox folder. My email is [email protected]

Regards, 

Yogeesh

Comment by Jean Michel LeTennier on June 16, 2013 at 5:38am

Some Metrics:

Here are some calculations to set the stage:

A record with 50 columns of data represents 2,500 triples if you include both directions (which we do). Because every possible associative path is maintained, discovery of all associations is implicit from every data point.

 

We assimilate 1 million records of 50 columns of data in typically under 30 minutes (best case 10 minutes, average 20 minutes) on a 4-core, 4 GB laptop.

That is the equivalent of 1,000,000 × 2,500 triples, or 2.5 billion triples, in 30 minutes, worst-case performance.

2.5 billion triples in 1,800 seconds (30 minutes × 60 seconds per minute) is about 1.389 million triples per second. Because of the proprietary way we reference and store information as composite multi-dimensional information atoms, we are able to produce the functional equivalent of 2.5 billion triples in under 30 minutes, operating with a sustained throughput of 30,000 composite 'atomic' transactions per second on a laptop.
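The arithmetic above can be checked directly. A quick sketch, using only the figures quoted in this comment:

```python
columns = 50
triples_per_record = columns * columns        # 2,500 per the comment's own counting (both directions)
records = 1_000_000
total_triples = records * triples_per_record  # 2,500,000,000

seconds = 30 * 60                             # 30 minutes, worst case
rate = total_triples / seconds                # triples per second

print(triples_per_record)  # 2500
print(total_triples)       # 2500000000
print(round(rate))         # 1388889, i.e. ~1.389 million triples/sec
```

This reproduces the ~1.389 million triples per second figure; the 30,000 composite transactions per second number is a separate claim about the composite-atom encoding and does not follow from this calculation alone.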

Since we don't store the triples as triples, yet maintain the equivalent 'associative' capability that triples provide, we get a huge assimilation-performance benefit over triple stores, along with faster and more efficient retrieval and storage.

© 2019 Data Science Central ®