The problem this blockchain project aims to solve is allowing a number of people to create a knowledge product in a decentralized way, without any single person dictating who should contribute and who should not. Whether a participant's contribution to the knowledge product is accepted or rejected is decided by a consensus of all participating members. The system should allow the contribution of each participant to the final product to be assessed and scored transparently. It should also allow anonymous or semi-anonymous contribution.

A knowledge product is a product that aims at answering a specific policy question or exploring an issue in depth to better understand it. For example, a knowledge product can be built around the question: how can we fight desertification in a given country?

Our assumption is that the knowledge product can be mapped as a set of concepts and relations between these concepts. We can therefore consider that the final product is a graph with a certain number of nodes and a certain number of relations between them. We can see nodes as concepts and links as relations between concepts. For example, when we say "deforestation causes desertification", deforestation is a node, desertification is a node, and "causes" is a link between the two nodes. There can be a link between two nodes, a link between two links, or a link between a node and a link. There is a fixed set of categories of links and nodes that can be used to build the knowledge product. These categories are proposed by the participants and start being used once a minimum level of consensus on their relevance has been reached in the network.
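The graph model above can be sketched in a few lines of Python. This is a minimal illustration, not part of the project: all class and field names (`Node`, `Link`, `category`, `text`) are assumptions, and the example reuses the "deforestation causes desertification" piece from the text.

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical sketch of the knowledge graph described above.
# Names and fields are illustrative assumptions.

@dataclass
class Node:
    category: str   # e.g. "phenomenon"
    text: str       # e.g. "deforestation"

@dataclass
class Link:
    category: str                   # e.g. "causes"
    source: Union["Node", "Link"]   # a link may connect nodes or other links
    target: Union["Node", "Link"]

# "deforestation causes desertification"
deforestation = Node("phenomenon", "deforestation")
desertification = Node("phenomenon", "desertification")
causes = Link("causes", deforestation, desertification)
```

Because `source` and `target` accept either a `Node` or a `Link`, the sketch also covers links between two links and between a link and a node.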

A participant can contribute to building the knowledge product in one of these ways:

• Propose a new category of node, which is equivalent to a new group of concepts. It will start being used in the next block if it receives a minimum score of relevance and originality.

• Propose a new node of an existing category with attached text, which is equivalent to proposing to include in the knowledge product a concept that is useful for understanding the topic. It will immediately start being used. However, it will receive relevance and originality scores from participants that will be aggregated at the end of the process.

• Propose a new category of link, which is equivalent to a new group of relations between concepts. It will start being used in the next block if it receives a minimum score of relevance and originality.

• Propose a link of an existing category with attached text. The link is between two nodes, two links, or a link and a node. For example, if "deforestation" and "desertification" are existing concepts in the knowledge product, a participant can propose the link "deforestation causes desertification". It is a new knowledge piece that expresses a relation between two concepts.

• Propose a coefficient of relevance for any number of links or nodes, as his vote on the question: how relevant is the concept or relation to the topic?

• Propose a coefficient of similarity between two nodes or two links, giving his opinion on whether a new knowledge piece duplicates a knowledge piece already in the document.

• Propose a coefficient of originality for a node or link proposed by someone else: to what extent does it add to the knowledge document?
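The seven contribution types above could be encoded as transaction variants. The following sketch is an assumption about the encoding, not the project's actual wire format; the enum names and payload fields are illustrative.

```python
from enum import Enum, auto
from dataclasses import dataclass
from typing import Optional

# Illustrative encoding of the seven contribution types listed above.
# All names and payload fields are assumptions for this sketch.

class TxType(Enum):
    NEW_NODE_CATEGORY = auto()
    NEW_NODE = auto()
    NEW_LINK_CATEGORY = auto()
    NEW_LINK = auto()
    RELEVANCE_SCORE = auto()
    SIMILARITY_SCORE = auto()
    ORIGINALITY_SCORE = auto()

@dataclass
class Transaction:
    tx_type: TxType
    author: str                          # participant id (may be a pseudonym)
    payload: dict                        # category name, node text, target ids, ...
    coefficient: Optional[float] = None  # used by the three scoring types

# A relevance vote on an existing link:
tx = Transaction(TxType.RELEVANCE_SCORE, "alice",
                 {"target": "link:deforestation-causes-desertification"},
                 coefficient=0.9)
```

Using a tagged union like this keeps validation simple: a miner can check that scoring transactions carry a coefficient and that proposal transactions carry the required payload fields.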

The final knowledge product will be built based on the relevance, originality and similarity of the nodes.

All participants receive an initial endowment of coins. Any contribution reduces their coins, as a way to limit their participation and leave room for others to participate. A participant can modify a coefficient he has already given to a node or link (originality, similarity, relevance); in that case he loses only a fraction of what he spent when he initially gave the coefficient.

A participant can also transfer coins to another participant. We consider the operations described above to be the transactions. Each transaction adds knowledge to the product and reduces the endowment of its originator.
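The coin mechanics can be sketched as follows. The concrete cost of a contribution and the discount for modifying an existing coefficient (here 25% of the original cost) are assumed values; the text only states that a modification costs "a fraction" of the initial spend.

```python
# Sketch of the coin mechanics described above. The cost of 1.0 coin
# per score and the 25% modification fraction are assumptions.

class Wallet:
    def __init__(self, endowment: float):
        self.balance = endowment
        self._scored = set()   # (target_id, kind) pairs already scored

    def pay_for_score(self, target_id: str, kind: str, cost: float = 1.0,
                      modify_fraction: float = 0.25) -> None:
        key = (target_id, kind)
        # First score pays full cost; modifying it later pays a fraction.
        fee = cost * modify_fraction if key in self._scored else cost
        if fee > self.balance:
            raise ValueError("insufficient coins")
        self.balance -= fee
        self._scored.add(key)

w = Wallet(endowment=10.0)
w.pay_for_score("node:deforestation", "relevance")   # first score: costs 1.0
w.pay_for_score("node:deforestation", "relevance")   # modification: costs 0.25
print(w.balance)   # 8.75
```

The shrinking balance is what rations participation: once a participant's endowment is spent, he can only act again if another participant transfers him coins.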

After a certain number of transactions, a miner has to create a block and add it to the chain. The miner will only include transactions that are genuine and that have received enough consensus.

Once a consensus has been reached, the contribution of each participant is computed based on the relevance, the originality, and the similarity to other concepts of all the nodes and links in the final graph. The more original knowledge pieces a person has contributed, the higher the score he gets. He gets a lower score if a knowledge piece is similar to one already included, or if it is not relevant to the question.
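One plausible way to combine the three coefficients into a score is sketched below. The product form is an illustrative assumption: the text only requires that the score rise with relevance and originality and fall with similarity, and any aggregation with those monotonicity properties would fit.

```python
# Assumes each accepted knowledge piece carries averaged coefficients
# relevance, originality, similarity in [0, 1]. The product form is an
# illustrative choice, not the project's specified formula.

def piece_score(relevance: float, originality: float, similarity: float) -> float:
    return relevance * originality * (1.0 - similarity)

def participant_score(pieces) -> float:
    """Sum piece scores over all pieces a participant authored."""
    return sum(piece_score(r, o, s) for r, o, s in pieces)

# A relevant, original, non-duplicated piece scores high; a piece that
# duplicates existing content (similarity = 1) contributes nothing.
print(round(piece_score(0.9, 0.8, 0.1), 3))   # 0.648
```

Note that a fully duplicated piece (similarity 1) scores zero regardless of its relevance, which matches the intent that near-duplicates add little to the final product.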

This is how it works: a topic is defined, and those allowed to participate are given an initial endowment of coins. Then they start proposing transactions. At some point a miner takes a maximal set of compatible transactions and creates a block. The block freezes the situation, and it is against this frozen situation that the originality of new contributions is assessed. The miner is given a score for helping to secure the blockchain. We can use proof of stake to secure the blocks.
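The chaining of blocks described above can be sketched with standard hashing. The field names and JSON serialization are assumptions; the consensus checks and the proof-of-stake selection of the miner are out of scope for this sketch.

```python
import hashlib
import json

# Minimal sketch of the block structure implied above. Field names and
# the JSON serialization are assumptions for illustration only.

def make_block(prev_hash: str, transactions: list, miner: str) -> dict:
    body = {"prev_hash": prev_hash, "transactions": transactions, "miner": miner}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

genesis = make_block("0" * 64, [], miner="validator-1")
block1 = make_block(genesis["hash"],
                    [{"type": "NEW_NODE", "text": "deforestation"}],
                    miner="validator-2")
assert block1["prev_hash"] == genesis["hash"]   # chain linkage
```

Because each block's hash covers the previous block's hash, the "frozen situation" a block records cannot be altered later without invalidating every subsequent block.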

© 2019 Data Science Central
