
Explainable Artificial Intelligence (XAI)

This article was written by Mr. David Gunning
XAI is addressing the need for machine-learning systems able to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.

Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by the machines' current inability to explain their decisions and actions to human users. The Department of Defense is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.

The Explainable AI (XAI) program aims to create a suite of machine learning techniques that:

  • Produce more explainable models while maintaining a high level of learning performance (prediction accuracy), a trade-off illustrated in the sketch after this list; and
  • Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
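To make the first goal concrete, here is a minimal Python sketch of the explainability-versus-performance trade-off. It assumes scikit-learn is available and uses its bundled breast-cancer dataset as a stand-in task; the particular models and hyperparameters are illustrative choices, not part of the XAI program itself.

```python
# Minimal sketch of the explainability-vs-performance trade-off.
# Assumptions: scikit-learn is installed; the bundled breast-cancer data is a
# stand-in task; models and depths are illustrative, not part of the XAI program.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Interpretable model: a shallow tree whose rules can be read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Higher-capacity "black box": an ensemble whose individual votes are opaque.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("shallow tree accuracy :", accuracy_score(y_test, tree.predict(X_test)))
print("random forest accuracy:", accuracy_score(y_test, forest.predict(X_test)))

# The tree's complete rule set doubles as its explanation; the forest has no
# comparably compact readout.
print(export_text(tree, feature_names=list(data.feature_names)))
```

The shallow tree will typically give up some accuracy to the ensemble, but its entire rule set can be printed and audited, which is exactly the kind of design option the performance-versus-explainability trade space refers to.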

New machine-learning systems will be able to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that produce more explainable models. These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user. Our strategy is to pursue a variety of techniques to generate a portfolio of methods that will provide future developers with a range of design options covering the performance-versus-explainability trade space.
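As a rough sketch of what such an explanation dialogue might look like on top of a learned model (not the program's actual interface techniques), the snippet below assumes scikit-learn, fits a linear model to the same illustrative dataset, and turns a single prediction into a short plain-language rationale for the end user.

```python
# Sketch of an "explanation dialogue": render one prediction as plain language.
# Assumptions: scikit-learn is installed; the linear model, dataset, and wording
# are illustrative, not the program's actual interface techniques.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

def explain(instance, top_k=3):
    """Return a short, human-readable rationale for one prediction."""
    scaler = model.named_steps["standardscaler"]
    clf = model.named_steps["logisticregression"]
    z = scaler.transform(instance.reshape(1, -1))[0]
    pred = int(model.predict(instance.reshape(1, -1))[0])
    # Per-feature contribution to the decision score, signed so that positive
    # values always support the class that was actually predicted.
    contrib = clf.coef_[0] * z
    signed = contrib if pred == 1 else -contrib
    top = np.argsort(-np.abs(signed))[:top_k]
    reasons = ", ".join(
        f"{data.feature_names[i]} ({'supports' if signed[i] > 0 else 'weighs against'} this call)"
        for i in top
    )
    return f"Predicted '{data.target_names[pred]}' mainly because of: {reasons}."

print(explain(data.data[0]))
```

A linear model is used here only because its per-feature contributions are trivial to verbalize; the same interface idea could wrap any model whose internals can be summarized for the person making the decision.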

To read the full original article click here. For more artificial intelligence related articles on DSC click here.
