Automatic Adjoint Differentiation (AAD) and back-propagation are key technologies in modern machine learning and finance. It is back-prop that enables deep neural networks to learn to identify faces in photographs in reasonable time. It is AAD that allows financial institutions to compute the risks of complex derivatives books in real time. The two technologies share common roots.
See the AAD book here: https://www.amazon.com/Modern-Computa...
This workshop, given at King's College London on 28-29 March 2019 and recorded in full here, explains back-prop and AAD in depth: it demystifies them in words, mathematics and C++ code, investigates their similarities and differences, and gives viewers the thorough understanding necessary to implement these technologies successfully in their own projects.
All the material, including slides, C++ code, Excel add-ins, TensorFlow notebooks and more, is freely available on GitHub: https://github.com/asavine/CompFinance
Below is the video presentation:
About the Author
Antoine Savine is a French mathematician, academic and leading practitioner in financial derivatives. He was Global Head of Derivatives Research at BNP for more than ten years before moving to Danske Bank in Copenhagen. An expert C++ programmer, he is one of the key contributors to Danske Bank's xVA system, which won Risk magazine's In-House System of the Year award in 2015. His current interests lie in combining deep learning with financial modeling to unify derivatives risk management with CVA/XVA, FRTB, CCR, MVA and other capital calculations, and to resolve the related numerical and computational bottlenecks.