
The implications of Huang’s law for the Artificial Intelligence stack

Nvidia CEO Jensen Huang proposed an idea that the media has labelled 'Huang's law', by analogy with Moore's law.

Moore's law predicts that the number of transistors in an integrated circuit doubles every two years.

As per Huang's law, GPU performance will double every two years.

Whether or not you subscribe to the idea of Huang's law, the concept itself has far-reaching implications for the AI stack, especially for edge devices.

Nvidia claims that innovation happens not only in the chip but across the entire AI stack.

This cross-stack approach means going beyond the cloud through:

  1. Training on Edge devices
  2. Distributed training of models (model parallelism)

The Nvidia EGX edge computing strategy implements a cloud-native approach through Kubernetes. Thus, you can implement a full CI/CD strategy at the edge, but with training on edge devices. Model parallelism, i.e. distributed training of models on the edge (pdf), is also part of this approach.
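To make the model-parallelism idea concrete: the layers of a network are partitioned across devices, and activations flow from one partition to the next during the forward pass. The sketch below is a minimal pure-Python illustration of that partitioning, not Nvidia's implementation; the two-stage split, the layer weights, and the `forward` function are all invented for the example (in practice each "device" would be a separate GPU or edge node coordinated by a framework such as `torch.distributed`).

```python
# Illustrative sketch of model parallelism: a two-layer network is split
# across two (simulated) devices, and activations pass between them.
# Weights and the split point are arbitrary, chosen only for the example.

def linear(weights, bias, x):
    """A minimal dense layer: y = Wx + b."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def relu(x):
    return [max(0.0, v) for v in x]

# "Device 0" holds the first layer; "device 1" holds the second.
device0 = {"W": [[1.0, -1.0], [0.5, 0.5]], "b": [0.0, 0.0]}
device1 = {"W": [[1.0, 1.0]], "b": [0.5]}

def forward(x):
    # Stage 1 runs on device 0 ...
    h = relu(linear(device0["W"], device0["b"], x))
    # ... then its activations are shipped to device 1, which runs stage 2.
    return linear(device1["W"], device1["b"], h)

print(forward([2.0, 1.0]))  # → [3.0]
```

The design point is that no single device ever holds the whole model, which is exactly what makes training feasible on memory-constrained edge hardware.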

Both training on edge devices and distributed training could have a profound impact on next-generation AI applications, such as those in healthcare or those using 5G.

To conclude, nomenclature aside, the strategy has profound implications for the future of AI.

Image source: Nvidia


© 2020 TechTarget, Inc.
