
Speed up your Machine Learning applications without changing your code

Emerging cloud applications like machine learning, AI and big data analytics require high-performance computing systems that can sustain the increased amount of data processing without consuming excessive power. Towards this end, many cloud operators have started adopting heterogeneous infrastructures, deploying hardware accelerators such as FPGAs to increase the performance of computationally intensive tasks. However, most hardware accelerators lack programming efficiency, as they are programmed using less widely adopted languages and tools like OpenCL, VHDL and HLS.

According to a survey from Databricks in 2016, 91% of data scientists care mostly about the performance of their applications and 76% care about the ease of programming. Therefore, the most efficient way for data scientists to utilize hardware accelerators like FPGAs to speed up their applications is through a library of IP cores that accelerates the most computationally intensive part of the algorithm, behind the same API the application already calls. Ideally, what most data scientists want is better performance, lower TCO and no need to change their code.


Tags: cloud, computing, learning, machine, ml, spark


Comment by Chris Kachris on February 24, 2019 at 10:27pm

The dataset is the larger MNIST8M in LIBSVM format (8 million samples, about 24 GB).

you can find the dataset in the following link:

https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass....

Comment by Thomas Loock on February 24, 2019 at 8:52am

The MNIST dataset is less than 20 Megabytes and not 24 GBytes.

MNIST Home

What dataset are you talking about?

