*By Rohan Kotwani.*

KernelML is a brute-force optimizer that can be used to train machine learning models. The package uses a combination of machine learning and Monte Carlo simulations to optimize a parameter vector with a user-defined loss function. KernelML doesn't try to compete with TensorFlow in computing the derivatives of non-linear *activation* functions. As far as I can tell from playing around with **tf.gradients**, the derivative in the example shown below only has a constant value for a known x. If anyone knows how to back-propagate these errors, I would be interested in learning how. KernelML also differs from PyTorch in a major way: it doesn't *really* model the distribution of each parameter. Instead, KernelML samples the parameter space of the loss function around a global or local minimum, and those samples can be used to form *weak confidence intervals*.
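The KernelML API itself is not shown in this excerpt, so here is a minimal sketch of the general idea: a Monte-Carlo-style random search that minimizes a user-defined loss over a parameter vector. The function name, update rule, and defaults below are my own illustration, not KernelML's.

```python
import numpy as np

def random_search(loss, n_params, n_iter=2000, scale=1.0, seed=0):
    """Minimize `loss` by sampling candidate parameter vectors
    around the best vector found so far (greedy Monte Carlo search)."""
    rng = np.random.default_rng(seed)
    best_w = rng.normal(size=n_params)
    best_loss = loss(best_w)
    for _ in range(n_iter):
        # Propose a candidate near the current best parameters.
        candidate = best_w + rng.normal(scale=scale, size=n_params)
        candidate_loss = loss(candidate)
        if candidate_loss < best_loss:
            best_w, best_loss = candidate, candidate_loss
    return best_w, best_loss

# Example: fit y = 2x + 1 with a user-defined squared-error loss.
x = np.linspace(-1, 1, 50)
y = 2 * x + 1
mse = lambda w: np.mean((w[0] * x + w[1] - y) ** 2)
w, final_loss = random_search(mse, n_params=2)
```

Because the search never needs a derivative, the loss can be any callable, which is the property the article leans on for non-differentiable or recursively dependent layers.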

The goal of this experiment was to find potential use cases where KernelML provides some benefit over existing packages such as TensorFlow. In this example, we will build an autoencoder that constructs latent variables from data. We can define the latent layer to be a non-linear system, which makes the partial derivative of the output with respect to the input parameters non-constant. We will fit an autoencoder to the Higgs boson training dataset's features while forcing a non-linear latent variable structure and constraining some of the parameters to be positive. A model will then be built with Keras to predict the (binary) target variable. The features of KernelML that make this possible are:

1. The parameters in each layer can be non-linear

2. Each parameter can be sampled from a different random distribution

3. The parameters can be transformed to meet certain constraints

4. Network combinations are defined in terms of matrix operations

5. Parameters are probabilistically updated

6. Each parameter update samples the loss function around a local or global minima
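Points 2, 3, and 5 above can be sketched concretely. KernelML's actual sampling and update rules are not shown in this article, so the distributions, the positivity transform, and the Metropolis-style accept rule below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Point 2: each parameter (group) drawn from its own distribution.
w_linear = rng.normal(loc=0.0, scale=1.0, size=10)   # unconstrained weights
w_scale  = rng.uniform(low=0.1, high=2.0, size=3)    # scale-like parameters

# Point 3: transform parameters to satisfy a positivity constraint.
w_positive = np.abs(rng.normal(size=3))  # |x| (or exp(x)) keeps values positive

# Point 5: probabilistic update -- accept a candidate with a probability
# that depends on how much it changes the loss (simplified Metropolis rule).
def accept(old_loss, new_loss, temperature=0.1):
    if new_loss < old_loss:
        return True
    return rng.random() < np.exp((old_loss - new_loss) / temperature)
```

The key point is that constraints live in the sampling and transformation step, not in the gradient computation, so a layer that would break back-propagation poses no special difficulty.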

An autoencoder is a neural network that models a representation of the input data. Say that we would like to find a representation for a dataset X. The autoencoder uses X as both the input and the output, but constrains the intermediate layers to have fewer "degrees of freedom" than the data's dimensions. For example, if X has 32 dimensions, the number of neurons in each intermediate layer will be less than 32. An autoencoder with non-linear activation layers is shown below. Just for fun, I made the first layer have the same form as Einstein's field equations.
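The bottleneck constraint can be sketched with plain NumPy. The weights here are untrained and the layer sizes are my own choice; the point is only to show the shapes: 32 input dimensions squeezed through 8 latent "degrees of freedom" and reconstructed.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features, n_latent = 100, 32, 8   # bottleneck: 8 < 32

X = rng.normal(size=(n_samples, n_features))

# Encoder and decoder weights (untrained; shapes illustrate the bottleneck).
w_enc = rng.normal(size=(n_features, n_latent))
w_dec = rng.normal(size=(n_latent, n_features))

latent = np.tanh(X @ w_enc)   # non-linear activation, 8 degrees of freedom
X_hat  = latent @ w_dec       # reconstruction back to 32 dimensions

mse = np.mean((X - X_hat) ** 2)   # loss the autoencoder would minimize
```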

This autoencoder is made up of two intermediate layers, where w1 and w0 are filters. The @ symbol represents a dot product in the equation above. After each filter is applied, the extra parameters are applied to the model. Note that the partial derivative of the second layer's output with respect to the input parameters includes the extra parameters, i.e., alpha1 and beta1. The non-linear parameters in the first layer cause the partial derivative to depend on other parameters in the same layer. This 'layer recursive dependency' does not cause any problems for KernelML. The model will minimize the mean squared error between the model output and the input data.
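The original equations appear in an image that is not reproduced here, so the functional form below is hypothetical. The sketch only shows the structure being described: extra parameters alpha1 and beta1 enter the non-linear first layer, so the derivative of the output with respect to w0 depends on alpha1, and the objective is the MSE between reconstruction and input.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))

# Filters w0 and w1, plus the extra first-layer parameters alpha1 and beta1.
w0 = rng.normal(size=(32, 8))
w1 = rng.normal(size=(8, 32))
alpha1, beta1 = 1.5, 0.2

# Hypothetical non-linear first layer: alpha1 scales the activation, so
# d(output)/d(w0) carries alpha1 -- the 'layer recursive dependency' above.
layer1 = alpha1 * np.tanh(X @ w0 + beta1)
output = layer1 @ w1

# Objective KernelML would minimize: MSE between reconstruction and input.
loss = np.mean((output - X) ** 2)
```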


© 2020 Data Science Central ®
