
Fixed Point Strategies

  • Fixed point strategies give a unifying framework to analyze, model, and solve a wide range of problems. 
  • The methods are easy to train/implement, yet can approximate infinite depth.
  • They can explain the behavior of advanced convex optimization methods.

A key shortcoming of deep learning models is that making them deeper, and thus more expressive, requires ever-increasing memory; on practical devices, memory limits model depth. Prior implicit-depth methods partially circumvented this memory limit, but they came with the trade-off of much higher computational costs for training. Fixed Point Networks (FPNs) present a third option: implicit depth without an increase in the computational cost of training. This approach uses computational resources more efficiently with respect to both memory and computation [2].
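The mechanism can be sketched in a few lines. Below is a minimal PyTorch-style illustration of the idea behind [2], not the authors' implementation; the layer name, the tanh update, and the iteration limits are assumptions made for this example. The fixed point is computed without recording a computation graph, and gradients flow through only one final application of the weights, so training memory does not grow with the number of iterations.

```python
import torch
import torch.nn as nn

class FixedPointLayer(nn.Module):
    """Implicit-depth layer: the output z* satisfies z* = tanh(W z* + U x)."""

    def __init__(self, dim, max_iter=50, tol=1e-4):
        super().__init__()
        # In practice W would be constrained (e.g., norm-bounded) so that
        # the update below is a contraction and the fixed point exists.
        self.W = nn.Linear(dim, dim, bias=False)
        self.U = nn.Linear(dim, dim)
        self.max_iter, self.tol = max_iter, tol

    def forward(self, x):
        z = torch.zeros_like(x)
        # Find the fixed point WITHOUT building a computation graph:
        # memory stays constant however many iterations are needed.
        with torch.no_grad():
            for _ in range(self.max_iter):
                z_next = torch.tanh(self.W(z) + self.U(x))
                if torch.norm(z_next - z) < self.tol:
                    z = z_next
                    break
                z = z_next
        # One extra differentiable step: backpropagation sees only this
        # single application of the weights.
        return torch.tanh(self.W(z) + self.U(x))
```

An explicit network of equivalent depth would have to store activations for every layer; here, a deeper effective network costs no extra training memory.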

Solutions to highly structured or large-scale optimization problems increasingly draw on game theory, neural networks, and an array of other advanced techniques that go far beyond the "usual" technique of minimizing functions. This wide range of increasingly complex data science problems, and their solutions, can be described by fixed point strategies: a unifying set of general principles and structures. Fixed point networks, in particular, are a relatively new class of deep learning models that can approximate infinite depth.

Over the last 60 years or so, the theory of fixed points has emerged as a powerful and important tool in the study of nonlinear phenomena. In particular, fixed point techniques have been applied in fields as diverse as biology, chemistry, physics, engineering, game theory, and economics.

In numerous cases an exact solution cannot be found, so appropriate algorithms must be developed to approximate the desired result. This is strongly related to control and optimization problems arising across the sciences and in engineering. Many situations in the study of nonlinear equations, the calculus of variations, partial differential equations, optimal control, and inverse problems can be formulated in terms of fixed point problems or optimization [3].
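A concrete instance of this correspondence: minimizing a smooth function f is the same as finding a fixed point of the operator T(x) = x − α∇f(x), since T(x*) = x* exactly when ∇f(x*) = 0. The NumPy sketch below iterates T on a small quadratic chosen purely for illustration (the matrix A, vector b, and step size are assumptions, not taken from the references).

```python
import numpy as np

# Illustrative objective: f(x) = 0.5 x^T A x - b^T x, with gradient A x - b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

def T(x, alpha=0.2):
    """Gradient step as a fixed point operator: T(x) = x iff grad f(x) = 0."""
    return x - alpha * (A @ x - b)

x = np.zeros(2)
for _ in range(1000):
    x_next = T(x)
    if np.linalg.norm(x_next - x) < 1e-10:  # numerically a fixed point
        break
    x = x_next

print(x)                      # fixed point of T
print(np.linalg.solve(A, b))  # minimizer of f: the same point
```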

FPNs
Fixed point strategies tackle the infinite depth of some problems by finding a limit. FPNs can modify standard models while guaranteeing convergence to the fixed point limit [2]. Standard neural networks use their weights to define a series of computations to perform (e.g., weights for matrix multiplications). Implicit models, FPNs in particular, instead use weights to define a limit condition: the network's output is the fixed point of a weight-parameterized update. Applying an FPN can therefore be viewed as running a small optimization scheme, as the sketch below illustrates.
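The contrast between the two uses of weights can be shown directly. In this NumPy sketch (the random weights and the tanh update are illustrative assumptions, with W scaled so the update is a contraction in z), an explicit network runs a fixed number of weight-defined steps, while an implicit model runs the same update until the limit condition z = g(z, x) holds to tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
x = rng.normal(size=dim)

W = rng.normal(size=(dim, dim))
W *= 0.9 / np.linalg.norm(W, 2)   # spectral norm < 1: g is a contraction in z
U = rng.normal(size=(dim, dim))

def g(z, x):
    """One weight-defined update step."""
    return np.tanh(W @ z + U @ x)

# Explicit model: depth is fixed in advance (here, 5 layers).
z_explicit = np.zeros(dim)
for _ in range(5):
    z_explicit = g(z_explicit, x)

# Implicit (FPN-style) model: iterate until the limit condition holds.
z = np.zeros(dim)
for _ in range(1000):
    z_next = g(z, x)
    if np.linalg.norm(z_next - z) < 1e-8:
        break
    z = z_next

print("explicit 5-layer output:  ", z_explicit[:3])
print("fixed point z* = g(z*, x):", z[:3])
```

The same weights W and U define both models; only the stopping rule differs.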

Advantages of Fixed Point Strategies

On the algorithmic front, fixed point strategies lead to powerful convergence principles that demystify the design and the asymptotic analysis of iterative methods. Furthermore, fixed point methods can be implemented with stochastic perturbations, as well as with block-coordinate or block-iterative strategies that reduce the computational load and memory requirements of each iteration [1].
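To illustrate the block-coordinate idea (a schematic NumPy sketch on a strongly convex quadratic; the problem data, block size, and step size are assumptions, not a method specified in [1]), each iteration applies the gradient-step operator T(x) = x − α(Ax − b) to a random block of coordinates only, so a step computes just a slice of the full update:

```python
import numpy as np

rng = np.random.default_rng(1)
n, block = 500, 50  # 500 variables, 50 updated per iteration

# Strongly convex quadratic f(x) = 0.5 x^T A x - b^T x (illustrative data).
G = rng.normal(size=(n, n)) / np.sqrt(n)
A = G.T @ G + np.eye(n)
b = rng.normal(size=n)
alpha = 1.0 / np.linalg.norm(A, 2)  # step size from the Lipschitz constant

x = np.zeros(n)
for _ in range(20000):
    idx = rng.choice(n, size=block, replace=False)
    # Apply the fixed point operator on one random block only: each step
    # computes `block` rows of the gradient instead of all n.
    x[idx] -= alpha * (A[idx] @ x - b[idx])

print(np.linalg.norm(x - np.linalg.solve(A, b)))  # close to 0
```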

We approached the problem of maximum-likelihood estimation of a DPP kernel from a different angle: we analyzed the stationarity properties of the cost function and used them to obtain a novel fixed-point Picard iteration. We compare the performance of our algorithm, referred to as the Picard iteration, against the EM algorithm presented in Gillenwater et al. (2014), experimenting on both synthetic and real-world data. The Picard iteration provides overall significantly shorter runtimes when dealing with large matrices and training sets; across a range of ground set sizes and numbers of samples, it runs remarkably faster than the previous best approach while being extremely simple to implement [1].
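The shape of such an update can be reconstructed from the stationarity description above, under simplified assumptions (the kernel is denoted L, observed subsets are index lists, and the identity initialization and iteration count are arbitrary; see the cited work for the exact algorithm and its convergence analysis). The gradient of the DPP log-likelihood equals n·Δ(L), where Δ(L) = (1/n) Σᵢ Uᵢ L_{Yᵢ}⁻¹ Uᵢᵀ − (L + I)⁻¹ and Uᵢ selects the rows and columns indexed by the observed subset Yᵢ; the stationarity condition Δ(L) = 0 is then rewritten as the fixed point equation L = L + L Δ(L) L and iterated.

```python
import numpy as np

def picard_step(L, samples):
    """One update L <- L + L @ Delta @ L, where Delta(L) = 0 is the
    stationarity condition of the DPP log-likelihood."""
    n_items, n = L.shape[0], len(samples)
    Delta = -np.linalg.inv(L + np.eye(n_items))
    for Y in samples:  # Y: index list of one observed subset
        Delta[np.ix_(Y, Y)] += np.linalg.inv(L[np.ix_(Y, Y)]) / n
    return L + L @ Delta @ L

# Toy usage: 4 items, 3 observed subsets, identity initialization.
L = np.eye(4)
samples = [[0, 1], [1, 2], [0, 3]]
for _ in range(100):
    L = picard_step(L, samples)
print(np.round(L, 3))  # Delta(L) is now approximately zero
```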

References:

[1] Fixed Point Strategies in Data Science

[2] Fixed Point Networks: Implicit Depth Models with Jacobian-Free Backprop

[3] https://fixedpointtheoryandapplications.springeropen.com/about