itk::AdaptiveStochasticGradientDescentOptimizer Class Reference

#include <itkAdaptiveStochasticGradientDescentOptimizer.h>

Inheritance diagram for itk::AdaptiveStochasticGradientDescentOptimizer.
Collaboration diagram for itk::AdaptiveStochasticGradientDescentOptimizer.

Public Types

typedef SmartPointer< const Self > ConstPointer
typedef Superclass::CostFunctionType CostFunctionType
typedef Superclass::DerivativeType DerivativeType
typedef Superclass::MeasureType MeasureType
typedef Superclass::ParametersType ParametersType
typedef SmartPointer< Self > Pointer
typedef Superclass::ScaledCostFunctionPointer ScaledCostFunctionPointer
typedef Superclass::ScaledCostFunctionType ScaledCostFunctionType
typedef Superclass::ScalesType ScalesType
typedef AdaptiveStochasticGradientDescentOptimizer Self
typedef Superclass::StopConditionType StopConditionType
typedef StandardGradientDescentOptimizer Superclass

Public Member Functions

virtual const char * GetClassName () const
virtual double GetSigmoidMax () const
virtual double GetSigmoidMin () const
virtual double GetSigmoidScale () const
virtual bool GetUseAdaptiveStepSizes () const
virtual void SetSigmoidMax (double _arg)
virtual void SetSigmoidMin (double _arg)
virtual void SetSigmoidScale (double _arg)
virtual void SetUseAdaptiveStepSizes (bool _arg)

Static Public Member Functions

static Pointer New ()

Protected Member Functions

 AdaptiveStochasticGradientDescentOptimizer ()
virtual void UpdateCurrentTime (void)
virtual ~AdaptiveStochasticGradientDescentOptimizer ()

Protected Attributes

DerivativeType m_PreviousGradient

Private Member Functions

 AdaptiveStochasticGradientDescentOptimizer (const Self &)
void operator= (const Self &)

Private Attributes

double m_SigmoidMax
double m_SigmoidMin
double m_SigmoidScale
bool m_UseAdaptiveStepSizes

Detailed Description

This class implements a gradient descent optimizer with adaptive gain.

If $C(x)$ is a cost function that has to be minimised, the following iterative algorithm is used to find the optimal parameters $x$:

\[ x(k+1) = x(k) - a(t_k) \frac{dC}{dx} \]

The gain $a(t_k)$ at each iteration $k$ is defined by:

\[ a(t_k) = \frac{a}{(A + t_k + 1)^\alpha} \]

And the time $t_k$ is updated according to:

\[ t_{k+1} = \left[ t_k + \mathrm{sigmoid}\left( -g_k^T g_{k-1} \right) \right]^+ \]

where $g_k$ equals $dC/dx$ at iteration $k$. For $t_0$ the InitialTime is used, which is defined in the superclass (StandardGradientDescentOptimizer). Whereas in the superclass this parameter is superfluous, in this class it plays a meaningful role.
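
The scheme above can be summarised in a short, self-contained C++ sketch. This is an illustration of the formulas, not the elastix implementation; in particular, the exact shape of the sigmoid (here scaled to range from SigmoidMin to SigmoidMax, with SigmoidScale controlling its width) is an assumption based on the parameter documentation below.

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Sigmoid ranging from sigMin to sigMax; sigScale controls the width.
// The exact shape used by elastix may differ: this is an assumption.
double Sigmoid(double x, double sigMin, double sigMax, double sigScale)
{
  return sigMin + (sigMax - sigMin) / (1.0 + std::exp(-x / sigScale));
}

// One run of the adaptive gain scheme described above.
void AdaptiveGradientDescentSketch(
  std::vector<double> & x,                                      // parameters
  std::vector<double> (*gradient)(const std::vector<double> &), // dC/dx
  double a, double A, double alpha,                             // gain settings
  double sigMin, double sigMax, double sigScale,
  double initialTime, unsigned int numberOfIterations)
{
  double time = initialTime;                           // t_0 = InitialTime
  std::vector<double> previousGradient(x.size(), 0.0); // g_{-1} = 0

  for (unsigned int k = 0; k < numberOfIterations; ++k)
  {
    const std::vector<double> g = gradient(x);

    // Gain: a(t_k) = a / (A + t_k + 1)^alpha
    const double gain = a / std::pow(A + time + 1.0, alpha);

    // Parameter update: x(k+1) = x(k) - a(t_k) dC/dx
    for (std::size_t i = 0; i < x.size(); ++i)
    {
      x[i] -= gain * g[i];
    }

    // Time update: t_{k+1} = [ t_k + sigmoid(-g_k^T g_{k-1}) ]^+
    double inner = 0.0;
    for (std::size_t i = 0; i < g.size(); ++i)
    {
      inner += g[i] * previousGradient[i];
    }
    time = std::max(0.0, time + Sigmoid(-inner, sigMin, sigMax, sigScale));
    previousGradient = g;
  }
}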

This method is described in the following references:

[1] P. Cruz, "Almost sure convergence and asymptotical normality of a generalization of Kesten's stochastic approximation algorithm for multidimensional case." Technical Report, 2005. http://hdl.handle.net/2052/74

[2] S. Klein, J.P.W. Pluim, M. Staring, and M.A. Viergever, "Adaptive stochastic gradient descent optimisation for image registration," International Journal of Computer Vision, vol. 81, no. 3, pp. 227-239, 2009. http://dx.doi.org/10.1007/s11263-008-0168-y

This optimizer is very suitable for use in combination with a stochastic estimate of the gradient $dC/dx$. For example, in image registration problems it is often advantageous to compute the metric derivative ($dC/dx$) on a new set of randomly selected image samples in each iteration. You may set the parameter NewSamplesEveryIteration to "true" to achieve this effect. For more information on this strategy, see the classes listed under See also below.
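
As an illustration, a minimal usage sketch that sets the parameters documented on this page. The cost function and the superclass gain settings are omitted; this is not taken from the elastix sources.

#include "itkAdaptiveStochasticGradientDescentOptimizer.h"

int main()
{
  typedef itk::AdaptiveStochasticGradientDescentOptimizer OptimizerType;
  OptimizerType::Pointer optimizer = OptimizerType::New();

  // Enable the adaptive step size mechanism (default: true).
  optimizer->SetUseAdaptiveStepSizes(true);

  // Sigmoid settings; the values below are the documented defaults.
  optimizer->SetSigmoidMax(1.0);    // should be > 0
  optimizer->SetSigmoidMin(-0.8);   // should be < 0
  optimizer->SetSigmoidScale(1e-8); // should be > 0

  // A scaled cost function and the superclass gain parameters (a, A,
  // alpha) would be set here before starting the optimization.
  return 0;
}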

See also:
AdaptiveStochasticGradientDescent, StandardGradientDescentOptimizer

Definition at line 68 of file itkAdaptiveStochasticGradientDescentOptimizer.h.


Member Typedef Documentation

Typedefs inherited from the superclass.

Reimplemented from itk::StandardGradientDescentOptimizer.

Definition at line 85 of file itkAdaptiveStochasticGradientDescentOptimizer.h.


Member Function Documentation

virtual const char* itk::AdaptiveStochasticGradientDescentOptimizer::GetClassName ( ) const [virtual]

Run-time type information (and related methods).

Reimplemented from itk::StandardGradientDescentOptimizer.

Reimplemented in elastix::AdaptiveStochasticGradientDescent< TElastix >.

static Pointer itk::AdaptiveStochasticGradientDescentOptimizer::New ( ) [static]

Method for creation through the object factory.

Reimplemented from itk::StandardGradientDescentOptimizer.

Reimplemented in elastix::AdaptiveStochasticGradientDescent< TElastix >.

void itk::AdaptiveStochasticGradientDescentOptimizer::operator= ( const Self & ) [private]

virtual void itk::AdaptiveStochasticGradientDescentOptimizer::SetSigmoidMax ( double _arg ) [virtual]

Set/Get the maximum of the sigmoid. Should be >0. Default: 1.0

virtual void itk::AdaptiveStochasticGradientDescentOptimizer::SetSigmoidMin ( double _arg ) [virtual]

Set/Get the minimum of the sigmoid. Should be <0. Default: -0.8

virtual void itk::AdaptiveStochasticGradientDescentOptimizer::SetSigmoidScale ( double _arg ) [virtual]

Set/Get the scaling of the sigmoid width. Larger values give a wider sigmoid. Should be >0. Default: 1e-8

virtual void itk::AdaptiveStochasticGradientDescentOptimizer::SetUseAdaptiveStepSizes ( bool _arg ) [virtual]

Set/Get whether the adaptive step size mechanism is desired. Default: true

virtual void itk::AdaptiveStochasticGradientDescentOptimizer::UpdateCurrentTime ( void ) [protected, virtual]

Function to update the current time. If UseAdaptiveStepSizes is false, this function simply increments the CurrentTime by $E_0 = (\mathrm{sigmoid}_{\max} + \mathrm{sigmoid}_{\min})/2$. Otherwise, the CurrentTime is updated according to:

\[ time = \max\left( 0, time + \mathrm{sigmoid}( -gradient \cdot previousgradient ) \right) \]

In that case, m_PreviousGradient is also updated.

Reimplemented from itk::StandardGradientDescentOptimizer.
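
As an illustration of the behaviour just described, the following self-contained sketch reproduces the documented time update. The struct layout, names, and the exact sigmoid shape are assumptions made for the example; this is not the elastix source.

#include <algorithm>
#include <cmath>

// State assumed by the sketch; names mirror the attributes documented on
// this page, but the layout itself is an assumption for illustration.
struct TimeState
{
  bool useAdaptiveStepSizes;
  double sigmoidMax;
  double sigmoidMin;
  double sigmoidScale;
  double currentTime;
};

// gradientInnerProduct stands for g_k^T g_{k-1}.
void UpdateCurrentTimeSketch(TimeState & s, double gradientInnerProduct)
{
  if (!s.useAdaptiveStepSizes)
  {
    // Constant increment E_0 = (sigmoid_max + sigmoid_min) / 2.
    s.currentTime += (s.sigmoidMax + s.sigmoidMin) / 2.0;
  }
  else
  {
    // sigmoid(-g_k^T g_{k-1}), scaled to [sigmoidMin, sigmoidMax];
    // the exact sigmoid shape is an assumption.
    const double x = -gradientInnerProduct;
    const double sig = s.sigmoidMin + (s.sigmoidMax - s.sigmoidMin) /
      (1.0 + std::exp(-x / s.sigmoidScale));

    // time = max(0, time + sigmoid(...)): clamp at zero.
    s.currentTime = std::max(0.0, s.currentTime + sig);
  }
}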


Field Documentation

DerivativeType itk::AdaptiveStochasticGradientDescentOptimizer::m_PreviousGradient [protected]

The PreviousGradient, necessary for the CruzAcceleration.

Definition at line 131 of file itkAdaptiveStochasticGradientDescentOptimizer.h.

Settings (m_SigmoidMax, m_SigmoidMin, m_SigmoidScale, m_UseAdaptiveStepSizes).

Definition at line 139 of file itkAdaptiveStochasticGradientDescentOptimizer.h.


