Find the optimal value for theta


In [1]:
% Set the options for the optimisation algorithm:
% 'GradObj' 'on' tells fminunc that our cost function also returns its gradient,
% and 'MaxIter' caps the search at 100 iterations.
options = optimset('GradObj', 'on', 'MaxIter', 100);
initialTheta = zeros(2,1)  % initial guess for theta (unsuppressed, so its value is echoed below)


initialTheta =

   0
   0
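
The call in cell [2] below assumes a costFunction on the Octave path. Because 'GradObj' is 'on', it must return both the cost and, as a second output, the gradient. The exact function is not shown in this notebook; a minimal sketch consistent with the result printed below (a minimum at theta = [5; 5] with a cost of essentially zero) would be:

function [jVal, gradient] = costFunction(theta)
  % Hypothetical example cost: J(theta) = (theta(1) - 5)^2 + (theta(2) - 5)^2
  jVal = (theta(1) - 5)^2 + (theta(2) - 5)^2;
  % Gradient of J with respect to each component of theta
  gradient = zeros(2, 1);
  gradient(1) = 2 * (theta(1) - 5);
  gradient(2) = 2 * (theta(2) - 5);
end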


In [2]:
% fminunc ("function minimisation, unconstrained") searches for the theta that
% minimises costFunction, returning the optimum, the cost at that point, and an exit flag.
[optTheta, functionVal, exitFlag] = fminunc(@costFunction, initialTheta, options)


optTheta =

   5.0000
   5.0000

functionVal =    1.5777e-30
exitFlag =  1
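
As a quick sanity check, evaluating the (hypothetical) costFunction sketched above at optTheta should reproduce functionVal and give a gradient of essentially zero:

[jVal, grad] = costFunction(optTheta)  % jVal should match functionVal; grad should be close to [0; 0]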

In [4]:
help fminunc


'fminunc' is a function from the file /usr/share/octave/4.2.1/m/optimization/fminunc.m

 -- fminunc (FCN, X0)
 -- fminunc (FCN, X0, OPTIONS)
 -- [X, FVAL, INFO, OUTPUT, GRAD, HESS] = fminunc (FCN, ...)
     Solve an unconstrained optimization problem defined by the function
     FCN.

     FCN should accept a vector (array) defining the unknown variables,
     and return the objective function value, optionally with gradient.
     'fminunc' attempts to determine a vector X such that 'FCN (X)' is a
     local minimum.

     X0 determines a starting guess.  The shape of X0 is preserved in
     all calls to FCN, but otherwise is treated as a column vector.

     OPTIONS is a structure specifying additional options.  Currently,
     'fminunc' recognizes these options: "FunValCheck", "OutputFcn",
     "TolX", "TolFun", "MaxIter", "MaxFunEvals", "GradObj",
     "FinDiffType", "TypicalX", "AutoScaling".

     If "GradObj" is "on", it specifies that FCN, when called with two
     output arguments, also returns the Jacobian matrix of partial first
     derivatives at the requested point.  'TolX' specifies the
     termination tolerance for the unknown variables X, while 'TolFun'
     is a tolerance for the objective function value FVAL.  The default
     is '1e-7' for both options.

     For a description of the other options, see 'optimset'.

     On return, X is the location of the minimum and FVAL contains the
     value of the objective function at X.

     INFO may be one of the following values:

     1
          Converged to a solution point.  Relative gradient error is
          less than specified by 'TolFun'.

     2
          Last relative step size was less than 'TolX'.

     3
          Last relative change in function value was less than 'TolFun'.

     0
          Iteration limit exceeded--either maximum number of algorithm
          iterations 'MaxIter' or maximum number of function evaluations
          'MaxFunEvals'.

     -1
          Algorithm terminated by 'OutputFcn'.

     -3
          The trust region radius became excessively small.

     Optionally, 'fminunc' can return a structure with convergence
     statistics (OUTPUT), the output gradient (GRAD) at the solution X,
     and approximate Hessian (HESS) at the solution X.

     Application Notes: If the objective function is a single nonlinear
     equation of one variable then using 'fminbnd' is usually a better
     choice.

     The algorithm used by 'fminunc' is a gradient search which depends
     on the objective function being differentiable.  If the function
     has discontinuities it may be better to use a derivative-free
     algorithm such as 'fminsearch'.

     See also: fminbnd, fminsearch, optimset.

Additional help for built-in functions and operators is
available in the online version of the manual.  Use the command
'doc <topic>' to search the manual index.

Help and information about Octave is also available on the WWW
at http://www.octave.org and via the help@octave.org
mailing list.
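
The help text above also documents a fuller call signature that returns convergence statistics, the gradient, and an approximate Hessian at the solution. A hedged sketch of that call, reusing the options and the hypothetical costFunction from earlier:

% Request the additional outputs documented above.
[x, fval, info, output, grad, hess] = fminunc(@costFunction, zeros(2, 1), options);
info    % 1 => converged: relative gradient error below TolFun
output  % structure with convergence statistics (e.g. iteration counts)
grad    % gradient at the solution, expected to be close to [0; 0]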
