This page describes tools to perform local optimization of a generic function and non-linear fitting of experimental data to a model equation.
The goal of optimization is, given a system, to find the parameter values that yield the optimal performance. The performance measure is given by a known mathematical function f which depends on several independent variables. Thus, the problem consists of finding the parameter values that minimize f. If your problem is to maximize f, simply redefine the objective as -f and you obtain a minimization problem.
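As a minimal sketch of these two points, assuming SciPy is available (the quadratic objective and its minimum location are illustrative choices, not taken from this page):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective f(x, y) = (x - 1)^2 + (y + 2)^2, with minimum at (1, -2)
def f(p):
    x, y = p
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

# Local minimization starting from the origin
res = minimize(f, x0=[0.0, 0.0])

# To maximize a function g, minimize -g instead; here g = -f,
# so the maximization of g is the same problem as minimizing f
g = lambda p: -f(p)
res_max = minimize(lambda p: -g(p), x0=[0.0, 0.0])
```

Both runs should converge to the point (1, -2), the minimum of f and, equivalently, the maximum of g.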
We are interested in cases where the function f depends on many parameters, so an exhaustive search for the minimum is infeasible. It is then assumed that the only source of information available to the optimization algorithm is the evaluation of the objective function (also called the fitness function, or figure-of-merit) at a limited number of selected points.
Usually the topology of the parameter space (i.e., the variation of f as a function of the different variables) exhibits many minima, and a fundamental problem is to find the global one. This is a difficult task and is treated elsewhere. Here we are only concerned with finding the minimum closest to a given starting point in the parameter space. This problem is known as local optimization.
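The dependence on the starting point can be illustrated with a short sketch, assuming SciPy is available (the double-well function below is a made-up example with two local minima, near x = -1 and x = +1):

```python
from scipy.optimize import minimize

# Illustrative double-well function: two local minima, near x = -1 and x = +1
def f(x):
    return (x[0] ** 2 - 1.0) ** 2 + 0.2 * x[0]

# A local optimizer returns the minimum in whose "basin" the search starts
left  = minimize(f, x0=[-2.0])   # converges to the minimum near x = -1
right = minimize(f, x0=[+2.0])   # converges to the minimum near x = +1
```

Starting from x = -2 the optimizer finds the left minimum (which here happens to be the global one), while starting from x = +2 it stops at the right minimum; a purely local method has no way of knowing a better minimum exists elsewhere.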
A typical problem in many scientific areas is to fit a set of points (usually experimental data points) to a given model, consisting of a (most often non-linear) equation. This model depends on several "free" parameters, and the goal of the fitting routine is to find the values that minimize a given figure-of-merit (usually the "chi-square"; for a detailed discussion, see the Numerical Recipes book). We want to concentrate on practical problems and on how to apply "standard" software to them. For a more detailed discussion of local optimization methods see, for example, the following books:
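The fitting problem described above can be sketched with standard software, assuming NumPy and SciPy are available (the exponential-decay model, its parameter values, and the simulated noise level are all hypothetical choices for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model: exponential decay y = A * exp(-x / tau)
def model(x, A, tau):
    return A * np.exp(-x / tau)

# Simulate noisy "experimental" data from known parameters A = 2.5, tau = 1.3
rng = np.random.default_rng(0)
xdata = np.linspace(0.0, 5.0, 50)
ydata = model(xdata, 2.5, 1.3) + 0.05 * rng.standard_normal(xdata.size)

# curve_fit minimizes the chi-square: the sum of squared residuals,
# weighted by the measurement uncertainties passed through sigma
popt, pcov = curve_fit(model, xdata, ydata,
                       p0=[1.0, 1.0],                 # starting guess for (A, tau)
                       sigma=np.full(xdata.size, 0.05))
```

The returned popt holds the best-fit parameter values and pcov their covariance matrix, from which parameter uncertainties can be estimated.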
The following article also discusses modern local optimization codes:
You may also be interested in the Local Optimization Software page.
Our interest here is: