ABSTRACT
Let f be a continuous nonlinear function on R^n. Such functions arise in many applications, and very often their minimizers are points at which f is not differentiable. Of particular interest is the case where the gradient and the Hessian cannot be computed at any point x. In this thesis, two methods for derivative-free optimization are presented: a quasi-Newton method that uses finite-difference representations of the gradient and Hessian, and a derivative-free trust-region method. We show that if f has a unique minimizer, then the algorithm, with the step lengths h it generates, converges globally. Three test problems are presented, and the effectiveness of the methods is demonstrated using the MATLAB (R2007b) software. Numerical results demonstrating the robustness of the algorithm are presented, and the results compare favourably with those of some existing algorithms.
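The finite-difference representation of the gradient referred to above can be sketched briefly. The Python code below is a minimal illustration only, assuming a central-difference scheme with a fixed step h; the thesis's own implementation is in MATLAB (R2007b), and the function name fd_gradient and the Rosenbrock test function used here are assumptions for illustration, not taken from the thesis.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Central finite-difference approximation of the gradient of f at x.

    Illustrative sketch only: each coordinate of the gradient is estimated
    as (f(x + h*e_i) - f(x - h*e_i)) / (2h) for the unit vector e_i.
    """
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# Example: at the minimizer (1, 1) of the Rosenbrock function the gradient is ~0.
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(fd_gradient(rosen, [1.0, 1.0]))
```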
CHAPTER ONE
INTRODUCTION
1.1 Background to the Study
This research is centered on optimizing a function of several variables whose derivatives are unavailable.
Each of us makes decisions in the course of our day-to-day activities in order to accomplish certain tasks. Usually there are several, perhaps many, possible ways of accomplishing these tasks, and some choices will generally be better than others. Consciously or unconsciously, we must therefore decide upon the best, or optimal, way to realize our objectives.
For example, all of us, at one time or another, find it necessary to drive through city traffic. We could attempt to find the shortest possible route from point A to point B without concern for the time required to traverse this route, or, alternatively, we could seek out the quickest, though not necessarily the shortest, route between A and B. As a compromise, we might attempt to find the shortest path from A to B subject to the auxiliary condition that the transit time does not exceed some prescribed value.
In a classical sense, optimization can be defined as the art of obtaining the best policies to satisfy certain objectives while at the same time satisfying fixed requirements. Recent advances in applied mathematics, operations research, and digital-computer technology now enable many complex industrial problems in engineering and economics to be optimized successfully by the application of logical and systematic techniques, and the development of new and increasingly powerful optimization techniques is proliferating rapidly.
Optimization plays an important role in many branches of science and technology, such as engineering, finance, probability and statistics. Many optimization algorithms have been developed to locate the optima of continuous objective functions. If a point x* corresponds to the maximum of a function f(x), the same point corresponds to the minimum of the function -f(x). Thus, without loss of generality, optimization can be taken to mean minimization.
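As a minimal numerical illustration of this equivalence (a sketch added here for concreteness, not part of the thesis), the maximizer of f and the minimizer of -f coincide:

```python
import numpy as np

# Illustrative function: f(x) = -(x - 2)^2 + 3 attains its maximum at x* = 2.
f = lambda x: -(x - 2.0)**2 + 3.0

xs = np.linspace(-5.0, 5.0, 10001)
print(xs[np.argmax(f(xs))])   # maximizer of f  -> approximately 2.0
print(xs[np.argmin(-f(xs))])  # minimizer of -f -> the same point, approximately 2.0
```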
There is no single method available for solving all optimization problems efficiently. Hence, a number of methods have been developed for solving different types of problems.
The origins of optimization can be traced back to Newton, Lagrange and Cauchy. The development of differential calculus methods for optimization was possible because of the contributions of Newton and Leibniz. The foundations of the calculus of variations were laid by Bernoulli, Euler, Lagrange and Weierstrass. Constrained optimization was first studied by Lagrange, and the method of steepest descent was introduced by Cauchy.
Despite these early contributions, very little progress was made until the 20th century, when computer power made the implementation of optimization procedures possible, and this in turn stimulated further research into new methods.