
(no description)

File Size: 304 lines (9 KB)
Included or required: 0 times
Referenced: 0 times
Includes or requires: 0 files

Defines 2 classes

ConjugateGradient:: (7 methods):
  runOptimization()
  gradient()
  cost()
  getAlpha()
  getNewTheta()
  getBeta()
  getNewDirection()

MP:: (8 methods):
  mul()
  div()
  add()
  sub()
  muls()
  divs()
  adds()
  subs()


Class: ConjugateGradient  - X-Ref

Conjugate Gradient method to solve a non-linear f(x) with respect to an unknown x
(see https://en.wikipedia.org/wiki/Nonlinear_conjugate_gradient_method).

The method applied below is explained in a practical manner in the following document:
- http://web.cs.iastate.edu/~cs577/handouts/conjugate-gradient.pdf

It nevertheless complies with the general Conjugate Gradient method using the
Fletcher-Reeves update. Note that f(x) is assumed to be one-dimensional, and a
single gradient is used for all dimensions of the given data.

runOptimization(array $samples, array $targets, Closure $gradientCb)   X-Ref
Entry point of the optimizer: iterates over the given samples and targets,
calling the supplied gradient callback, until the solution converges.
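
A minimal usage sketch (not from the library's documentation): fitting
y ≈ θ0 + θ1·x by least squares. The callback contract shown here, returning
the per-sample error and gradient for the current θ, and the constructor
argument are assumptions for illustration only.

    <?php
    // Hypothetical usage sketch; callback shape and constructor are assumed.
    $samples = [[1.0], [2.0], [3.0]];
    $targets = [2.1, 3.9, 6.2];

    // For each sample, return the residual and its gradient contribution.
    $gradientCb = function (array $theta, array $sample, $target): array {
        $prediction = $theta[0] + $theta[1] * $sample[0];
        $error      = $prediction - $target;   // residual for this sample
        $gradient   = $error * $sample[0];     // contribution to the gradient
        return [$error, $gradient];
    };

    $optimizer = new ConjugateGradient(1);     // one feature dimension (assumed)
    $theta = $optimizer->runOptimization($samples, $targets, $gradientCb);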

gradient(array $theta)   X-Ref
Executes the callback function for the problem and returns the
sum of the gradients over all samples and targets.
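
A sketch of the idea, under the same assumed callback contract as above: the
per-sample gradients are simply accumulated into one scalar, matching the note
that a single gradient serves all dimensions.

    <?php
    // Illustrative only: sum the per-sample gradients from the callback.
    function summedGradient(array $samples, array $targets, Closure $gradientCb, array $theta): float
    {
        $sum = 0.0;
        foreach ($samples as $i => $sample) {
            [, $gradient] = $gradientCb($theta, $sample, $targets[$i]);
            $sum += $gradient;   // one scalar gradient shared by all dimensions
        }
        return $sum;
    }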


cost(array $theta)   X-Ref
Returns the value of f(x) for the given solution.


getAlpha(array $d)   X-Ref
Calculates the alpha that minimizes the function f(θ + α·d)
by performing a line search that does not rely on derivatives.

There are several alternatives for this function. For now, we
prefer a method inspired by the bisection method for its simplicity.
The algorithm attempts to find an optimal alpha value between 0.0001 and 0.01,
as follows (a sketch follows the list):

a) Probe a small alpha  (0.0001) and calculate the cost function
b) Probe a larger alpha (0.01) and calculate the cost function
b-1) If the cost function decreases, continue enlarging alpha
b-2) If the cost function increases, take the midpoint and try again
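
A sketch of this probing scheme; the function name and the fixed probe count
are illustrative, not the library's code.

    <?php
    // Probe-and-bisect line search: enlarge alpha while the cost keeps
    // falling, bisect back once it rises, and return the best alpha found.
    function lineSearchAlpha(Closure $costAtAlpha): float
    {
        $low  = 0.0001;                       // step a): small probe
        $high = 0.01;                         // step b): larger probe
        $costLow  = $costAtAlpha($low);
        $costHigh = $costAtAlpha($high);

        for ($i = 0; $i < 20; ++$i) {         // bounded number of probes
            if ($costHigh < $costLow) {       // b-1): still improving
                [$low, $costLow] = [$high, $costHigh];
                $high *= 2.0;                 // continue enlarging alpha
            } else {                          // b-2): overshot the minimum
                $high = ($low + $high) / 2.0; // take the midpoint and retry
            }
            $costHigh = $costAtAlpha($high);
        }

        return $low;
    }

Here $costAtAlpha would evaluate cost(getNewTheta($alpha, $d)) for a
candidate alpha.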

getNewTheta(float $alpha, array $d)   X-Ref
Calculates the new set of solutions with the given alpha (for each θ(k)) and
gradient direction.

θ(k+1) = θ(k) + α·d
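
The update itself is a plain element-wise operation; a minimal sketch with
arrays (names are illustrative):

    <?php
    // Sketch: θ(k+1) = θ(k) + α·d, applied element by element.
    function newTheta(array $theta, float $alpha, array $d): array
    {
        $result = [];
        foreach ($theta as $i => $t) {
            $result[$i] = $t + $alpha * $d[$i];
        }
        return $result;
    }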

getBeta(array $newTheta)   X-Ref
Calculates the new beta (β) for the given set of solutions using the
Fletcher–Reeves method:

β = ||∇f(x(k+1))||²  ∕  ||∇f(x(k))||²

See:
R. Fletcher and C. M. Reeves, "Function minimization by conjugate gradients", The Computer Journal 7 (1964), 149–154.
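
A sketch of the ratio itself (names are illustrative): the squared norm of the
new gradient divided by the squared norm of the previous one.

    <?php
    // Fletcher–Reeves ratio: β = ||∇f(x(k+1))||² / ||∇f(x(k))||².
    function fletcherReevesBeta(array $newGradient, array $prevGradient): float
    {
        $num = 0.0;
        $den = 0.0;
        foreach ($newGradient as $i => $g) {
            $num += $g * $g;
            $den += $prevGradient[$i] * $prevGradient[$i];
        }
        return $den > 0.0 ? $num / $den : 0.0;   // guard against a zero denominator
    }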

getNewDirection(array $theta, float $beta, array $d)   X-Ref
Calculates the new conjugate direction

d(k+1) = −∇f(x(k+1)) + β(k)·d(k)
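
Again a one-loop sketch (illustrative names):

    <?php
    // Sketch: d(k+1) = −∇f(x(k+1)) + β(k)·d(k), element by element.
    function newDirection(array $gradient, float $beta, array $d): array
    {
        $result = [];
        foreach ($gradient as $i => $g) {
            $result[$i] = -$g + $beta * $d[$i];
        }
        return $result;
    }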

Class: MP  - X-Ref

Handles element-wise vector operations between two vectors and between a
vector and a scalar (a usage sketch follows the method listing).

mul(array $m1, array $m2)   X-Ref
Element-wise multiplication of two vectors of the same size.


div(array $m1, array $m2)   X-Ref
Element-wise division of two vectors of the same size.


add(array $m1, array $m2, int $mag = 1)   X-Ref
Element-wise addition of two vectors of the same size. The optional $mag
factor scales the second operand, so passing -1 turns the call into a subtraction.


sub(array $m1, array $m2)   X-Ref
Element-wise subtraction of two vectors of the same size.


muls(array $m1, float $m2)   X-Ref
Element-wise multiplication of a vector by a scalar.


divs(array $m1, float $m2)   X-Ref
Element-wise division of a vector by a scalar.


adds(array $m1, float $m2, int $mag = 1)   X-Ref
Element-wise addition of a scalar to a vector. The optional $mag factor
scales the scalar, so passing -1 turns the call into a subtraction.


subs(array $m1, float $m2)   X-Ref
Element-wise subtraction of a scalar from a vector.
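
A short usage sketch of how these helpers compose, assuming they are static
methods as the MP:: listing suggests; the input values are made up for
illustration.

    <?php
    $a = [1.0, 2.0, 3.0];
    $b = [4.0, 5.0, 6.0];

    $sum     = MP::add($a, $b);      // [5.0, 7.0, 9.0]
    $diff    = MP::sub($b, $a);      // [3.0, 3.0, 3.0]
    $product = MP::mul($a, $b);      // [4.0, 10.0, 18.0]
    $scaled  = MP::muls($a, 2.0);    // [2.0, 4.0, 6.0]
    $shifted = MP::adds($a, 1.0);    // [2.0, 3.0, 4.0]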