Optimization

The methods used for the actual optimization

Bayesian

class F_UNCLE.Opt.Bayesian.Bayesian(simulations, models, opt_keys=None, name=u'Bayesian', *args, **kwargs)

A class for performing Bayesian inference on a model given data

Attributes

simulations (list): Each element is a tuple with the following elements:

  1. A simulation
  2. Experimental results

model (PhysicsModel): The model under consideration

sens_matrix (numpy.ndarray): The (n x m) sensitivity matrix

  • n: model degrees of freedom
  • m: total experiment DOF
  • [i, j]: sensitivity of model response i to experiment DOF j

Options

These can be set as keyword arguments in a call to self.__init__

Name          Type   Default  Min   Max   Units  Description
outer_atol    float  1E-6     0.0   1.0   -      Absolute tolerance on change in likelihood for outer loop convergence
outer_rtol    float  1E-4     0.0   1.0   -      Relative tolerance on change in likelihood for outer loop convergence
maxiter       int    6        1     100   -      Maximum iterations for convergence of the likelihood
constrain     bool   True     None  None  -      Flag to constrain the optimization
precondition  bool   True     None  None  -      Flag to scale the problem
debug         bool   False    None  None  -      Flag to print debug information
verb          bool   True     None  None  -      Flag to print stats during optimization

Note

The options outer_atol and prior_weight are deprecated and should be used for debugging purposes only
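A minimal usage sketch follows, assuming eos_model is an existing PhysicsModel instance and gun_sim / gun_exp are a matching simulation and experiment pair built elsewhere in F_UNCLE; all three names are hypothetical placeholders:

    # Hypothetical sketch: eos_model, gun_sim and gun_exp are assumed to be
    # valid F_UNCLE model / simulation / experiment objects created elsewhere
    from F_UNCLE.Opt.Bayesian import Bayesian

    analysis = Bayesian(
        simulations={'Gun': (gun_sim, gun_exp)},  # simulation, experiment pairs
        models={'eos': eos_model},                # models, keyed by name
        opt_keys=['eos'],                         # models to be optimized
        maxiter=6,                                # options passed as keyword arguments
        precondition=True,
        verb=True,
    )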

Methods

__call__(comm=None)

Determines the best candidate EOS function for the models

Parameters:comm (MPI.Intracomm) – An MPI communicator for computing sensitivities
Returns:length 4, elements are:
  1. (Bayesian): A copy of self with the optimal models
  2. (list): Solution history; elements are:
    1. (np.ndarray) Log likelihood, (n x 1) where n is the number of iterations
    2. (np.ndarray) Model DOF history, (n x m) where n is the number of iterations and m is the number of model DOF
  3. (dict) Sensitivity matrix of all experiments to the model; keys are simulation keys
  4. (dict) Fisher information matrix for all experiments; keys are simulation keys
Return type:(tuple)
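Continuing the hypothetical sketch above, the four returned elements could be unpacked as:

    # Run the outer optimization loop and unpack the four returned elements
    new_analysis, history, sens, fisher = analysis()

    log_like_hist, dof_hist = history   # per-iteration log likelihood and model DOF
    gun_sens = sens['Gun']              # sensitivity matrix, keyed by simulation
    gun_fisher = fisher['Gun']          # Fisher information, keyed by simulation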
__init__(simulations, models, opt_keys=None, name=u'Bayesian', *args, **kwargs)

Instantiates the Bayesian analysis

Parameters:
  • simulations (dict) – A dictionary of simulation, experiment pairs
  • models (dict) – A dictionary of models
  • opt_keys (list) – The list of models to be optimized
Keyword Arguments:
  • name (str) – Name for the analysis (Default 'Bayesian')

Returns:

None

_check_inputs(models, simulations)

Checks that the values for model and simulation are valid

_get_constraints()

Get the constraints on the model

_get_model_pq()

Gets the quadratic optimization matrix contributions from the prior

Parameters:None
Returns:elements are:
  1. (np.ndarray): P, an (n x n) matrix where n is the number of model DOF
  2. (np.ndarray): q, an (n x 1) vector where n is the number of model DOF
Return type:(tuple)
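The exact contribution depends on the model prior, but a minimal numpy sketch, assuming a Gaussian prior with diagonal covariance (hypothetical helper name, not the class's actual implementation):

    import numpy as np

    def model_pq_sketch(dof, prior_mean, prior_var):
        """Hypothetical sketch: (P, q) contribution of a Gaussian prior, so that
        minimizing 0.5 * d.T @ P @ d + q @ d over a step d in the model DOF
        maximizes the Gaussian log prior about the current DOF."""
        sigma_inv = np.diag(1.0 / np.asarray(prior_var))  # diagonal prior covariance assumed
        p_mat = sigma_inv
        q_vec = sigma_inv @ (np.asarray(dof) - np.asarray(prior_mean))
        return p_mat, q_vec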
_get_sens(initial_data=None, comm=None)

Gets the sensitivity of the simulated experiment to the EOS

The sensitivity matrix is the attribute self.sens_matrix which is set by this method

Note

This method is specific to the model type under consideration. This implementation is only for spline models of EOS

Parameters:None
Keyword Arguments:
 initial_data (list) – The results of each simulation with the current best model. Each element in the list corresponds to the output from a __call__ to the corresponding element in the self.simulations list
Returns:None
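A minimal finite-difference sketch of such a sensitivity matrix, assuming a hypothetical response_fn(dof) that returns the simulated response as a 1-D array aligned with the experimental data points:

    import numpy as np

    def sensitivity_sketch(response_fn, dof, frac_step=1e-2):
        """Hypothetical sketch: finite-difference sensitivity of a simulated
        response to each model degree of freedom."""
        dof = np.asarray(dof, dtype=float)
        base = response_fn(dof)
        sens = np.zeros((dof.size, base.size))  # (n model DOF) x (m experiment DOF)
        for i in range(dof.size):
            delta = frac_step * abs(dof[i]) if dof[i] != 0.0 else frac_step
            pert = dof.copy()
            pert[i] += delta
            sens[i, :] = (response_fn(pert) - base) / delta
        return sens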
_get_sim_pq(initial_data, sens_matrix)

Gets the QP contributions from the simulations

Note

This method is specific to the model type under consideration. This implementation is only for spline models of EOS

Parameters:
  • initial_data (list) – A list of the initial results from the simulations in the same order as in the sim list
  • sens_matrix (np.ndarray) – The sensitivity matrix
Returns:

Elements are:

  1. (np.ndarray): P, an (n x n) matrix where n is the number of model DOF
  2. (np.ndarray): q, an (n x 1) vector where n is the number of model DOF

Return type:

(tuple)
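A minimal numpy sketch of this contribution under a linearized Gaussian likelihood with diagonal experimental covariance (hypothetical helper name, not the class's actual implementation):

    import numpy as np

    def sim_pq_sketch(sens, residual, exp_var):
        """Hypothetical sketch: (P, q) contribution of one simulation/experiment
        pair from a linearized Gaussian likelihood.

        sens is the (n x m) sensitivity matrix (n model DOF, m experiment DOF),
        residual is the (m,) misfit between experiment and current simulation,
        and exp_var is the (m,) experimental variance."""
        sigma_inv = np.diag(1.0 / np.asarray(exp_var))
        p_mat = sens @ sigma_inv @ sens.T       # (n x n)
        q_vec = -sens @ sigma_inv @ residual    # (n,)
        return p_mat, q_vec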

_local_opt(initial_data, sens_matrix)

Solves the quadratic problem for maximization of the log likelihood

Parameters:
  • initial_data (list) – The initial data corresponding to simulations from sims
  • sens_matrix (np.ndarray) – The sensitivity matrix about the model
Returns:

The step direction for greatest improvement in log likelihood

Return type:

(np.ndarray)
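A minimal sketch of the unconstrained version of this step, assuming the (P, q) pairs come from helpers like those sketched above and ignoring the constrain option:

    import numpy as np

    def local_step_sketch(p_mats, q_vecs):
        """Hypothetical sketch: combine the prior and simulation (P, q)
        contributions and take the unconstrained step that maximizes the
        quadratic model of the log likelihood (constraints ignored)."""
        p_tot = sum(p_mats)
        q_tot = sum(q_vecs)
        # The minimizer of 0.5 * d.T @ P @ d + q @ d satisfies P @ d = -q
        return np.linalg.solve(p_tot, -q_tot)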

_on_str()

Print method for Bayesian model called by Struct.__str__

Parameters:None
Returns:String describing the Bayesian object
Return type:(str)
static fisher_decomposition(fisher_dct, simid, models, mkey, tol=0.001)

Calculates a spectral decomposition of the Fisher information matrix

Parameters:
  • fisher_dct (dict) – A dictionary of (n x n) arrays where n is the sum of all model DOF
  • simid (str) – Key for the simulation you want the Fisher info for
  • models (dict) – A dictionary of all models
  • mkey (str) – The key for the model on which the analysis is being performed
Keyword Arguments:
  • tol (float) – Eigenvalues less than tol are ignored

Returns:

Elements are:

  1. (list): Eigenvalues greater than tol

  2. (np.ndarray): (n x m) array

    • n is the number of eigenvalues greater than tol
    • m is the number of model DOF
  3. (np.ndarray): (n x m) array

    • n is the number of eigenvalues greater than tol
    • m is an arbitrary dimension of the independent variable
  4. (np.ndarray): Vector of the independent variable

Return type:

(tuple)
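A minimal numpy sketch of such a decomposition, using np.linalg.eigh and the tol cutoff (hypothetical helper name; items 3 and 4 of the documented return are model-specific and omitted here):

    import numpy as np

    def fisher_decomposition_sketch(fisher, tol=1e-3):
        """Hypothetical sketch: eigendecomposition of a (symmetric) Fisher
        information matrix, keeping eigenvalues above tol, largest first."""
        eig_vals, eig_vecs = np.linalg.eigh(fisher)
        order = np.argsort(eig_vals)[::-1]            # sort descending
        eig_vals, eig_vecs = eig_vals[order], eig_vecs[:, order]
        keep = eig_vals > tol
        return eig_vals[keep], eig_vecs[:, keep].T    # rows are eigenvectors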

get_data()

Generates a set of data at each experimental data-point

For each pair of simulations and experiments, generates the simulation response, aligning it with the experimental data

Parameters:None
Returns:List of lists for experiment comparison data
  1. independent value
  2. dependent value of interest
  3. the summary data (3rd element) of sim
Return type:(list)
get_hessian(initial_data=None, simid=None)

Gets the Hessian (matrix of second derivatives) of the simulated experiments to the EOS

\[\begin{split}H(f_i) = \begin{bmatrix} \frac{\partial^2 f_i}{\partial \mu_1^2} & \frac{\partial^2 f_i}{\partial \mu_1 \partial \mu_2} & \ldots & \frac{\partial^2 f_i}{\partial \mu_1 \partial \mu_n}\\ \frac{\partial^2 f_i}{\partial \mu_2 \partial \mu_1} & \frac{\partial^2 f_i}{\partial \mu_2^2} & \ldots & \frac{\partial^2 f_i}{\partial \mu_2 \partial \mu_n}\\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial^2 f_i}{\partial \mu_n \partial \mu_1} & \frac{\partial^2 f_i}{\partial \mu_n \partial \mu_2} & \ldots & \frac{\partial^2 f_i}{\partial \mu_n^2} \end{bmatrix}\end{split}\]
\[H(f) = (H(f_1), H(f_2), \ldots , H(f_n))\]

where

\[\begin{split}f \in \mathcal{R}^m \\ \mu \in \mathcal{R}^m\end{split}\]
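A minimal central-difference sketch for a single scalar response f_i, assuming a hypothetical response_fn(dof) that returns that scalar (the method itself assembles one such matrix per response):

    import numpy as np

    def hessian_sketch(response_fn, dof, step=1e-2):
        """Hypothetical sketch: central-difference Hessian of a single scalar
        response with respect to the model DOF."""
        dof = np.asarray(dof, dtype=float)
        n = dof.size
        hess = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                pp = dof.copy(); pp[i] += step; pp[j] += step
                pm = dof.copy(); pm[i] += step; pm[j] -= step
                mp = dof.copy(); mp[i] -= step; mp[j] += step
                mm = dof.copy(); mm[i] -= step; mm[j] -= step
                hess[i, j] = (response_fn(pp) - response_fn(pm)
                              - response_fn(mp) + response_fn(mm)) / (4.0 * step ** 2)
        return hess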
get_opt_dof()

Returns a vector of the model DOF for the optimization

model_log_like()

Gets the log likelihood of the self.model given that model’s prior

Returns:Log likelihood of the model
Return type:(float)
\[\log(p(f|y))_{model} = -\frac{1}{2}(f - \mu_f)^T\Sigma_f^{-1}(f-\mu_f)\]
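A direct numpy transcription of this quadratic form, assuming a diagonal prior covariance (hypothetical helper name):

    import numpy as np

    def model_log_like_sketch(dof, prior_mean, prior_var):
        """Hypothetical sketch: Gaussian log-likelihood of the model DOF under
        its prior, assuming a diagonal prior covariance."""
        resid = np.asarray(dof) - np.asarray(prior_mean)
        return -0.5 * resid @ (resid / np.asarray(prior_var))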
plot_convergence(hist, axes=None, linestyles=[u'-k'], labels=[])

Plots the convergence history of the optimization

Parameters:
  • hist (tuple) – Convergence history; elements are:
    1. (list): MAP history
    2. (list): DOF history

Keyword Arguments:
 
  • axes (plt.Axes) – The axes on which to plot the figure, if None, creates a new figure object on which to plot.
  • linestyles (list) – Strings for the linestyles
  • labels (list) – Strings for the labels
plot_fisher_data(fisher_data, axes=None, fig=None, linestyles=[], labels=[])

Plots the data produced by the fisher_decomposition function

Parameters:
  • fisher_data (tuple) – Data from the fisher_decomposition function; see its docstring for the definition

Keyword Arguments:
 
  • axes (plt.Axes) – Ignored
  • fig (plt.Figure) – A valid figure to plot on
  • linestyles (list) – A list of valid linestyles Ignored
  • labels (list) – A list of labels Ignored
shape()

Gets the dimensions of the problem

Returns:The (n, m) dimensions of the problem
  • n is the total degrees of freedom of all the model responses
  • m is the degrees of freedom of the model
Return type:(tuple)
sim_log_like(initial_data)

Gets the log likelihood of the simulations given the data

Parameters:
  • model (PhysicsModel) – The model under investigation
  • initial_data (list) – A list of the initial data for the simulations. Each element in the list is the output from a __call__ to the corresponding element in the self.simulations list
Returns:

Log likelihood of the simulation given the data

Return type:

(float)

\[\log(p(f|y))_{sim} = -\frac{1}{2} (y_k - \mu_k(f))^T\Sigma_k^{-1}(y_k-\mu_k(f))\]
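A direct numpy transcription of this misfit, summed over all simulation/experiment pairs and assuming diagonal experimental covariances (hypothetical helper name):

    import numpy as np

    def sim_log_like_sketch(exp_data, sim_data, exp_var):
        """Hypothetical sketch: total Gaussian log-likelihood of the simulated
        responses given the experimental data, summed over all simulation /
        experiment pairs; diagonal experimental covariances are assumed."""
        total = 0.0
        for y_k, mu_k, var_k in zip(exp_data, sim_data, exp_var):
            resid = np.asarray(y_k) - np.asarray(mu_k)
            total += -0.5 * resid @ (resid / np.asarray(var_k))
        return total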
update(simulations=None, models=None)

Updates the properties of the Bayesian analysis

Keyword Arguments:
  • simulations (dict) – Dictionary of simulation, experiment pairs (Default None)
  • models (dict) – Dictionary of models (Default None)
Returns:

None

CostOpt

class F_UNCLE.Opt.cost_opt.CostOpt(simulations, models, opt_keys=None, name=u'Bayesian', *args, **kwargs)

Code for experimenting with alternative optimization algorithms

This class is experimental. It requires importing pyOpt.

__call__()

Overloaded call function to optimize cost directly