- Method resolution order:
- Polynomial
- polynomial_opt.Polynomial
- polynomial_base.Polynomial
- builtins.object
Methods defined here:
- __init__(self, *args, **kwargs)
- Create a new multivariate polynomial object for optimisation.
Call:
p = Polynomial(A, b)
p = Polynomial(s)
p = Polynomial(shape, variables, degree, terms[, inner])
p = Polynomial(nr)
Input:
There are different possible inputs:
---
A - (n x t)-matrix or list of lists, representing the exponents
b - array-like of length t
---
s - string, which represents the polynomial, variables as 'x0' or 'x(0)'
---
shape - string, describes Newton polytope, can be 'simplex'/'standard_simplex'/'general'
variables - int, maximal number of variables
degree - int, maximal degree
terms - int, number of terms
inner [optional, default 0] - minimal number of interior points
---
nr - number, the rowid of the entry in the database
---
Additional keywords
seed [default None] - seed for the random number generator
dirty [default True] - flag, whether the input is in an unclean state
USE ONLY IF YOU KNOW WHAT YOU ARE DOING.
matlab_instance [default newly created] - bridge to matlab, to avoid starting multiple instances
orthant [default (0,...,0)] - restriction to some orthants, one entry for each variable
0 - unknown sign
1/-1 - positive/negative half space
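The (A, b) input convention above can be illustrated with a small hypothetical helper (not part of the library): column j of A holds the exponent vector of term j, and b[j] its coefficient.

```python
# Illustration of the (A, b) representation: p(x) = sum_j b[j] * prod_i x[i]**A[i][j]

def evaluate(A, b, x):
    """Evaluate the polynomial given by exponent matrix A and coefficients b at x."""
    n = len(A)          # number of variables (rows of A)
    t = len(b)          # number of terms (columns of A)
    total = 0.0
    for j in range(t):
        term = b[j]
        for i in range(n):
            term *= x[i] ** A[i][j]
        total += term
    return total

# p(x0, x1) = 1 + 2*x0**2 + 3*x0*x1
A = [[0, 2, 1],
     [0, 0, 1]]
b = [1, 2, 3]
print(evaluate(A, b, [1.0, 1.0]))  # 1 + 2 + 3 = 6.0
```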
- forked_bound(self, strategy='sonc', parallel=True, solver='ECOS', **kwargs)
- Give an improved bound by computing a bound for each minimal orthant.
Call:
bound = p.forked_bound([strategy, parallel])
Input:
strategy [optional, string, default 'sonc']: which method to use for the lower bounds
- 'sonc': see self.sonc_opt_python(), fastest
- 'sage': see self.sage_opt_python()
- 'all': see self.run_all(), best result
parallel [optional, boolean, default True]: flag, whether to run the computations in parallel
Output:
bound: float, the smallest bound found over the minimal orthants
- traverse(self, strategy='min', depth=None, max_size=None, reltol=1.1920928955078125e-07, abstol=1.1920928955078125e-07, call_sos=True, verbose=False, sparse=False)
- Improve the lower bound of the polynomial by branching over the signs of the variables.
Call:
p.traverse([strategy, depth, max_size, reltol, abstol])
Input:
strategy [string, default 'min']: strategy in which order to traverse the tree
- 'min': choose node with minimal lower bound
- 'dfs': depth-first search
- 'bfs': breadth-first search
depth [optional, default no limit]: maximal depth of the search tree
mainly useful for 'bfs'/'dfs'
max_size [optional, default no limit]: maximal size (number of nodes) of the search tree
mainly useful for 'min'
abstol [optional, default: aux.EPSILON]: absolute tolerance;
branching stops when the duality gap is below this value
reltol [optional, default: aux.EPSILON]: relative tolerance;
branching stops when the relative duality gap is below this value
call_sos [optional, default: True]: flag, whether to call SOS on the root
Methods inherited from polynomial_opt.Polynomial:
- clear(self)
Delete all problem data associated with the polynomial, to save memory.
- detect_infinity(self)
Quick check whether we can verify that the polynomial is unbounded.
This method iterates over all faces with degenerate points and checks whether the polynomial restricted to that face is negative.
Call:
res = p.detect_infinity()
Output:
res: 'None' if no negative face was found.
Otherwise it is a pair consisting of:
x_min - argument where a negative value is attained
indices - list of the vertices that form the negative face
- gap(self)
- Return the gap between lower bound and found minimum.
- get_decomposition(self, symbolic=False)
- Return a decomposition into non-negative polynomials, according to currently stored solution.
Note: If, in SONC, there are points that are not covered by the constant term,
the solution in exact arithmetic may still be slightly negative.
Call:
decomposition = p.get_decomposition([symbolic])
Input:
symbolic [optional, default False]: boolean, whether to create a solution in exact arithmetic.
Currently only implemented for SONC.
Output:
decomposition: list of polynomials
- get_solution(self, index)
- Return the solution given by the index.
- get_solutions(self)
- Return a list of (solver, time, optimum) for all solutions found.
- gloptipoly_opt(self, solver='MOSEK')
Optimise the polynomial given by (A,b) via SOS using gloptipoly in Matlab.
Let p be the polynomial given by (A,b). We want to find min{ p(x) : x in R^n} by asking for the minimal gamma such that p + gamma is a sum of squares.
This is the case iff there is some psd-matrix C such that p = Z^T * C * Z, where Z is the vector of all monomials.
Call:
data = p.gloptipoly_opt()
Output:
data: dictionary containing information about the solution
- opt: optimal value
- C: psd-matrix such that p = Z^T * C * Z, where Z is the vector of all monomials
- time: time to compute the solution
- status: 1 = Solved, 0 = otherwise
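The certificate returned in data['C'] can be checked numerically. A minimal standalone illustration (independent of the library) for p(x) = x**2 + 2*x + 1 with monomial vector Z = (1, x) and the rank-one psd matrix C below, so that p = Z^T * C * Z:

```python
# SOS certificate check: p(x) = x**2 + 2*x + 1 = (1 + x)**2 = Z^T * C * Z
C = [[1.0, 1.0],
     [1.0, 1.0]]          # psd, rank one

def p(x):
    return x**2 + 2*x + 1

def quadratic_form(C, z):
    # computes z^T * C * z
    return sum(C[i][j] * z[i] * z[j]
               for i in range(len(z)) for j in range(len(z)))

for x in [-2.0, -1.0, 0.0, 0.5, 3.0]:
    Z = [1.0, x]
    assert abs(p(x) - quadratic_form(C, Z)) < 1e-12
print("certificate verified")
```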
- is_sum_of_monomial_squares(self, eps=0)
- Check whether the polynomial is a sum of monomial squares.
Call:
res = p.is_sum_of_monomial_squares([eps = 0])
Input:
eps [optional, default 0]: accuracy, up to which value a coefficient is treated as zero.
Output:
res: bool, whether the polynomial is a sum of monomial squares (up to given accuracy)
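The criterion behind this check is simple: in the (A, b) convention, every term with a non-negligible coefficient must have a nonnegative coefficient and only even exponents. A standalone sketch (not the library's implementation):

```python
# Sketch of the sum-of-monomial-squares test for a polynomial in (A, b) form.

def is_sum_of_monomial_squares(A, b, eps=0.0):
    for j, coeff in enumerate(b):
        if abs(coeff) <= eps:
            continue                      # treat tiny coefficients as zero
        if coeff < 0:
            return False                  # a negative term cannot be a monomial square
        if any(A[i][j] % 2 != 0 for i in range(len(A))):
            return False                  # odd exponent: not a monomial square
    return True

# x0**2 + 4*x1**4 is a sum of monomial squares; x0**2 - x1**2 is not.
print(is_sum_of_monomial_squares([[2, 0], [0, 4]], [1, 4]))   # True
print(is_sum_of_monomial_squares([[2, 0], [0, 2]], [1, -1]))  # False
```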
- local_min(self, method='sonc', **kwargs)
- Compute a local minimum, according to the given method.
Call:
fmin, xmin = p.local_min([method,**kwargs])
Input:
method: which method to use
- random: gradient method with random starting points
see p._local_min_random()
- sonc: single call of gradient method, starting from the barycentre of SONC-minima
see p._local_min_sonc()
- differential_evolution: scipy
see p._local_min_differential_evolution()
**kwargs: dictionary of keyword-arguments, handed to the minimisation method
Output:
fmin: minimal value found
xmin: argument, where fmin is attained
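As a rough illustration of the 'random' strategy (a generic sketch, not the library's actual _local_min_random()), plain gradient descent from several random starting points, keeping the best value found:

```python
import random

def local_min_random(f, grad, dim, starts=20, steps=500, lr=0.01, seed=0):
    """Gradient descent from random starts; returns best (fmin, xmin)."""
    rng = random.Random(seed)
    fmin, xmin = float("inf"), None
    for _ in range(starts):
        x = [rng.uniform(-3, 3) for _ in range(dim)]
        for _ in range(steps):
            g = grad(x)
            x = [xi - lr * gi for xi, gi in zip(x, g)]
        if f(x) < fmin:
            fmin, xmin = f(x), x
    return fmin, xmin

# f(x, y) = (x - 1)**2 + (y + 2)**2 has its minimum 0 at (1, -2).
f = lambda x: (x[0] - 1)**2 + (x[1] + 2)**2
grad = lambda x: [2 * (x[0] - 1), 2 * (x[1] + 2)]
fmin, xmin = local_min_random(f, grad, 2)
print("best value:", round(fmin, 6))
```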
- opt(self, T=None, language='python', solver=None)
- Optimise the polynomial given by (A,b) via SONC using cvx in Matlab.
WARNING: function is outdated
Let p be the polynomial given by (A,b). We want to find min{ p(x) : x in R^n} by asking for the minimal gamma such that p + gamma is a sum of non-negative circuit polynomials.
Call:
data = p.opt([T, solver, check])
Input:
solver [optional, default aux.SDP_SOLVER/aux.GP_SOLVER]: solver to solve the problem, currently possible:
Matlab: 'sedumi', 'sdpt3', 'Mosek'
Python: 'ECOS', 'scs', 'cvxopt'
T [optional, default None]: a covering of the interior points by simplices
if none is given, a cover is computed
Output:
data: dictionary containing information about the solution
- opt: optimal value
- C: matrix, coefficients for the decomposition
- time: time to compute the solution
- verify: 1 = Solved, -1 = error, 0 = otherwise/unchecked
- status: status message of the solver
- print_all(self, **kwargs)
- print_solutions(self, form='grid', only_valid=False, params=False)
- Print a table of all stored solutions.
You can obtain the solution with a given index by p.get_solution(<index>).
Call:
p.print_solutions([only_valid, form])
Input:
only_valid [boolean, default False]: flag, whether to print only verified solutions
i.e. those with <solution>['verify'] == 1
form [string, default 'grid'] - table format for tabulate
params [boolean, default False]: flag, whether to print the parameters
- relax(self)
Return a lower estimate for the polynomial.
All potentially negative terms are made negative at once.
This function should be used when checking the decomposition obtained by get_decomposition(), since that function works on the relaxation.
- run_all(self, keep_alive=False, call_sos=True, clear=True)
- Run all optimisation methods which are currently implemented.
The results are stored in self.old_solutions.
SOS is called only if there are at most 400 monomials.
Current list of methods:
- sostools
- gloptipoly
- sos-cvx in Matlab, using sedumi
- sos-cvx in Matlab, using sdpt3
- sos-cvx in Python using cvxopt (if matrix-size <= 120)
- sonc-cvx in Matlab, using sedumi
- sonc-cvx in Matlab, using sdpt3
- sonc-cvx in Python using ECOS
If the Matlab engine was not found, only Python is run.
- run_sonc(self, solver='ECOS')
Run SONC with two different cover strategies:
* cover that maximises use of the constant term
* cover that uses all monomial squares
If both strategies yield the same cover, SONC is run only once.
- run_sos(self)
- Run all available SOS-methods to compute a lower bound.
If Matlab is available (if matrix-size <= 400):
* Sostools: with and without 'sparse' flag
* Gloptipoly
* Yalmip: using 'solvesos' and 'sparsepop'
* own implementation (only if <= 8000 constraints): using 'Mosek', 'sedumi' and 'sdpt3'
Python (if matrix-size <= 120):
* cvxopt
* MOSEK
- sage_opt_python(self, solver='ECOS', **solverargs)
- Optimise the polynomial given by (A,b) via SAGE using cvxpy.
Let p be the polynomial given by (A,b), where A is an (n x t)-matrix and b in R^t.
We want to find min{ p(x) : x in R^n} by asking for the minimal gamma such that p + gamma is a sum of arithmetic-geometric-mean exponentials (SAGE).
Call:
data = p.sage_opt_python([solver], **solverargs)
Input:
solver [optional, default aux.GP_SOLVER]: solver to solve the problem, currently possible: 'ECOS', 'cvxopt', 'scs', 'Mosek'
solverargs: dictionary of keywords, handed to the solver
Output:
data: dictionary containing information about the solution
- opt: optimal value
- C: (t x t)-matrix, coefficients for the decomposition,
- lambda: (t x t)-matrix, barycentric coordinates
- time: time to compute the solution
- verify: 1 = Solved, -1 = error, 0 = otherwise/unchecked
- status: status message of the solver
- set_cover(self, T, split=True)
- Set a new cover and store the previous one.
Call:
p.set_cover(T)
Input:
T: list of list of integers, where all entries are column indices of p.A
For each l in T we need that p.A[:,l] describes a simplex with some interior points.
If there already was a cover, it is appended to p.old_covers.
- sonc_opt_python(self, solver='ECOS', **solverargs)
- Optimise the polynomial given by (A,b) via SONC using cvxpy.
Let p be the polynomial given by (A,b). We want to find min{ p(x) : x in R^n} by asking for the minimal gamma such that p + gamma is a sum of non-negative circuit polynomials.
Call:
data = p.sonc_opt_python([solver], **solverargs)
Input:
solver [optional, default aux.GP_SOLVER]: solver to solve the problem, currently possible: 'ECOS', 'cvxopt', 'scs', 'Mosek'
solverargs: dictionary of keywords, handed to the solver
Output:
data: dictionary containing information about the solution
- opt: optimal value
- C: (m x (n-m))-matrix, coefficients for the decomposition
- time: time to compute the solution
- verify: 1 = Solved, -1 = error, 0 = otherwise/unchecked
- status: status message of the solver
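The criterion behind SONC can be illustrated on a single circuit polynomial: with outer monomial squares c_i*x^{alpha_i}, inner term d*x^{beta}, and barycentric coordinates lam (beta = sum_i lam_i*alpha_i), nonnegativity holds whenever |d| does not exceed the circuit number prod_i (c_i/lam_i)**lam_i. A standalone sketch, not library code:

```python
# Circuit number of a circuit polynomial: theta = prod_i (c_i / lam_i)**lam_i

def circuit_number(c, lam):
    theta = 1.0
    for ci, li in zip(c, lam):
        theta *= (ci / li) ** li
    return theta

# f(x, y) = x**2 + y**2 - 2*x*y: outer terms x**2, y**2 (c = (1, 1)),
# inner exponent (1, 1) = 0.5*(2, 0) + 0.5*(0, 2), so lam = (0.5, 0.5).
theta = circuit_number([1.0, 1.0], [0.5, 0.5])
assert abs(theta - 2.0) < 1e-9      # |d| = 2 <= theta: f is a nonnegative circuit
# sanity check on an integer grid: f = (x - y)**2 >= 0 everywhere
assert all(x*x + y*y - 2*x*y >= 0 for x in range(-3, 4) for y in range(-3, 4))
print("circuit criterion holds")
```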
- sonc_realloc(self, max_iters=10)
Print several solutions for SONC, with successively improved distribution of negative coefficients.
Call:
p.sonc_realloc(max_iters)
Input:
max_iters [optional, default 10]: number of iterations
- sos_opt_matlab(self, solver='MOSEK')
- Optimise the polynomial given by (A,b) via SOS using cvx in Matlab.
Let p be the polynomial given by (A,b). We want to find min{ p(x) : x in R^n} by asking for the minimal gamma such that p + gamma is a sum of squares.
This is the case iff there is some psd-matrix C such that p = Z^T * C * Z, where Z is the vector of all monomials.
Call:
data = p.sos_opt_matlab([solver])
Input:
solver [optional, default aux.SDP_SOLVER]: solver to solve the problem, currently possible: 'sedumi', 'sdpt3', 'Mosek'
Output:
data: dictionary containing information about the solution
- opt: optimal value
- C: psd-matrix such that p = Z^T * C * Z, where Z is the vector of all monomials
- time: time to compute the solution
- verify: 1 = Solved, -1 = error, 0 = otherwise/unchecked
- status: status message of the solver
- sos_opt_python(self, solver='MOSEK', sparse=True, **solverargs)
- Optimise the polynomial given by (A,b) via SOS using cvx.
Let p be the polynomial given by (A,b). We want to find min{ p(x) : x in R^n} by asking for the minimal gamma such that p + gamma is a sum of squares.
This is the case iff there is some psd-matrix C such that p = Z^T * C * Z, where Z is the vector of all monomials.
Note: scs randomly runs VERY long on trivial instances. Usage is possible, but discouraged.
Call:
data = p.sos_opt_python([solver], **solverargs)
Input:
solver [optional, default 'CVXOPT']: solver to solve the problem, currently possible: 'CVXOPT', 'Mosek', 'SCS'
solverargs: dictionary of keywords, handed to the solver
Output:
data: dictionary containing information about the solution
- opt: optimal value
- C: psd-matrix such that p = Z^T * C * Z, where Z is the vector of all monomials
- time: time to compute the solution
- verify: 1 = Solved, -1 = error, 0 = otherwise/unchecked
- status: status message of the solver
- sostools_opt(self, solver='MOSEK', sparse=False)
- Optimise the polynomial given by (A,b) via SOS using SOSTOOLS in Matlab.
Let p be the polynomial given by (A,b). We want to find min{ p(x) : x in R^n} by asking for the minimal gamma such that p + gamma is a sum of squares.
This is the case iff there is some psd-matrix C such that p = Z^T * C * Z, where Z is the vector of all monomials.
Call:
data = p.sostools_opt()
Output:
data: dictionary containing information about the solution
- opt: optimal value
- C: psd-matrix such that p = Z^T * C * Z, where Z is the vector of all monomials
- time: time to compute the solution
- status: 1 = Solved, 0 = otherwise
- trivial_check(self)
- Check whether p is a sum of monomial squares.
- trivial_solution(self)
- Compute a trivial solution to the SONC-problem.
Let p be the polynomial given by (A,b). We want to quickly find a gamma such that p + gamma is SONC.
Note: This function only works if the convex hull of the Newton polytope forms a simplex.
Call:
data = p.trivial_solution()
Output:
data: dictionary containing information about the solution
- opt: feasible value
- C: (m x (n-m))-matrix, coefficients for the decomposition
- time: time to compute the solution
- verify: 1 = Solved, -1 = error, 0 = otherwise/unchecked
- status: status message of the solver
- yalmip_opt(self, method='solvesos', solver='MOSEK')
- Optimise the polynomial given by (A,b) via SOS using YALMIP in Matlab.
Let p be the polynomial given by (A,b). We want to find min{ p(x) : x in R^n} by asking for the minimal gamma such that p + gamma is a sum of squares.
This is the case iff there is some psd-matrix C such that p = Z^T * C * Z, where Z is the vector of all monomials.
Call:
data = p.yalmip_opt()
Input:
method [optional, default 'solvesos']: which method in Yalmip to use
- solvesos
- sparsepop
Output:
data: dictionary containing information about the solution
- opt: optimal value
- C: psd-matrix such that p = Z^T * C * Z, where Z is the vector of all monomials
- time: time to compute the solution
- status: 1 = Solved, 0 = otherwise
- z3_nonnegative(self)
- Check nonnegativity via Z3.
Call:
nonneg = p.z3_nonnegative()
Output:
nonneg [bool] - whether the polynomial is nonnegative
Methods inherited from polynomial_base.Polynomial:
- __add__(self, other)
- Return the sum of this polynomial with another one.
- __call__(self, x, dtype='float')
- Evaluate the polynomial at point x.
- __eq__(self, other)
- Check equality of polynomials.
- __neg__(self)
- Return the negation of this polynomial.
- __sizeof__(self)
- Return bit-size of the instance.
- __str__(self)
- Return the polynomial as string.
- __sub__(self, other)
- Return the difference between this polynomial and another one.
- clean(self)
- Bring polynomial into clean state.
- copy(self)
- Return a copy of itself.
- derive(self, index)
- Compute the derivative with respect to the given index.
Call:
res = p.derive(index)
Input:
index [integer] - index of variable, by which we derive p, starting with zero
Output:
res - Polynomial, derivative of p by x_index
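In the (A, b) convention, differentiation has a direct sketch (hypothetical helper, not library code): the term b[j]*x^{A[:,j]} differentiated by x_index gets coefficient b[j]*A[index][j] and its index-th exponent lowered by one; terms with exponent zero vanish.

```python
# Differentiation of a polynomial in (A, b) form with respect to variable `index`.

def derive(A, b, index):
    A_new, b_new = [[] for _ in A], []
    for j, coeff in enumerate(b):
        e = A[index][j]
        if e == 0:
            continue                       # constant in x_index: derivative is 0
        b_new.append(coeff * e)
        for i in range(len(A)):
            A_new[i].append(A[i][j] - 1 if i == index else A[i][j])
    return A_new, b_new

# p(x0, x1) = 3*x0**2*x1 + 5*x1 ; dp/dx0 = 6*x0*x1
A = [[2, 0],
     [1, 1]]
b = [3, 5]
print(derive(A, b, 0))  # ([[1], [1]], [6])
```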
- pip(self)
- Return the polynomial in PIP-format.
- prime(self, variables=None)
- Compute full derivative of the polynomial.
Call:
pprime = p.prime([variables])
Input:
variables [optional, default: all occurring] - number of variables, by which we derive
Output:
pprime - Polynomial, derivative of p
- scaleround(self, factor)
- Scale polynomial and round coefficients to integer.
Call:
p.scaleround(factor)
Input:
factor [number] - scale all coefficients by 'factor', then round to integer
Note: This function changes the coefficients in place and sets the 'dirty' flag to 'True'.
- tex(self)
- Return the polynomial as string for LaTeX.
- to_symbolic(self)
- Return the polynomial as symbolic expression in sympy.
Data descriptors inherited from polynomial_base.Polynomial:
- __weakref__
- list of weak references to the object (if defined)
Data and other attributes inherited from polynomial_base.Polynomial:
- __hash__ = None