How do I set the tolerance in scipy.optimize.minimize()?

I am using scipy.optimize.minimize to solve for an efficient portfolio.

With the default settings I frequently run into "BaseException: Positive directional derivative for linesearch" errors when using certain inputs. I noticed that if I set the tolerance high enough, the problem occurs less often, but it doesn't go away. Any advice?

import numpy as np
import pandas as pd
import scipy.optimize

def fx(TOLERANCE):
    #TOLERANCE = 1.5

    def solve_weights(R, C, rf, b_):
        def port_mean_var(W, R, C):
            return sum(R * W), np.dot(np.dot(W, C), W)    # portfolio mean and variance
        def fitness(W, R, C, rf):
            mean, var = port_mean_var(W, R, C)    # calculate mean/variance of the portfolio
            util = (mean - rf) / np.sqrt(var)        # utility = Sharpe ratio
            return 1/util                        # maximize the utility, minimize its inverse value
        n = len(R)
        W = np.ones([n])/n                        # start optimization with equal weights
        #b_ = [(0.,1.) for i in range(n)]    # weights for boundaries between 0%..100%. No leverage, no shorting
        c_ = ({'type':'eq', 'fun': lambda W: sum(W)-1. })    # Sum of weights must be 100%
        optimized = scipy.optimize.minimize(fitness, W, (R, C, rf), 
                                            method='SLSQP', constraints=c_, 
                                            bounds=b_, tol=TOLERANCE)    
        if not optimized.success: 
            raise BaseException(optimized.message)
        return optimized.x

    def mean_var_opt2(ret_df, upper_bounds=None):
        R = (ret_df.mean(0)*252).values
        C = (ret_df.cov()*252).values
        rf = 0.0
        if upper_bounds is None:
            upper_bounds = pd.Series(1.0,index=ret_df.columns)
        b_ = [(0.0,float(num)) for num in upper_bounds]
        wgts = solve_weights(R, C, rf, b_)
        return pd.Series(wgts, index=ret_df.columns)

    rets = []
    try:
        for i in range(10000):
            # NOTE: the loop body was lost in the post; presumably it generated
            # random return data and solved for weights on each pass, e.g.:
            ret_df = pd.DataFrame(np.random.randn(100, 5) / 100.0)
            rets.append(mean_var_opt2(ret_df))
    except BaseException as e:
        print(e)
    finally:
        print("Tolerance: %s, iter: %s" % (TOLERANCE, i))

for k in [0.001, 0.01, 0.025, 0.05, 0.1, 0.5, 1.0, 5.0, 50.0, 500.0]:
    fx(k)

Positive directional derivative for linesearch
Tolerance: 0.001, iter: 0
Positive directional derivative for linesearch
Tolerance: 0.01, iter: 30
Positive directional derivative for linesearch
Tolerance: 0.025, iter: 77
Inequality constraints incompatible
Tolerance: 0.05, iter: 212
Positive directional derivative for linesearch
Tolerance: 0.1, iter: 444
Positive directional derivative for linesearch
Tolerance: 0.5, iter: 444
Positive directional derivative for linesearch
Tolerance: 1.0, iter: 1026
Positive directional derivative for linesearch
Tolerance: 5.0, iter: 1026
Positive directional derivative for linesearch
Tolerance: 50.0, iter: 1026
Positive directional derivative for linesearch
Tolerance: 500.0, iter: 1026

Tuning the tolerance until it doesn't crash is a very weak solution: a small change to your data or objective function, and you are bound to crash again.

SLSQP requires the cost function to be twice differentiable, and it may fail hard if that condition is not met. No amount of tolerance increase will help unless the tolerance is so large that you always get back the initial guess.

You may want to try COBYLA instead -- note, however, that this would require you to transform your bounds into constraints, as COBYLA does not accept a separate bounds argument.
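A minimal sketch of that bounds-to-constraints transformation (the helper name and the toy objective are my own illustration, not from the original post):

```python
import scipy.optimize

def bounds_to_constraints(bounds):
    """Turn box bounds lo_i <= W[i] <= hi_i into COBYLA-style
    inequality constraints, each of which must be >= 0 when satisfied."""
    cons = []
    for i, (lo, hi) in enumerate(bounds):
        # bind i, lo, hi as default args so each lambda keeps its own values
        cons.append({'type': 'ineq', 'fun': lambda W, i=i, lo=lo: W[i] - lo})
        cons.append({'type': 'ineq', 'fun': lambda W, i=i, hi=hi: hi - W[i]})
    return cons

# toy objective: squared distance from the point (0.2, 0.8)
f = lambda W: (W[0] - 0.2) ** 2 + (W[1] - 0.8) ** 2
cons = bounds_to_constraints([(0.0, 0.5), (0.0, 0.5)])
res = scipy.optimize.minimize(f, [0.25, 0.25], method='COBYLA', constraints=cons)
print(res.x)  # second coordinate is held at the 0.5 upper bound
```

Note that COBYLA only handles inequality constraints, so an equality like sum(W) == 1 would likewise need to be rewritten as a pair of inequalities.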


I am providing an answer at this late date because it seems that the source code for this example is still up on the web, here.

In order to maximize the Sharpe ratio, as in the function fitness(), it is not necessary to minimize its reciprocal; you can simply minimize its negative.

Thus if I replace return 1/util with return -util in that function and run the questioner's code (I did that), I find that there are no errors in any of the 10,000 iterations, for any TOLERANCE.
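A self-contained sketch of that fix; the return vector R and covariance matrix C below are made-up illustrative data, not from the original post:

```python
import numpy as np
import scipy.optimize

# hypothetical annualized mean returns and covariance for 3 assets
R = np.array([0.08, 0.12, 0.10])
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.06]])
rf = 0.0
n = len(R)

def fitness(W, R, C, rf):
    mean = np.sum(R * W)                 # portfolio mean
    var = W @ C @ W                      # portfolio variance
    util = (mean - rf) / np.sqrt(var)    # utility = Sharpe ratio
    return -util                         # minimize the negative, not 1/util

W0 = np.ones(n) / n                      # start from equal weights
res = scipy.optimize.minimize(
    fitness, W0, args=(R, C, rf), method='SLSQP',
    bounds=[(0.0, 1.0)] * n,
    constraints=({'type': 'eq', 'fun': lambda W: np.sum(W) - 1.0},),
)
print(res.success, res.x)
```

Minimizing -util keeps the objective smooth wherever the Sharpe ratio is defined, whereas 1/util blows up as the ratio passes through zero, which is exactly the kind of behavior that trips up SLSQP's line search.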





  • The question title is misleading -- your example demonstrates that you already know how to set the tolerance. But that's not really what you're asking about anyway