Interface GlopParametersOrBuilder

  • All Superinterfaces:
    com.google.protobuf.MessageLiteOrBuilder, com.google.protobuf.MessageOrBuilder
  • All Known Implementing Classes:
    GlopParameters, GlopParameters.Builder

    public interface GlopParametersOrBuilder
    extends com.google.protobuf.MessageOrBuilder
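    A Builder and the built message both implement this interface, so the same read-only accessors work on either. A minimal sketch, assuming the com.google.ortools.glop Java package and the standard protobuf-generated newBuilder()/setter methods (the setter name setUseDualSimplex follows the usual convention for the use_dual_simplex field):

```java
import com.google.ortools.glop.GlopParameters;
import com.google.ortools.glop.GlopParametersOrBuilder;

public class GlopParametersView {
  public static void main(String[] args) {
    // Both GlopParameters and GlopParameters.Builder implement
    // GlopParametersOrBuilder, so the same hasX()/getX() accessors read either.
    GlopParameters.Builder builder =
        GlopParameters.newBuilder().setUseDualSimplex(true);

    GlopParametersOrBuilder view = builder;
    // Proto2 semantics: hasX() reports explicit presence, while getX()
    // falls back to the declared default when the field is unset.
    System.out.println(view.hasUseDualSimplex()); // true (explicitly set)
    System.out.println(view.hasUseScaling());     // false (never set)
    System.out.println(view.getUseScaling());     // true (declared default)
  }
}
```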
    • Method Summary

      Modifier and Type Method Description
      boolean getAllowSimplexAlgorithmChange()
      During an incremental solve, let the solver decide whether it uses the primal or dual simplex algorithm depending on the current solution and on the new problem.
      int getBasisRefactorizationPeriod()
      Number of iterations between two basis refactorizations.
      boolean getChangeStatusToImprecise()
      If true, the internal API will change the return status to imprecise if the solution does not respect the internal tolerances.
      GlopParameters.CostScalingAlgorithm getCostScaling()
      optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING];
      double getCrossoverBoundSnappingDistance()
      If the starting basis contains a FREE variable with bounds, we will move any such variable to its closest bound if the distance is smaller than this parameter.
      double getDegenerateMinistepFactor()
      During a degenerate iteration, the more conservative approach is to do a step of length zero (while shifting the bound of the leaving variable).
      int getDevexWeightsResetPeriod()
      Devex weights will be reset to 1.0 after that number of updates.
      double getDropTolerance()
      In order to increase the sparsity of the manipulated vectors, floating point values with a magnitude smaller than this parameter are set to zero (only in some places).
      double getDualFeasibilityTolerance()
      Variables whose reduced costs have an absolute value smaller than this tolerance are not considered as entering candidates.
      double getDualizerThreshold()
      When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number of constraints of the problem is more than this threshold times the number of variables.
      boolean getDualPricePrioritizeNorm()
      On some problems, like stp3d or pds-100, this makes a huge difference in the speed and number of iterations of the dual simplex.
      double getDualSmallPivotThreshold()
      Like small_pivot_threshold but for the dual simplex.
      boolean getDynamicallyAdjustRefactorizationPeriod()
      If this is true, then basis_refactorization_period becomes a lower bound on the number of iterations between two refactorizations (provided there are no numerical accuracy issues).
      boolean getExploitSingletonColumnInInitialBasis()
      Whether or not we exploit the singleton columns already present in the problem when we create the initial basis.
      GlopParameters.PricingRule getFeasibilityRule()
      PricingRule to use during the feasibility phase.
      double getHarrisToleranceRatio()
      This impacts the ratio test and indicates by how much we allow a basic variable value that we move to go out of bounds.
      GlopParameters.InitialBasisHeuristic getInitialBasis()
      What heuristic is used to try to replace the fixed slack columns in the initial basis of the primal simplex.
      double getInitialConditionNumberThreshold()
      If our upper bound on the condition number of the initial basis (from our heuristic or a warm start) is above this threshold, we revert to an all-slack basis.
      boolean getInitializeDevexWithColumnNorms()
      Whether we initialize devex weights to 1.0 or to the norms of the matrix columns.
      boolean getLogSearchProgress()
      If true, logs the progress of a solve to LOG(INFO).
      boolean getLogToStdout()
      If true, logs will be displayed to stdout instead of using Google log info.
      double getLuFactorizationPivotThreshold()
      Threshold for LU-factorization: for stability reasons, the magnitude of the chosen pivot at a given step is guaranteed to be greater than this threshold times the maximum magnitude of all the possible pivot choices in the same column.
      double getMarkowitzSingularityThreshold()
      If a pivot magnitude is smaller than this during the Markowitz LU factorization, then the matrix is assumed to be singular.
      int getMarkowitzZlatevParameter()
      How many columns do we look at in the Markowitz pivoting rule to find a good pivot.
      double getMaxDeterministicTime()
      Maximum deterministic time allowed to solve a problem.
      long getMaxNumberOfIterations()
      Maximum number of simplex iterations to solve a problem.
      double getMaxNumberOfReoptimizations()
      When the solution of phase II is imprecise, we re-run the phase II with the opposite algorithm from that imprecise solution (i.e., if primal or dual simplex was used, we use dual or primal simplex, respectively).
      double getMaxTimeInSeconds()
      Maximum time allowed in seconds to solve a problem.
      double getMaxValidMagnitude()
      Any finite values in the input LP must be below this threshold, otherwise the model will be reported invalid.
      double getMinimumAcceptablePivot()
      We never follow a basis change with a pivot under this threshold.
      int getNumOmpThreads()
      Number of threads in the OMP parallel sections.
      double getObjectiveLowerLimit()
      The solver will stop as soon as it has proven that the objective is smaller than objective_lower_limit or greater than objective_upper_limit.
      double getObjectiveUpperLimit()
      optional double objective_upper_limit = 41 [default = inf];
      GlopParameters.PricingRule getOptimizationRule()
      PricingRule to use during the optimization phase.
      boolean getPerturbCostsInDualSimplex()
      When this is true, then the costs are randomly perturbed before the dual simplex is even started.
      double getPreprocessorZeroTolerance()
      A floating point tolerance used by the preprocessors.
      double getPrimalFeasibilityTolerance()
      This tolerance indicates by how much we allow the variable values to go out of bounds and still consider the current solution primal-feasible.
      boolean getProvideStrongOptimalGuarantee()
      If true, then when the solver returns a solution with an OPTIMAL status, we can guarantee that: - The primal variables are within their bounds.
      boolean getPushToVertex()
      If the optimization phase finishes with super-basic variables (i.e., variables that either 1) have bounds but are FREE in the basis, or 2) have no bounds and are FREE in the basis at a nonzero value), then run a "push" phase to push these variables to their bounds, obtaining a vertex solution.
      int getRandomSeed()
      At the beginning of each solve, the random number generator used in some part of the solver is reinitialized to this seed.
      double getRatioTestZeroThreshold()
      During the primal simplex (resp. dual simplex), the coefficients of the direction (resp. update row) with a magnitude lower than this threshold are not considered during the ratio test.
      double getRecomputeEdgesNormThreshold()
      Note that the threshold is a relative error on the actual norm (not the squared one) and that edge norms are always greater than 1.
      double getRecomputeReducedCostsThreshold()
      We estimate the accuracy of the iteratively computed reduced costs.
      double getRefactorizationThreshold()
      We estimate the factorization accuracy of B during each pivot by using the fact that we can compute the pivot coefficient in two ways: from direction[leaving_row] and from update_row[entering_column].
      double getRelativeCostPerturbation()
      The magnitude of the cost perturbation is given by RandomIn(1.0, 2.0) * ( relative_cost_perturbation * cost + relative_max_cost_perturbation * max_cost);
      double getRelativeMaxCostPerturbation()
      optional double relative_max_cost_perturbation = 55 [default = 1e-07];
      GlopParameters.ScalingAlgorithm getScalingMethod()
      optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION];
      double getSmallPivotThreshold()
      When we choose the leaving variable, we want to avoid small pivots because they are less precise and may cause numerical instabilities.
      double getSolutionFeasibilityTolerance()
      When the problem status is OPTIMAL, we check the optimality using this relative tolerance and change the status to IMPRECISE if an issue is detected.
      GlopParameters.SolverBehavior getSolveDualProblem()
      Whether or not we solve the dual of the given problem.
      boolean getUseDedicatedDualFeasibilityAlgorithm()
      We have two possible dual phase I algorithms.
      boolean getUseDualSimplex()
      Whether or not we use the dual simplex algorithm instead of the primal.
      boolean getUseImpliedFreePreprocessor()
      If presolve runs, include the pass that detects implied free variables.
      boolean getUseMiddleProductFormUpdate()
      Whether or not to use the middle product form update rather than the standard eta LU update.
      boolean getUsePreprocessing()
      Whether or not we use advanced preprocessing techniques.
      boolean getUseScaling()
      Whether or not we scale the matrix A so that the maximum coefficient magnitude in each row and each column is 1.0.
      boolean getUseTransposedMatrix()
      Whether or not we keep a transposed version of the matrix A to speed up the pricing, at the cost of extra memory and the initial transposition computation.
      boolean hasAllowSimplexAlgorithmChange()
      During an incremental solve, let the solver decide whether it uses the primal or dual simplex algorithm depending on the current solution and on the new problem.
      boolean hasBasisRefactorizationPeriod()
      Number of iterations between two basis refactorizations.
      boolean hasChangeStatusToImprecise()
      If true, the internal API will change the return status to imprecise if the solution does not respect the internal tolerances.
      boolean hasCostScaling()
      optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING];
      boolean hasCrossoverBoundSnappingDistance()
      If the starting basis contains a FREE variable with bounds, we will move any such variable to its closest bound if the distance is smaller than this parameter.
      boolean hasDegenerateMinistepFactor()
      During a degenerate iteration, the more conservative approach is to do a step of length zero (while shifting the bound of the leaving variable).
      boolean hasDevexWeightsResetPeriod()
      Devex weights will be reset to 1.0 after that number of updates.
      boolean hasDropTolerance()
      In order to increase the sparsity of the manipulated vectors, floating point values with a magnitude smaller than this parameter are set to zero (only in some places).
      boolean hasDualFeasibilityTolerance()
      Variables whose reduced costs have an absolute value smaller than this tolerance are not considered as entering candidates.
      boolean hasDualizerThreshold()
      When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number of constraints of the problem is more than this threshold times the number of variables.
      boolean hasDualPricePrioritizeNorm()
      On some problems, like stp3d or pds-100, this makes a huge difference in the speed and number of iterations of the dual simplex.
      boolean hasDualSmallPivotThreshold()
      Like small_pivot_threshold but for the dual simplex.
      boolean hasDynamicallyAdjustRefactorizationPeriod()
      If this is true, then basis_refactorization_period becomes a lower bound on the number of iterations between two refactorizations (provided there are no numerical accuracy issues).
      boolean hasExploitSingletonColumnInInitialBasis()
      Whether or not we exploit the singleton columns already present in the problem when we create the initial basis.
      boolean hasFeasibilityRule()
      PricingRule to use during the feasibility phase.
      boolean hasHarrisToleranceRatio()
      This impacts the ratio test and indicates by how much we allow a basic variable value that we move to go out of bounds.
      boolean hasInitialBasis()
      What heuristic is used to try to replace the fixed slack columns in the initial basis of the primal simplex.
      boolean hasInitialConditionNumberThreshold()
      If our upper bound on the condition number of the initial basis (from our heuristic or a warm start) is above this threshold, we revert to an all-slack basis.
      boolean hasInitializeDevexWithColumnNorms()
      Whether we initialize devex weights to 1.0 or to the norms of the matrix columns.
      boolean hasLogSearchProgress()
      If true, logs the progress of a solve to LOG(INFO).
      boolean hasLogToStdout()
      If true, logs will be displayed to stdout instead of using Google log info.
      boolean hasLuFactorizationPivotThreshold()
      Threshold for LU-factorization: for stability reasons, the magnitude of the chosen pivot at a given step is guaranteed to be greater than this threshold times the maximum magnitude of all the possible pivot choices in the same column.
      boolean hasMarkowitzSingularityThreshold()
      If a pivot magnitude is smaller than this during the Markowitz LU factorization, then the matrix is assumed to be singular.
      boolean hasMarkowitzZlatevParameter()
      How many columns do we look at in the Markowitz pivoting rule to find a good pivot.
      boolean hasMaxDeterministicTime()
      Maximum deterministic time allowed to solve a problem.
      boolean hasMaxNumberOfIterations()
      Maximum number of simplex iterations to solve a problem.
      boolean hasMaxNumberOfReoptimizations()
      When the solution of phase II is imprecise, we re-run the phase II with the opposite algorithm from that imprecise solution (i.e., if primal or dual simplex was used, we use dual or primal simplex, respectively).
      boolean hasMaxTimeInSeconds()
      Maximum time allowed in seconds to solve a problem.
      boolean hasMaxValidMagnitude()
      Any finite values in the input LP must be below this threshold, otherwise the model will be reported invalid.
      boolean hasMinimumAcceptablePivot()
      We never follow a basis change with a pivot under this threshold.
      boolean hasNumOmpThreads()
      Number of threads in the OMP parallel sections.
      boolean hasObjectiveLowerLimit()
      The solver will stop as soon as it has proven that the objective is smaller than objective_lower_limit or greater than objective_upper_limit.
      boolean hasObjectiveUpperLimit()
      optional double objective_upper_limit = 41 [default = inf];
      boolean hasOptimizationRule()
      PricingRule to use during the optimization phase.
      boolean hasPerturbCostsInDualSimplex()
      When this is true, then the costs are randomly perturbed before the dual simplex is even started.
      boolean hasPreprocessorZeroTolerance()
      A floating point tolerance used by the preprocessors.
      boolean hasPrimalFeasibilityTolerance()
      This tolerance indicates by how much we allow the variable values to go out of bounds and still consider the current solution primal-feasible.
      boolean hasProvideStrongOptimalGuarantee()
      If true, then when the solver returns a solution with an OPTIMAL status, we can guarantee that: - The primal variables are within their bounds.
      boolean hasPushToVertex()
      If the optimization phase finishes with super-basic variables (i.e., variables that either 1) have bounds but are FREE in the basis, or 2) have no bounds and are FREE in the basis at a nonzero value), then run a "push" phase to push these variables to their bounds, obtaining a vertex solution.
      boolean hasRandomSeed()
      At the beginning of each solve, the random number generator used in some part of the solver is reinitialized to this seed.
      boolean hasRatioTestZeroThreshold()
      During the primal simplex (resp. dual simplex), the coefficients of the direction (resp. update row) with a magnitude lower than this threshold are not considered during the ratio test.
      boolean hasRecomputeEdgesNormThreshold()
      Note that the threshold is a relative error on the actual norm (not the squared one) and that edge norms are always greater than 1.
      boolean hasRecomputeReducedCostsThreshold()
      We estimate the accuracy of the iteratively computed reduced costs.
      boolean hasRefactorizationThreshold()
      We estimate the factorization accuracy of B during each pivot by using the fact that we can compute the pivot coefficient in two ways: from direction[leaving_row] and from update_row[entering_column].
      boolean hasRelativeCostPerturbation()
      The magnitude of the cost perturbation is given by RandomIn(1.0, 2.0) * ( relative_cost_perturbation * cost + relative_max_cost_perturbation * max_cost);
      boolean hasRelativeMaxCostPerturbation()
      optional double relative_max_cost_perturbation = 55 [default = 1e-07];
      boolean hasScalingMethod()
      optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION];
      boolean hasSmallPivotThreshold()
      When we choose the leaving variable, we want to avoid small pivots because they are less precise and may cause numerical instabilities.
      boolean hasSolutionFeasibilityTolerance()
      When the problem status is OPTIMAL, we check the optimality using this relative tolerance and change the status to IMPRECISE if an issue is detected.
      boolean hasSolveDualProblem()
      Whether or not we solve the dual of the given problem.
      boolean hasUseDedicatedDualFeasibilityAlgorithm()
      We have two possible dual phase I algorithms.
      boolean hasUseDualSimplex()
      Whether or not we use the dual simplex algorithm instead of the primal.
      boolean hasUseImpliedFreePreprocessor()
      If presolve runs, include the pass that detects implied free variables.
      boolean hasUseMiddleProductFormUpdate()
      Whether or not to use the middle product form update rather than the standard eta LU update.
      boolean hasUsePreprocessing()
      Whether or not we use advanced preprocessing techniques.
      boolean hasUseScaling()
      Whether or not we scale the matrix A so that the maximum coefficient magnitude in each row and each column is 1.0.
      boolean hasUseTransposedMatrix()
      Whether or not we keep a transposed version of the matrix A to speed up the pricing, at the cost of extra memory and the initial transposition computation.
      • Methods inherited from interface com.google.protobuf.MessageLiteOrBuilder

        isInitialized
      • Methods inherited from interface com.google.protobuf.MessageOrBuilder

        findInitializationErrors, getAllFields, getDefaultInstanceForType, getDescriptorForType, getField, getInitializationErrorString, getOneofFieldDescriptor, getRepeatedField, getRepeatedFieldCount, getUnknownFields, hasField, hasOneof
    • Method Detail

      • hasScalingMethod

        boolean hasScalingMethod()
        optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION];
        Returns:
        Whether the scalingMethod field is set.
      • getScalingMethod

        GlopParameters.ScalingAlgorithm getScalingMethod()
        optional .operations_research.glop.GlopParameters.ScalingAlgorithm scaling_method = 57 [default = EQUILIBRATION];
        Returns:
        The scalingMethod.
      • hasFeasibilityRule

        boolean hasFeasibilityRule()
         PricingRule to use during the feasibility phase.
         
        optional .operations_research.glop.GlopParameters.PricingRule feasibility_rule = 1 [default = STEEPEST_EDGE];
        Returns:
        Whether the feasibilityRule field is set.
      • getFeasibilityRule

        GlopParameters.PricingRule getFeasibilityRule()
         PricingRule to use during the feasibility phase.
         
        optional .operations_research.glop.GlopParameters.PricingRule feasibility_rule = 1 [default = STEEPEST_EDGE];
        Returns:
        The feasibilityRule.
      • hasOptimizationRule

        boolean hasOptimizationRule()
         PricingRule to use during the optimization phase.
         
        optional .operations_research.glop.GlopParameters.PricingRule optimization_rule = 2 [default = STEEPEST_EDGE];
        Returns:
        Whether the optimizationRule field is set.
      • getOptimizationRule

        GlopParameters.PricingRule getOptimizationRule()
         PricingRule to use during the optimization phase.
         
        optional .operations_research.glop.GlopParameters.PricingRule optimization_rule = 2 [default = STEEPEST_EDGE];
        Returns:
        The optimizationRule.
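        The two pricing-rule fields above are enum-valued. A hypothetical tuning sketch, assuming the com.google.ortools.glop package and the usual protobuf setter convention for feasibility_rule/optimization_rule (PricingRule.DANTZIG is assumed to be one of the enum alternatives to the STEEPEST_EDGE default):

```java
import com.google.ortools.glop.GlopParameters;

public class PricingRuleExample {
  public static void main(String[] args) {
    // Override the default STEEPEST_EDGE pricing for both simplex phases.
    GlopParameters params =
        GlopParameters.newBuilder()
            .setFeasibilityRule(GlopParameters.PricingRule.DANTZIG)
            .setOptimizationRule(GlopParameters.PricingRule.DANTZIG)
            .build();
    // Both presence bits are now set, and the getters return DANTZIG.
    System.out.println(params.getOptimizationRule());
  }
}
```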
      • hasRefactorizationThreshold

        boolean hasRefactorizationThreshold()
         We estimate the factorization accuracy of B during each pivot by using
         the fact that we can compute the pivot coefficient in two ways:
         - From direction[leaving_row].
         - From update_row[entering_column].
         If the two values have a relative difference above this threshold, we
         trigger a refactorization.
         
        optional double refactorization_threshold = 6 [default = 1e-09];
        Returns:
        Whether the refactorizationThreshold field is set.
      • getRefactorizationThreshold

        double getRefactorizationThreshold()
         We estimate the factorization accuracy of B during each pivot by using
         the fact that we can compute the pivot coefficient in two ways:
         - From direction[leaving_row].
         - From update_row[entering_column].
         If the two values have a relative difference above this threshold, we
         trigger a refactorization.
         
        optional double refactorization_threshold = 6 [default = 1e-09];
        Returns:
        The refactorizationThreshold.
      • hasRecomputeReducedCostsThreshold

        boolean hasRecomputeReducedCostsThreshold()
         We estimate the accuracy of the iteratively computed reduced costs. If
         it falls below this threshold, we reinitialize them from scratch. Note
         that such an operation is pretty fast, so we can use a low threshold.
         It is important to have a good accuracy here (better than the
         dual_feasibility_tolerance below) to be sure of the sign of such a cost.
         
        optional double recompute_reduced_costs_threshold = 8 [default = 1e-08];
        Returns:
        Whether the recomputeReducedCostsThreshold field is set.
      • getRecomputeReducedCostsThreshold

        double getRecomputeReducedCostsThreshold()
         We estimate the accuracy of the iteratively computed reduced costs. If
         it falls below this threshold, we reinitialize them from scratch. Note
         that such an operation is pretty fast, so we can use a low threshold.
         It is important to have a good accuracy here (better than the
         dual_feasibility_tolerance below) to be sure of the sign of such a cost.
         
        optional double recompute_reduced_costs_threshold = 8 [default = 1e-08];
        Returns:
        The recomputeReducedCostsThreshold.
      • hasRecomputeEdgesNormThreshold

        boolean hasRecomputeEdgesNormThreshold()
         Note that the threshold is a relative error on the actual norm (not the
         squared one) and that edge norms are always greater than 1. Recomputing
         norms is a really expensive operation and a large threshold is ok since
         this doesn't impact directly the solution but just the entering variable
         choice.
         
        optional double recompute_edges_norm_threshold = 9 [default = 100];
        Returns:
        Whether the recomputeEdgesNormThreshold field is set.
      • getRecomputeEdgesNormThreshold

        double getRecomputeEdgesNormThreshold()
         Note that the threshold is a relative error on the actual norm (not the
         squared one) and that edge norms are always greater than 1. Recomputing
         norms is a really expensive operation and a large threshold is ok since
         this doesn't impact directly the solution but just the entering variable
         choice.
         
        optional double recompute_edges_norm_threshold = 9 [default = 100];
        Returns:
        The recomputeEdgesNormThreshold.
      • hasPrimalFeasibilityTolerance

        boolean hasPrimalFeasibilityTolerance()
         This tolerance indicates by how much we allow the variable values to go out
         of bounds and still consider the current solution primal-feasible. We also
         use the same tolerance for the error A.x - b. Note that the two errors are
         closely related if A is scaled in such a way that the greatest coefficient
         magnitude on each column is 1.0.
        
         This is also simply called feasibility tolerance in other solvers.
         
        optional double primal_feasibility_tolerance = 10 [default = 1e-08];
        Returns:
        Whether the primalFeasibilityTolerance field is set.
      • getPrimalFeasibilityTolerance

        double getPrimalFeasibilityTolerance()
         This tolerance indicates by how much we allow the variable values to go out
         of bounds and still consider the current solution primal-feasible. We also
         use the same tolerance for the error A.x - b. Note that the two errors are
         closely related if A is scaled in such a way that the greatest coefficient
         magnitude on each column is 1.0.
        
         This is also simply called feasibility tolerance in other solvers.
         
        optional double primal_feasibility_tolerance = 10 [default = 1e-08];
        Returns:
        The primalFeasibilityTolerance.
      • hasDualFeasibilityTolerance

        boolean hasDualFeasibilityTolerance()
         Variables whose reduced costs have an absolute value smaller than this
         tolerance are not considered as entering candidates. That is they do not
         take part in deciding whether a solution is dual-feasible or not.
        
         Note that this value can temporarily increase during the execution of the
         algorithm if the estimated precision of the reduced costs is higher than
         this tolerance. Note also that we scale the costs (in the presolve step) so
         that the cost magnitude range contains one.
        
         This is also known as the optimality tolerance in other solvers.
         
        optional double dual_feasibility_tolerance = 11 [default = 1e-08];
        Returns:
        Whether the dualFeasibilityTolerance field is set.
      • getDualFeasibilityTolerance

        double getDualFeasibilityTolerance()
         Variables whose reduced costs have an absolute value smaller than this
         tolerance are not considered as entering candidates. That is they do not
         take part in deciding whether a solution is dual-feasible or not.
        
         Note that this value can temporarily increase during the execution of the
         algorithm if the estimated precision of the reduced costs is higher than
         this tolerance. Note also that we scale the costs (in the presolve step) so
         that the cost magnitude range contains one.
        
         This is also known as the optimality tolerance in other solvers.
         
        optional double dual_feasibility_tolerance = 11 [default = 1e-08];
        Returns:
        The dualFeasibilityTolerance.
      • hasRatioTestZeroThreshold

        boolean hasRatioTestZeroThreshold()
         During the primal simplex (resp. dual simplex), the coefficients of the
         direction (resp. update row) with a magnitude lower than this threshold are
         not considered during the ratio test. This tolerance is related to the
         precision at which a Solve() involving the basis matrix can be performed.
        
         TODO(user): Automatically increase it when we detect that the precision
         of the Solve() is worse than this.
         
        optional double ratio_test_zero_threshold = 12 [default = 1e-09];
        Returns:
        Whether the ratioTestZeroThreshold field is set.
      • getRatioTestZeroThreshold

        double getRatioTestZeroThreshold()
         During the primal simplex (resp. dual simplex), the coefficients of the
         direction (resp. update row) with a magnitude lower than this threshold are
         not considered during the ratio test. This tolerance is related to the
         precision at which a Solve() involving the basis matrix can be performed.
        
         TODO(user): Automatically increase it when we detect that the precision
         of the Solve() is worse than this.
         
        optional double ratio_test_zero_threshold = 12 [default = 1e-09];
        Returns:
        The ratioTestZeroThreshold.
      • hasHarrisToleranceRatio

        boolean hasHarrisToleranceRatio()
         This impacts the ratio test and indicates by how much we allow a basic
         variable value that we move to go out of bounds. The value should be in
         [0.0, 1.0) and should be interpreted as a ratio of the
         primal_feasibility_tolerance. Setting this to 0.0 basically disables the
         Harris ratio test while setting this too close to 1.0 will make it
         difficult to keep the variable values inside their bounds modulo the
         primal_feasibility_tolerance.
        
         Note that the same comment applies to the dual simplex ratio test. There,
         we allow the reduced costs to be of an infeasible sign by as much as this
         ratio times the dual_feasibility_tolerance.
         
        optional double harris_tolerance_ratio = 13 [default = 0.5];
        Returns:
        Whether the harrisToleranceRatio field is set.
      • getHarrisToleranceRatio

        double getHarrisToleranceRatio()
         This impacts the ratio test and indicates by how much we allow a basic
         variable value that we move to go out of bounds. The value should be in
         [0.0, 1.0) and should be interpreted as a ratio of the
         primal_feasibility_tolerance. Setting this to 0.0 basically disables the
         Harris ratio test while setting this too close to 1.0 will make it
         difficult to keep the variable values inside their bounds modulo the
         primal_feasibility_tolerance.
        
         Note that the same comment applies to the dual simplex ratio test. There,
         we allow the reduced costs to be of an infeasible sign by as much as this
         ratio times the dual_feasibility_tolerance.
         
        optional double harris_tolerance_ratio = 13 [default = 0.5];
        Returns:
        The harrisToleranceRatio.
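        The tolerance fields above interact: harris_tolerance_ratio is interpreted as a ratio of primal_feasibility_tolerance (and of dual_feasibility_tolerance for the dual ratio test). A hypothetical sketch that tightens the primal tolerance and disables the Harris ratio test, assuming the usual protobuf-generated setters:

```java
import com.google.ortools.glop.GlopParameters;

public class ToleranceExample {
  public static void main(String[] args) {
    // Setting harris_tolerance_ratio to 0.0 disables the Harris ratio test,
    // as described above; the tightened primal tolerance then bounds how far
    // variable values may leave their bounds in an accepted solution.
    GlopParameters params =
        GlopParameters.newBuilder()
            .setPrimalFeasibilityTolerance(1e-9)
            .setHarrisToleranceRatio(0.0)
            .build();
    System.out.println(params.getHarrisToleranceRatio()); // 0.0
  }
}
```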
      • hasSmallPivotThreshold

        boolean hasSmallPivotThreshold()
         When we choose the leaving variable, we want to avoid small pivots because
         they are less precise and may cause numerical instabilities. For a
         pivot under this threshold times the infinity norm of the direction, we try
         various countermeasures in order to avoid using it.
         
        optional double small_pivot_threshold = 14 [default = 1e-06];
        Returns:
        Whether the smallPivotThreshold field is set.
      • getSmallPivotThreshold

        double getSmallPivotThreshold()
         When we choose the leaving variable, we want to avoid small pivots because
         they are less precise and may cause numerical instabilities. For a
         pivot under this threshold times the infinity norm of the direction, we try
         various countermeasures in order to avoid using it.
         
        optional double small_pivot_threshold = 14 [default = 1e-06];
        Returns:
        The smallPivotThreshold.
      • hasMinimumAcceptablePivot

        boolean hasMinimumAcceptablePivot()
         We never follow a basis change with a pivot under this threshold.
         
        optional double minimum_acceptable_pivot = 15 [default = 1e-06];
        Returns:
        Whether the minimumAcceptablePivot field is set.
      • getMinimumAcceptablePivot

        double getMinimumAcceptablePivot()
         We never follow a basis change with a pivot under this threshold.
         
        optional double minimum_acceptable_pivot = 15 [default = 1e-06];
        Returns:
        The minimumAcceptablePivot.
      • hasDropTolerance

        boolean hasDropTolerance()
         In order to increase the sparsity of the manipulated vectors, floating
         point values with a magnitude smaller than this parameter are set to zero
         (only in some places). This parameter should be positive or zero.
         
        optional double drop_tolerance = 52 [default = 1e-14];
        Returns:
        Whether the dropTolerance field is set.
      • getDropTolerance

        double getDropTolerance()
         In order to increase the sparsity of the manipulated vectors, floating
         point values with a magnitude smaller than this parameter are set to zero
         (only in some places). This parameter should be positive or zero.
         
        optional double drop_tolerance = 52 [default = 1e-14];
        Returns:
        The dropTolerance.
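The sparsification described above amounts to zeroing tiny entries; the following is an illustrative sketch (hypothetical names, not Glop's internal vector types), using the field's default of 1e-14 as the tolerance.

```java
// Hypothetical sketch of drop-tolerance sparsification: entries with a
// magnitude below the tolerance are set to zero, trading a tiny amount of
// accuracy for sparser vectors.
public final class DropTolerance {
  static double[] sparsify(double[] v, double tol) {
    double[] out = new double[v.length];
    for (int i = 0; i < v.length; i++) {
      out[i] = Math.abs(v[i]) < tol ? 0.0 : v[i];
    }
    return out;
  }
}
```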
      • hasUseScaling

        boolean hasUseScaling()
         Whether or not we scale the matrix A so that the maximum coefficient in
         each row and each column is 1.0.
         
        optional bool use_scaling = 16 [default = true];
        Returns:
        Whether the useScaling field is set.
      • getUseScaling

        boolean getUseScaling()
         Whether or not we scale the matrix A so that the maximum coefficient in
         each row and each column is 1.0.
         
        optional bool use_scaling = 16 [default = true];
        Returns:
        The useScaling.
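One round of the kind of equilibration described above can be sketched as follows. This is a simplified, hypothetical illustration: Glop's scaler is more elaborate, and a single row-then-column pass only guarantees that every column maximum is exactly 1.0 while every row maximum is at most 1.0.

```java
// Hypothetical one-pass equilibration sketch: divide each row by its
// maximum magnitude, then each column by its maximum magnitude.
public final class Equilibrate {
  static void scale(double[][] a) {
    int m = a.length, n = a[0].length;
    for (int i = 0; i < m; i++) {
      double rowMax = 0.0;
      for (int j = 0; j < n; j++) rowMax = Math.max(rowMax, Math.abs(a[i][j]));
      if (rowMax > 0) for (int j = 0; j < n; j++) a[i][j] /= rowMax;
    }
    for (int j = 0; j < n; j++) {
      double colMax = 0.0;
      for (int i = 0; i < m; i++) colMax = Math.max(colMax, Math.abs(a[i][j]));
      if (colMax > 0) for (int i = 0; i < m; i++) a[i][j] /= colMax;
    }
  }
}
```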
      • hasCostScaling

        boolean hasCostScaling()
        optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING];
        Returns:
        Whether the costScaling field is set.
      • getCostScaling

        GlopParameters.CostScalingAlgorithm getCostScaling()
        optional .operations_research.glop.GlopParameters.CostScalingAlgorithm cost_scaling = 60 [default = CONTAIN_ONE_COST_SCALING];
        Returns:
        The costScaling.
      • hasInitialBasis

        boolean hasInitialBasis()
         What heuristic is used to try to replace the fixed slack columns in the
         initial basis of the primal simplex.
         
        optional .operations_research.glop.GlopParameters.InitialBasisHeuristic initial_basis = 17 [default = TRIANGULAR];
        Returns:
        Whether the initialBasis field is set.
      • getInitialBasis

        GlopParameters.InitialBasisHeuristic getInitialBasis()
         What heuristic is used to try to replace the fixed slack columns in the
         initial basis of the primal simplex.
         
        optional .operations_research.glop.GlopParameters.InitialBasisHeuristic initial_basis = 17 [default = TRIANGULAR];
        Returns:
        The initialBasis.
      • hasUseTransposedMatrix

        boolean hasUseTransposedMatrix()
         Whether or not we keep a transposed version of the matrix A to speed up
         the pricing, at the cost of extra memory and the initial transposition
         computation.
         
        optional bool use_transposed_matrix = 18 [default = true];
        Returns:
        Whether the useTransposedMatrix field is set.
      • getUseTransposedMatrix

        boolean getUseTransposedMatrix()
         Whether or not we keep a transposed version of the matrix A to speed up
         the pricing, at the cost of extra memory and the initial transposition
         computation.
         
        optional bool use_transposed_matrix = 18 [default = true];
        Returns:
        The useTransposedMatrix.
      • hasBasisRefactorizationPeriod

        boolean hasBasisRefactorizationPeriod()
         Number of iterations between two basis refactorizations. Note that various
         conditions in the algorithm may trigger a refactorization before this
         period is reached. Set this to 0 if you want to refactorize at each step.
         
        optional int32 basis_refactorization_period = 19 [default = 64];
        Returns:
        Whether the basisRefactorizationPeriod field is set.
      • getBasisRefactorizationPeriod

        int getBasisRefactorizationPeriod()
         Number of iterations between two basis refactorizations. Note that various
         conditions in the algorithm may trigger a refactorization before this
         period is reached. Set this to 0 if you want to refactorize at each step.
         
        optional int32 basis_refactorization_period = 19 [default = 64];
        Returns:
        The basisRefactorizationPeriod.
      • hasDynamicallyAdjustRefactorizationPeriod

        boolean hasDynamicallyAdjustRefactorizationPeriod()
         If this is true, then basis_refactorization_period becomes a lower bound
         on the number of iterations between two refactorizations (provided there
         are no numerical accuracy issues). Depending on the estimated time to
         refactorize versus the extra time spent in each solve because of the LU
         updates, we try to balance the two.
         
        optional bool dynamically_adjust_refactorization_period = 63 [default = true];
        Returns:
        Whether the dynamicallyAdjustRefactorizationPeriod field is set.
      • getDynamicallyAdjustRefactorizationPeriod

        boolean getDynamicallyAdjustRefactorizationPeriod()
         If this is true, then basis_refactorization_period becomes a lower bound
         on the number of iterations between two refactorizations (provided there
         are no numerical accuracy issues). Depending on the estimated time to
         refactorize versus the extra time spent in each solve because of the LU
         updates, we try to balance the two.
         
        optional bool dynamically_adjust_refactorization_period = 63 [default = true];
        Returns:
        The dynamicallyAdjustRefactorizationPeriod.
      • hasSolveDualProblem

        boolean hasSolveDualProblem()
         Whether or not we solve the dual of the given problem.
         With the LET_SOLVER_DECIDE value, the algorithm decides which approach
         is probably the fastest depending on the problem dimensions (see
         dualizer_threshold).
         
        optional .operations_research.glop.GlopParameters.SolverBehavior solve_dual_problem = 20 [default = LET_SOLVER_DECIDE];
        Returns:
        Whether the solveDualProblem field is set.
      • getSolveDualProblem

        GlopParameters.SolverBehavior getSolveDualProblem()
         Whether or not we solve the dual of the given problem.
         With the LET_SOLVER_DECIDE value, the algorithm decides which approach
         is probably the fastest depending on the problem dimensions (see
         dualizer_threshold).
         
        optional .operations_research.glop.GlopParameters.SolverBehavior solve_dual_problem = 20 [default = LET_SOLVER_DECIDE];
        Returns:
        The solveDualProblem.
      • hasDualizerThreshold

        boolean hasDualizerThreshold()
         When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number
         of constraints of the problem is more than this threshold times the number
         of variables.
         
        optional double dualizer_threshold = 21 [default = 1.5];
        Returns:
        Whether the dualizerThreshold field is set.
      • getDualizerThreshold

        double getDualizerThreshold()
         When solve_dual_problem is LET_SOLVER_DECIDE, take the dual if the number
         of constraints of the problem is more than this threshold times the number
         of variables.
         
        optional double dualizer_threshold = 21 [default = 1.5];
        Returns:
        The dualizerThreshold.
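The decision rule stated above is a simple shape comparison; here is an illustrative sketch (hypothetical names, not Glop's code) using the default threshold of 1.5.

```java
// Hypothetical sketch of the dualizer decision: with
// solve_dual_problem = LET_SOLVER_DECIDE, take the dual when the problem
// has many more constraints than variables.
public final class Dualizer {
  static boolean shouldTakeDual(int numConstraints, int numVariables,
                                double dualizerThreshold) {
    return numConstraints > dualizerThreshold * numVariables;
  }
}
```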
      • hasSolutionFeasibilityTolerance

        boolean hasSolutionFeasibilityTolerance()
         When the problem status is OPTIMAL, we check the optimality using this
         relative tolerance and change the status to IMPRECISE if an issue is
         detected.
        
         The tolerance is "relative" in the sense that our thresholds are:
         - tolerance * max(1.0, abs(bound)) for crossing a given bound.
         - tolerance * max(1.0, abs(cost)) for an infeasible reduced cost.
         - tolerance for an infeasible dual value.
         
        optional double solution_feasibility_tolerance = 22 [default = 1e-06];
        Returns:
        Whether the solutionFeasibilityTolerance field is set.
      • getSolutionFeasibilityTolerance

        double getSolutionFeasibilityTolerance()
         When the problem status is OPTIMAL, we check the optimality using this
         relative tolerance and change the status to IMPRECISE if an issue is
         detected.
        
         The tolerance is "relative" in the sense that our thresholds are:
         - tolerance * max(1.0, abs(bound)) for crossing a given bound.
         - tolerance * max(1.0, abs(cost)) for an infeasible reduced cost.
         - tolerance for an infeasible dual value.
         
        optional double solution_feasibility_tolerance = 22 [default = 1e-06];
        Returns:
        The solutionFeasibilityTolerance.
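The "relative" threshold for bound crossing described above can be sketched as follows; this is an illustrative fragment with hypothetical names, not the internal check itself.

```java
// Hypothetical sketch of the relative feasibility check: a value may cross
// its bound by at most tolerance * max(1.0, |bound|) before the solution
// would be flagged IMPRECISE.
public final class FeasibilityCheck {
  static boolean crossesLowerBound(double value, double lowerBound,
                                   double tolerance) {
    return value < lowerBound - tolerance * Math.max(1.0, Math.abs(lowerBound));
  }
}
```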
      • hasProvideStrongOptimalGuarantee

        boolean hasProvideStrongOptimalGuarantee()
         If true, then when the solver returns a solution with an OPTIMAL status,
         we can guarantee that:
         - The primal variables are within their bounds.
         - The dual variables are within their bounds.
         - If we modify each component of the right-hand side a bit and each
           component of the objective function a bit, then the pair (primal values,
           dual values) is an EXACT optimal solution of the perturbed problem.
         - The modifications above are smaller than the associated tolerances as
           defined in the comment for solution_feasibility_tolerance (*).
        
         (*): This is the only place where the guarantee is not tight, since we
         compute the upper bounds with a scalar product of the primal/dual
         solution and the initial problem coefficients using only double precision.
        
         Note that whether or not this option is true, we still check the
         primal/dual infeasibility and objective gap. However, if it is false, we
         don't move the primal/dual values within their bounds and leave them
         untouched.
         
        optional bool provide_strong_optimal_guarantee = 24 [default = true];
        Returns:
        Whether the provideStrongOptimalGuarantee field is set.
      • getProvideStrongOptimalGuarantee

        boolean getProvideStrongOptimalGuarantee()
         If true, then when the solver returns a solution with an OPTIMAL status,
         we can guarantee that:
         - The primal variables are within their bounds.
         - The dual variables are within their bounds.
         - If we modify each component of the right-hand side a bit and each
           component of the objective function a bit, then the pair (primal values,
           dual values) is an EXACT optimal solution of the perturbed problem.
         - The modifications above are smaller than the associated tolerances as
           defined in the comment for solution_feasibility_tolerance (*).
        
         (*): This is the only place where the guarantee is not tight, since we
         compute the upper bounds with a scalar product of the primal/dual
         solution and the initial problem coefficients using only double precision.
        
         Note that whether or not this option is true, we still check the
         primal/dual infeasibility and objective gap. However, if it is false, we
         don't move the primal/dual values within their bounds and leave them
         untouched.
         
        optional bool provide_strong_optimal_guarantee = 24 [default = true];
        Returns:
        The provideStrongOptimalGuarantee.
      • hasChangeStatusToImprecise

        boolean hasChangeStatusToImprecise()
         If true, the internal API will change the return status to imprecise if the
         solution does not respect the internal tolerances.
         
        optional bool change_status_to_imprecise = 58 [default = true];
        Returns:
        Whether the changeStatusToImprecise field is set.
      • getChangeStatusToImprecise

        boolean getChangeStatusToImprecise()
         If true, the internal API will change the return status to imprecise if the
         solution does not respect the internal tolerances.
         
        optional bool change_status_to_imprecise = 58 [default = true];
        Returns:
        The changeStatusToImprecise.
      • hasMaxNumberOfReoptimizations

        boolean hasMaxNumberOfReoptimizations()
         When the solution of phase II is imprecise, we re-run phase II from that
         imprecise solution using the opposite algorithm (i.e., if the primal or
         dual simplex was used, we use the dual or primal simplex, respectively).
         We repeat such re-optimization until the solution is precise, or we hit
         this limit.
         
        optional double max_number_of_reoptimizations = 56 [default = 40];
        Returns:
        Whether the maxNumberOfReoptimizations field is set.
      • getMaxNumberOfReoptimizations

        double getMaxNumberOfReoptimizations()
         When the solution of phase II is imprecise, we re-run phase II from that
         imprecise solution using the opposite algorithm (i.e., if the primal or
         dual simplex was used, we use the dual or primal simplex, respectively).
         We repeat such re-optimization until the solution is precise, or we hit
         this limit.
         
        optional double max_number_of_reoptimizations = 56 [default = 40];
        Returns:
        The maxNumberOfReoptimizations.
      • hasLuFactorizationPivotThreshold

        boolean hasLuFactorizationPivotThreshold()
         Threshold for LU-factorization: for stability reasons, the magnitude of the
         chosen pivot at a given step is guaranteed to be greater than this
         threshold times the maximum magnitude of all the possible pivot choices in
         the same column. The value must be in [0,1].
         
        optional double lu_factorization_pivot_threshold = 25 [default = 0.01];
        Returns:
        Whether the luFactorizationPivotThreshold field is set.
      • getLuFactorizationPivotThreshold

        double getLuFactorizationPivotThreshold()
         Threshold for LU-factorization: for stability reasons, the magnitude of the
         chosen pivot at a given step is guaranteed to be greater than this
         threshold times the maximum magnitude of all the possible pivot choices in
         the same column. The value must be in [0,1].
         
        optional double lu_factorization_pivot_threshold = 25 [default = 0.01];
        Returns:
        The luFactorizationPivotThreshold.
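The admissibility rule above is classic threshold partial pivoting; the following is an illustrative sketch with hypothetical names, using the default threshold of 0.01.

```java
// Hypothetical sketch of threshold partial pivoting: a pivot candidate in
// a column is admissible if its magnitude is at least threshold times the
// largest magnitude in that column (threshold in [0, 1]).
public final class ThresholdPivoting {
  static boolean isAdmissiblePivot(double candidate, double[] column,
                                   double threshold) {
    double colMax = 0.0;
    for (double x : column) colMax = Math.max(colMax, Math.abs(x));
    return Math.abs(candidate) >= threshold * colMax;
  }
}
```

Raising the threshold toward 1.0 trades sparsity in the LU factors for better numerical stability, which is why the default is a small positive value.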
      • hasMaxTimeInSeconds

        boolean hasMaxTimeInSeconds()
         Maximum time allowed in seconds to solve a problem.
         
        optional double max_time_in_seconds = 26 [default = inf];
        Returns:
        Whether the maxTimeInSeconds field is set.
      • getMaxTimeInSeconds

        double getMaxTimeInSeconds()
         Maximum time allowed in seconds to solve a problem.
         
        optional double max_time_in_seconds = 26 [default = inf];
        Returns:
        The maxTimeInSeconds.
      • hasMaxDeterministicTime

        boolean hasMaxDeterministicTime()
         Maximum deterministic time allowed to solve a problem. The deterministic
         time is more or less correlated to the running time, and one unit should
         correspond to roughly one second (at least on a Xeon(R) CPU E5-1650 v2 @
         3.50GHz).
        
         TODO(user): Improve the correlation.
         
        optional double max_deterministic_time = 45 [default = inf];
        Returns:
        Whether the maxDeterministicTime field is set.
      • getMaxDeterministicTime

        double getMaxDeterministicTime()
         Maximum deterministic time allowed to solve a problem. The deterministic
         time is more or less correlated to the running time, and one unit should
         correspond to roughly one second (at least on a Xeon(R) CPU E5-1650 v2 @
         3.50GHz).
        
         TODO(user): Improve the correlation.
         
        optional double max_deterministic_time = 45 [default = inf];
        Returns:
        The maxDeterministicTime.
      • hasMaxNumberOfIterations

        boolean hasMaxNumberOfIterations()
         Maximum number of simplex iterations to solve a problem.
         A value of -1 means no limit.
         
        optional int64 max_number_of_iterations = 27 [default = -1];
        Returns:
        Whether the maxNumberOfIterations field is set.
      • getMaxNumberOfIterations

        long getMaxNumberOfIterations()
         Maximum number of simplex iterations to solve a problem.
         A value of -1 means no limit.
         
        optional int64 max_number_of_iterations = 27 [default = -1];
        Returns:
        The maxNumberOfIterations.
      • hasMarkowitzZlatevParameter

        boolean hasMarkowitzZlatevParameter()
         How many columns do we look at in the Markowitz pivoting rule to find
         a good pivot. See markowitz.h.
         
        optional int32 markowitz_zlatev_parameter = 29 [default = 3];
        Returns:
        Whether the markowitzZlatevParameter field is set.
      • getMarkowitzZlatevParameter

        int getMarkowitzZlatevParameter()
         How many columns do we look at in the Markowitz pivoting rule to find
         a good pivot. See markowitz.h.
         
        optional int32 markowitz_zlatev_parameter = 29 [default = 3];
        Returns:
        The markowitzZlatevParameter.
      • hasMarkowitzSingularityThreshold

        boolean hasMarkowitzSingularityThreshold()
         If a pivot magnitude is smaller than this during the Markowitz LU
         factorization, then the matrix is assumed to be singular. Note that
         this is an absolute threshold and is not relative to the other possible
         pivots on the same column (see lu_factorization_pivot_threshold).
         
        optional double markowitz_singularity_threshold = 30 [default = 1e-15];
        Returns:
        Whether the markowitzSingularityThreshold field is set.
      • getMarkowitzSingularityThreshold

        double getMarkowitzSingularityThreshold()
         If a pivot magnitude is smaller than this during the Markowitz LU
         factorization, then the matrix is assumed to be singular. Note that
         this is an absolute threshold and is not relative to the other possible
         pivots on the same column (see lu_factorization_pivot_threshold).
         
        optional double markowitz_singularity_threshold = 30 [default = 1e-15];
        Returns:
        The markowitzSingularityThreshold.
      • hasUseDualSimplex

        boolean hasUseDualSimplex()
         Whether or not we use the dual simplex algorithm instead of the primal.
         
        optional bool use_dual_simplex = 31 [default = false];
        Returns:
        Whether the useDualSimplex field is set.
      • getUseDualSimplex

        boolean getUseDualSimplex()
         Whether or not we use the dual simplex algorithm instead of the primal.
         
        optional bool use_dual_simplex = 31 [default = false];
        Returns:
        The useDualSimplex.
      • hasAllowSimplexAlgorithmChange

        boolean hasAllowSimplexAlgorithmChange()
         During incremental solves, let the solver decide whether it uses the
         primal or dual simplex algorithm depending on the current solution and
         on the new problem. Note that even if this is true, the value of
         use_dual_simplex still indicates the default algorithm that the solver
         will use.
         
        optional bool allow_simplex_algorithm_change = 32 [default = false];
        Returns:
        Whether the allowSimplexAlgorithmChange field is set.
      • getAllowSimplexAlgorithmChange

        boolean getAllowSimplexAlgorithmChange()
         During incremental solves, let the solver decide whether it uses the
         primal or dual simplex algorithm depending on the current solution and
         on the new problem. Note that even if this is true, the value of
         use_dual_simplex still indicates the default algorithm that the solver
         will use.
         
        optional bool allow_simplex_algorithm_change = 32 [default = false];
        Returns:
        The allowSimplexAlgorithmChange.
      • hasDevexWeightsResetPeriod

        boolean hasDevexWeightsResetPeriod()
         Devex weights will be reset to 1.0 after that number of updates.
         
        optional int32 devex_weights_reset_period = 33 [default = 150];
        Returns:
        Whether the devexWeightsResetPeriod field is set.
      • getDevexWeightsResetPeriod

        int getDevexWeightsResetPeriod()
         Devex weights will be reset to 1.0 after that number of updates.
         
        optional int32 devex_weights_reset_period = 33 [default = 150];
        Returns:
        The devexWeightsResetPeriod.
      • hasUsePreprocessing

        boolean hasUsePreprocessing()
         Whether or not we use advanced preprocessing techniques.
         
        optional bool use_preprocessing = 34 [default = true];
        Returns:
        Whether the usePreprocessing field is set.
      • getUsePreprocessing

        boolean getUsePreprocessing()
         Whether or not we use advanced preprocessing techniques.
         
        optional bool use_preprocessing = 34 [default = true];
        Returns:
        The usePreprocessing.
      • hasUseMiddleProductFormUpdate

        boolean hasUseMiddleProductFormUpdate()
         Whether or not to use the middle product form update rather than the
         standard eta LU update. The middle product form update should be a lot
         more efficient (close to the Forrest-Tomlin update, a bit slower but
         easier to implement). For more details, see:
         Qi Huangfu, J. A. Julian Hall, "Novel update techniques for the revised
         simplex method", 28 January 2013, Technical Report ERGO-13-0001
         http://www.maths.ed.ac.uk/hall/HuHa12/ERGO-13-001.pdf
         
        optional bool use_middle_product_form_update = 35 [default = true];
        Returns:
        Whether the useMiddleProductFormUpdate field is set.
      • getUseMiddleProductFormUpdate

        boolean getUseMiddleProductFormUpdate()
         Whether or not to use the middle product form update rather than the
         standard eta LU update. The middle product form update should be a lot
         more efficient (close to the Forrest-Tomlin update, a bit slower but
         easier to implement). For more details, see:
         Qi Huangfu, J. A. Julian Hall, "Novel update techniques for the revised
         simplex method", 28 January 2013, Technical Report ERGO-13-0001
         http://www.maths.ed.ac.uk/hall/HuHa12/ERGO-13-001.pdf
         
        optional bool use_middle_product_form_update = 35 [default = true];
        Returns:
        The useMiddleProductFormUpdate.
      • hasInitializeDevexWithColumnNorms

        boolean hasInitializeDevexWithColumnNorms()
         Whether we initialize devex weights to 1.0 or to the norms of the matrix
         columns.
         
        optional bool initialize_devex_with_column_norms = 36 [default = true];
        Returns:
        Whether the initializeDevexWithColumnNorms field is set.
      • getInitializeDevexWithColumnNorms

        boolean getInitializeDevexWithColumnNorms()
         Whether we initialize devex weights to 1.0 or to the norms of the matrix
         columns.
         
        optional bool initialize_devex_with_column_norms = 36 [default = true];
        Returns:
        The initializeDevexWithColumnNorms.
      • hasExploitSingletonColumnInInitialBasis

        boolean hasExploitSingletonColumnInInitialBasis()
         Whether or not we exploit the singleton columns already present in the
         problem when we create the initial basis.
         
        optional bool exploit_singleton_column_in_initial_basis = 37 [default = true];
        Returns:
        Whether the exploitSingletonColumnInInitialBasis field is set.
      • getExploitSingletonColumnInInitialBasis

        boolean getExploitSingletonColumnInInitialBasis()
         Whether or not we exploit the singleton columns already present in the
         problem when we create the initial basis.
         
        optional bool exploit_singleton_column_in_initial_basis = 37 [default = true];
        Returns:
        The exploitSingletonColumnInInitialBasis.
      • hasDualSmallPivotThreshold

        boolean hasDualSmallPivotThreshold()
         Like small_pivot_threshold but for the dual simplex. This is needed because
         the dual algorithm does not interpret this value in the same way.
         TODO(user): Clean this up and use the same small pivot detection.
         
        optional double dual_small_pivot_threshold = 38 [default = 0.0001];
        Returns:
        Whether the dualSmallPivotThreshold field is set.
      • getDualSmallPivotThreshold

        double getDualSmallPivotThreshold()
         Like small_pivot_threshold but for the dual simplex. This is needed because
         the dual algorithm does not interpret this value in the same way.
         TODO(user): Clean this up and use the same small pivot detection.
         
        optional double dual_small_pivot_threshold = 38 [default = 0.0001];
        Returns:
        The dualSmallPivotThreshold.
      • hasPreprocessorZeroTolerance

        boolean hasPreprocessorZeroTolerance()
         A floating point tolerance used by the preprocessors. This is used for
         things like detecting if two columns/rows are proportional or if an
         interval is empty.
        
         Note that the preprocessors also use solution_feasibility_tolerance() to
         detect if a problem is infeasible.
         
        optional double preprocessor_zero_tolerance = 39 [default = 1e-09];
        Returns:
        Whether the preprocessorZeroTolerance field is set.
      • getPreprocessorZeroTolerance

        double getPreprocessorZeroTolerance()
         A floating point tolerance used by the preprocessors. This is used for
         things like detecting if two columns/rows are proportional or if an
         interval is empty.
        
         Note that the preprocessors also use solution_feasibility_tolerance() to
         detect if a problem is infeasible.
         
        optional double preprocessor_zero_tolerance = 39 [default = 1e-09];
        Returns:
        The preprocessorZeroTolerance.
      • hasObjectiveLowerLimit

        boolean hasObjectiveLowerLimit()
         The solver will stop as soon as it has proven that the objective is
         smaller than objective_lower_limit or greater than objective_upper_limit.
         Note that, depending on the simplex algorithm (primal or dual) and the
         optimization direction, only one of these bounds is used at a time.
        
         Important: The solver does not add any tolerances to these values, and as
         soon as the objective (as computed by the solver, so with some imprecision)
         crosses one of these bounds (strictly), the search will stop. It is up to
         the client to add any tolerance if needed.
         
        optional double objective_lower_limit = 40 [default = -inf];
        Returns:
        Whether the objectiveLowerLimit field is set.
      • getObjectiveLowerLimit

        double getObjectiveLowerLimit()
         The solver will stop as soon as it has proven that the objective is
         smaller than objective_lower_limit or greater than objective_upper_limit.
         Note that, depending on the simplex algorithm (primal or dual) and the
         optimization direction, only one of these bounds is used at a time.
        
         Important: The solver does not add any tolerances to these values, and as
         soon as the objective (as computed by the solver, so with some imprecision)
         crosses one of these bounds (strictly), the search will stop. It is up to
         the client to add any tolerance if needed.
         
        optional double objective_lower_limit = 40 [default = -inf];
        Returns:
        The objectiveLowerLimit.
      • hasObjectiveUpperLimit

        boolean hasObjectiveUpperLimit()
        optional double objective_upper_limit = 41 [default = inf];
        Returns:
        Whether the objectiveUpperLimit field is set.
      • getObjectiveUpperLimit

        double getObjectiveUpperLimit()
        optional double objective_upper_limit = 41 [default = inf];
        Returns:
        The objectiveUpperLimit.
      • hasDegenerateMinistepFactor

        boolean hasDegenerateMinistepFactor()
         During a degenerate iteration, the more conservative approach is to do a
         step of length zero (while shifting the bound of the leaving variable).
         That is, the variable values are unchanged for the primal simplex, or the
         reduced costs are unchanged for the dual simplex. However, instead of
         doing a step of length zero, it seems to be better on degenerate problems
         to do a small positive step. This is what is recommended in the EXPAND
         procedure described in:
         P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright. "A practical
         anti-cycling procedure for linearly constrained optimization".
         Mathematical Programming, 45:437–474, 1989.
        
         Here, during a degenerate iteration we do a small positive step of this
         factor times the primal (resp. dual) tolerance. In the primal simplex, this
         may effectively push variable values (very slightly) further out of their
         bounds (resp. reduced costs for the dual simplex).
        
         Setting this to zero reverts to the more conservative approach of a zero
         step during degenerate iterations.
         
        optional double degenerate_ministep_factor = 42 [default = 0.01];
        Returns:
        Whether the degenerateMinistepFactor field is set.
      • getDegenerateMinistepFactor

        double getDegenerateMinistepFactor()
         During a degenerate iteration, the more conservative approach is to do a
         step of length zero (while shifting the bound of the leaving variable).
         That is, the variable values are unchanged for the primal simplex, or the
         reduced costs are unchanged for the dual simplex. However, instead of
         doing a step of length zero, it seems to be better on degenerate problems
         to do a small positive step. This is what is recommended in the EXPAND
         procedure described in:
         P. E. Gill, W. Murray, M. A. Saunders, and M. H. Wright. "A practical
         anti-cycling procedure for linearly constrained optimization".
         Mathematical Programming, 45:437–474, 1989.
        
         Here, during a degenerate iteration we do a small positive step of this
         factor times the primal (resp. dual) tolerance. In the primal simplex, this
         may effectively push variable values (very slightly) further out of their
         bounds (resp. reduced costs for the dual simplex).
        
         Setting this to zero reverts to the more conservative approach of a zero
         step during degenerate iterations.
         
        optional double degenerate_ministep_factor = 42 [default = 0.01];
        Returns:
        The degenerateMinistepFactor.
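The ministep rule above can be sketched in one line; this is an illustrative fragment with hypothetical names, not Glop's ratio-test code. A degenerate iteration would otherwise produce a zero-length step, so the step is floored at the factor times the relevant tolerance.

```java
// Hypothetical sketch of the EXPAND-style degenerate ministep: when the
// ratio test yields a zero (degenerate) step, take a small positive step
// of ministepFactor * tolerance instead, slightly overshooting the bound.
public final class DegenerateStep {
  static double stepLength(double ratioTestStep, double ministepFactor,
                           double tolerance) {
    return Math.max(ratioTestStep, ministepFactor * tolerance);
  }
}
```

Setting `ministepFactor` to zero recovers the conservative zero-step behavior described above.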
      • hasRandomSeed

        boolean hasRandomSeed()
         At the beginning of each solve, the random number generator used in some
         part of the solver is reinitialized to this seed. If you change the random
         seed, the solver may make different choices during the solving process.
         Note that this may lead to a different solution, for example a different
         optimal basis.
        
         For some problems, the running time may vary a lot depending on small
         changes in the solving algorithm. Running the solver with different seeds
         makes benchmarks more robust when evaluating new features.
        
         Also note that the solver is fully deterministic: two runs of the same
         binary, on the same machine, on the exact same data and with the same
         parameters will go through the exact same iterations. If they hit a time
         limit, they might of course yield different results because one will have
         advanced farther than the other.
         
        optional int32 random_seed = 43 [default = 1];
        Returns:
        Whether the randomSeed field is set.
      • getRandomSeed

        int getRandomSeed()
         At the beginning of each solve, the random number generator used in some
         part of the solver is reinitialized to this seed. If you change the random
         seed, the solver may make different choices during the solving process.
         Note that this may lead to a different solution, for example a different
         optimal basis.
        
         For some problems, the running time may vary a lot depending on small
         changes in the solving algorithm. Running the solver with different seeds
         makes benchmarks more robust when evaluating new features.
        
         Also note that the solver is fully deterministic: two runs of the same
         binary, on the same machine, on the exact same data and with the same
         parameters will go through the exact same iterations. If they hit a time
         limit, they might of course yield different results because one will have
         advanced farther than the other.
         
        optional int32 random_seed = 43 [default = 1];
        Returns:
        The randomSeed.
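        A sketch of a benchmarking configuration (field name and default are from
        the proto definition above; the seed value itself is arbitrary):

        ```textproto
        # GlopParameters textproto sketch (field 43, default 1).
        # Since run time can vary a lot with the seed, benchmarking a new
        # feature across several seeds gives more robust timings.
        random_seed: 42
        ```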
      • hasNumOmpThreads

        boolean hasNumOmpThreads()
         Number of threads in the OMP parallel sections. If left to 1, the code will
         not create any OMP threads and will remain single-threaded.
         
        optional int32 num_omp_threads = 44 [default = 1];
        Returns:
        Whether the numOmpThreads field is set.
      • getNumOmpThreads

        int getNumOmpThreads()
         Number of threads in the OMP parallel sections. If left to 1, the code will
         not create any OMP threads and will remain single-threaded.
         
        optional int32 num_omp_threads = 44 [default = 1];
        Returns:
        The numOmpThreads.
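        A sketch of enabling OMP parallelism (field name and default are from the
        proto definition above; the thread count is an arbitrary illustration):

        ```textproto
        # GlopParameters textproto sketch (field 44, default 1).
        # Values above 1 enable the OMP parallel sections; leaving it at 1
        # keeps the solver single-threaded.
        num_omp_threads: 4
        ```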
      • hasPerturbCostsInDualSimplex

        boolean hasPerturbCostsInDualSimplex()
         When this is true, the costs are randomly perturbed before the dual
         simplex is even started. This has been shown to improve dual simplex
         performance. For a good reference, see Huangfu Q (2013), "High performance
         simplex solver", Ph.D. dissertation, University of Edinburgh.
         
        optional bool perturb_costs_in_dual_simplex = 53 [default = false];
        Returns:
        Whether the perturbCostsInDualSimplex field is set.
      • getPerturbCostsInDualSimplex

        boolean getPerturbCostsInDualSimplex()
         When this is true, the costs are randomly perturbed before the dual
         simplex is even started. This has been shown to improve dual simplex
         performance. For a good reference, see Huangfu Q (2013), "High performance
         simplex solver", Ph.D. dissertation, University of Edinburgh.
         
        optional bool perturb_costs_in_dual_simplex = 53 [default = false];
        Returns:
        The perturbCostsInDualSimplex.
      • hasUseDedicatedDualFeasibilityAlgorithm

        boolean hasUseDedicatedDualFeasibilityAlgorithm()
         We have two possible dual phase I algorithms. Both work on an LP that
         minimizes the sum of dual infeasibilities. One uses dedicated code (when
         this parameter is true); the other uses exactly the same code as the dual
         phase II, but on an auxiliary problem in which the variable bounds of the
         original problem are changed.
        
         TODO(user): For now we have both, but ideally the non-dedicated version
         will win since it is a lot less code to maintain.
         
        optional bool use_dedicated_dual_feasibility_algorithm = 62 [default = true];
        Returns:
        Whether the useDedicatedDualFeasibilityAlgorithm field is set.
      • getUseDedicatedDualFeasibilityAlgorithm

        boolean getUseDedicatedDualFeasibilityAlgorithm()
         We have two possible dual phase I algorithms. Both work on an LP that
         minimizes the sum of dual infeasibilities. One uses dedicated code (when
         this parameter is true); the other uses exactly the same code as the dual
         phase II, but on an auxiliary problem in which the variable bounds of the
         original problem are changed.
        
         TODO(user): For now we have both, but ideally the non-dedicated version
         will win since it is a lot less code to maintain.
         
        optional bool use_dedicated_dual_feasibility_algorithm = 62 [default = true];
        Returns:
        The useDedicatedDualFeasibilityAlgorithm.
      • hasRelativeCostPerturbation

        boolean hasRelativeCostPerturbation()
         The magnitude of the cost perturbation is given by
         RandomIn(1.0, 2.0) * (
             relative_cost_perturbation * cost
           + relative_max_cost_perturbation * max_cost);
         
        optional double relative_cost_perturbation = 54 [default = 1e-05];
        Returns:
        Whether the relativeCostPerturbation field is set.
      • getRelativeCostPerturbation

        double getRelativeCostPerturbation()
         The magnitude of the cost perturbation is given by
         RandomIn(1.0, 2.0) * (
             relative_cost_perturbation * cost
           + relative_max_cost_perturbation * max_cost);
         
        optional double relative_cost_perturbation = 54 [default = 1e-05];
        Returns:
        The relativeCostPerturbation.
      • hasRelativeMaxCostPerturbation

        boolean hasRelativeMaxCostPerturbation()
        optional double relative_max_cost_perturbation = 55 [default = 1e-07];
        Returns:
        Whether the relativeMaxCostPerturbation field is set.
      • getRelativeMaxCostPerturbation

        double getRelativeMaxCostPerturbation()
        optional double relative_max_cost_perturbation = 55 [default = 1e-07];
        Returns:
        The relativeMaxCostPerturbation.
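        Putting the three perturbation fields together, a sketch with the default
        magnitudes (all field names, numbers, and defaults are from the proto
        definitions above; the sample cost values in the comment are arbitrary):

        ```textproto
        # GlopParameters textproto sketch (fields 53, 54, 55).
        # The perturbation magnitude for a cost entry is
        #   RandomIn(1.0, 2.0) * (relative_cost_perturbation * cost
        #                         + relative_max_cost_perturbation * max_cost)
        # e.g. with cost = 2.0, max_cost = 10.0, and the defaults below,
        # the magnitude lies in [2.1e-5, 4.2e-5].
        perturb_costs_in_dual_simplex: true
        relative_cost_perturbation: 1e-05
        relative_max_cost_perturbation: 1e-07
        ```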
      • hasInitialConditionNumberThreshold

        boolean hasInitialConditionNumberThreshold()
         If our upper bound on the condition number of the initial basis (from our
         heuristic or a warm start) is above this threshold, we revert to an all-
         slack basis.
         
        optional double initial_condition_number_threshold = 59 [default = 1e+50];
        Returns:
        Whether the initialConditionNumberThreshold field is set.
      • getInitialConditionNumberThreshold

        double getInitialConditionNumberThreshold()
         If our upper bound on the condition number of the initial basis (from our
         heuristic or a warm start) is above this threshold, we revert to an all-
         slack basis.
         
        optional double initial_condition_number_threshold = 59 [default = 1e+50];
        Returns:
        The initialConditionNumberThreshold.
      • hasLogSearchProgress

        boolean hasLogSearchProgress()
         If true, logs the progress of a solve to LOG(INFO). Note that the same
         messages can also be turned on by displaying logs at level 1 for the
         relevant files.
         
        optional bool log_search_progress = 61 [default = false];
        Returns:
        Whether the logSearchProgress field is set.
      • getLogSearchProgress

        boolean getLogSearchProgress()
         If true, logs the progress of a solve to LOG(INFO). Note that the same
         messages can also be turned on by displaying logs at level 1 for the
         relevant files.
         
        optional bool log_search_progress = 61 [default = false];
        Returns:
        The logSearchProgress.
      • hasLogToStdout

        boolean hasLogToStdout()
         If true, logs will be displayed to stdout instead of using Google log info.
         
        optional bool log_to_stdout = 66 [default = true];
        Returns:
        Whether the logToStdout field is set.
      • getLogToStdout

        boolean getLogToStdout()
         If true, logs will be displayed to stdout instead of using Google log info.
         
        optional bool log_to_stdout = 66 [default = true];
        Returns:
        The logToStdout.
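        The two logging fields are often set together; a sketch (field names,
        numbers, and defaults are from the proto definitions above):

        ```textproto
        # GlopParameters textproto sketch (fields 61 and 66).
        # Log the progress of the solve, and send it to stdout instead of
        # LOG(INFO).
        log_search_progress: true
        log_to_stdout: true
        ```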
      • hasCrossoverBoundSnappingDistance

        boolean hasCrossoverBoundSnappingDistance()
         If the starting basis contains FREE variables with bounds, we will move
         any such variable to its closest bound if the distance is smaller than
         this parameter.
        
         The starting statuses can contain FREE variables with bounds if a user
         sets them this way externally. Also, any variable with an initial BASIC
         status that was not kept in the initial basis is marked as FREE before this
         step is applied.
        
         Note that by default a FREE variable is assumed to be zero unless a
         starting value was specified via SetStartingVariableValuesForNextSolve().
        
         Note that, at the end of the solve, some of these FREE variables with
         bounds and an interior-point value might still be left in the final
         solution. Enable push_to_vertex to clean these up.
         
        optional double crossover_bound_snapping_distance = 64 [default = inf];
        Returns:
        Whether the crossoverBoundSnappingDistance field is set.
      • getCrossoverBoundSnappingDistance

        double getCrossoverBoundSnappingDistance()
         If the starting basis contains FREE variables with bounds, we will move
         any such variable to its closest bound if the distance is smaller than
         this parameter.
        
         The starting statuses can contain FREE variables with bounds if a user
         sets them this way externally. Also, any variable with an initial BASIC
         status that was not kept in the initial basis is marked as FREE before this
         step is applied.
        
         Note that by default a FREE variable is assumed to be zero unless a
         starting value was specified via SetStartingVariableValuesForNextSolve().
        
         Note that, at the end of the solve, some of these FREE variables with
         bounds and an interior-point value might still be left in the final
         solution. Enable push_to_vertex to clean these up.
         
        optional double crossover_bound_snapping_distance = 64 [default = inf];
        Returns:
        The crossoverBoundSnappingDistance.
      • hasPushToVertex

        boolean hasPushToVertex()
         If the optimization phase finishes with super-basic variables (i.e.,
         variables that either 1) have bounds but are FREE in the basis, or 2) have
         no bounds and are FREE in the basis at a nonzero value), then run a "push"
         phase to push these variables to a bound, obtaining a vertex solution. Note
         that this situation can only happen if a starting value was specified via
         SetStartingVariableValuesForNextSolve().
         
        optional bool push_to_vertex = 65 [default = true];
        Returns:
        Whether the pushToVertex field is set.
      • getPushToVertex

        boolean getPushToVertex()
         If the optimization phase finishes with super-basic variables (i.e.,
         variables that either 1) have bounds but are FREE in the basis, or 2) have
         no bounds and are FREE in the basis at a nonzero value), then run a "push"
         phase to push these variables to a bound, obtaining a vertex solution. Note
         that this situation can only happen if a starting value was specified via
         SetStartingVariableValuesForNextSolve().
         
        optional bool push_to_vertex = 65 [default = true];
        Returns:
        The pushToVertex.
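        The crossover snapping distance and push_to_vertex work together to clean
        up FREE variables; a sketch (field names, numbers, and the push_to_vertex
        default are from the proto definitions above; the finite snapping distance
        is an arbitrary illustration, since the field defaults to inf):

        ```textproto
        # GlopParameters textproto sketch (fields 64 and 65).
        # Snap FREE variables with bounds to their closest bound when they
        # start within 1e-6 of it, and push any remaining super-basic
        # variables to a bound at the end of the solve.
        crossover_bound_snapping_distance: 1e-06
        push_to_vertex: true
        ```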
      • hasUseImpliedFreePreprocessor

        boolean hasUseImpliedFreePreprocessor()
         If presolve runs, include the pass that detects implied free variables.
         
        optional bool use_implied_free_preprocessor = 67 [default = true];
        Returns:
        Whether the useImpliedFreePreprocessor field is set.
      • getUseImpliedFreePreprocessor

        boolean getUseImpliedFreePreprocessor()
         If presolve runs, include the pass that detects implied free variables.
         
        optional bool use_implied_free_preprocessor = 67 [default = true];
        Returns:
        The useImpliedFreePreprocessor.
      • hasMaxValidMagnitude

        boolean hasMaxValidMagnitude()
         Any finite value in the input LP must be below this threshold; otherwise
         the model will be reported invalid. This is needed to avoid floating point
         overflow when evaluating bounds * coeff, for instance. In practice, users
         shouldn't use extremely large values in an LP. With the default threshold,
         even evaluating a large constraint with variables at their bounds shouldn't
         cause any overflow.
         
        optional double max_valid_magnitude = 199 [default = 1e+30];
        Returns:
        Whether the maxValidMagnitude field is set.
      • getMaxValidMagnitude

        double getMaxValidMagnitude()
         Any finite value in the input LP must be below this threshold; otherwise
         the model will be reported invalid. This is needed to avoid floating point
         overflow when evaluating bounds * coeff, for instance. In practice, users
         shouldn't use extremely large values in an LP. With the default threshold,
         even evaluating a large constraint with variables at their bounds shouldn't
         cause any overflow.
         
        optional double max_valid_magnitude = 199 [default = 1e+30];
        Returns:
        The maxValidMagnitude.
      • hasDualPricePrioritizeNorm

        boolean hasDualPricePrioritizeNorm()
         On some problems, like stp3d or pds-100, this makes a huge difference in
         speed and number of iterations of the dual simplex.
         
        optional bool dual_price_prioritize_norm = 69 [default = false];
        Returns:
        Whether the dualPricePrioritizeNorm field is set.
      • getDualPricePrioritizeNorm

        boolean getDualPricePrioritizeNorm()
         On some problems, like stp3d or pds-100, this makes a huge difference in
         speed and number of iterations of the dual simplex.
         
        optional bool dual_price_prioritize_norm = 69 [default = false];
        Returns:
        The dualPricePrioritizeNorm.