Performance & Quality
Your optimization engine should produce stable, optimal or near-optimal results that give you confidence that you’re receiving the best possible solution.
The first step toward this goal is the gap concept that linear and mixed-integer programming use to estimate the quality of a given result. Put another way, the gap measures how close the current objective value is to the optimal value. Here’s how it works:
- The solver relaxes the constraints on a set of equations (for example, by dropping integrality requirements) and solves the relaxed problem. This yields the best possible, but typically infeasible, solution, which serves as a bound on the optimum.
- The solver then restores the constraints and continues solving. It measures the gap between the current objective and the bound found in the previous step; when this difference falls below the defined gap tolerance, the optimization stops.
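The stopping criterion above can be sketched in a few lines. This is a minimal illustration, not any particular solver's API; the gap formula shown (relative to the incumbent) is one common convention, and exact definitions vary between solvers.

```python
def mip_gap(incumbent: float, bound: float) -> float:
    """Relative gap between the incumbent (best feasible objective found
    so far) and the relaxation bound. Uses |incumbent - bound| / |incumbent|,
    a common convention; solvers differ in the exact definition."""
    if incumbent == 0:
        return float("inf")
    return abs(incumbent - bound) / abs(incumbent)

# Hypothetical values for a minimization problem: the relaxed problem
# gives a bound of 95.0, the best feasible solution found costs 100.0.
gap = mip_gap(incumbent=100.0, bound=95.0)  # 0.05

GAP_TOLERANCE = 0.05
if gap <= GAP_TOLERANCE:
    print("stop: solution is within the gap tolerance")
```

Because the bound comes from the relaxed problem, the true optimum is guaranteed to lie between the bound and the incumbent, which is what makes this a quality certificate.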
Unfortunately, combinatorial optimization doesn’t allow such an approach. Since there is no set of linear equations we could solve to estimate the optimum (in other words, to guarantee the quality of our results), our only recourse is to test the optimizer on a mix of datasets with known results. Those results give us an estimate of the gap tolerance.
How we test our engine
We continually test our optimization engine to ensure it can provide rapid, optimal or near-optimal results for real-world problems.
- Our tests use diverse public and private datasets which include problems of different sizes. This ensures the optimizer can solve them with similar quality and within a specific timeframe.
- We use the standard optimization engine for all our testing. We do not optimize or customize it for a specific dataset.
We measure the optimizer’s performance by determining the time it takes to come within a 3–5% tolerance of the best-known results. For most datasets, it does this in under 10 minutes.
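A benchmark of this kind can be sketched as follows. This is a hypothetical harness, not our actual test suite: `run_solver_step` stands in for one improvement iteration of some solver and returns its current best objective (minimization assumed), and the names and defaults are illustrative only.

```python
import time

def time_to_tolerance(run_solver_step, best_known: float,
                      tolerance: float = 0.05, time_limit: float = 600.0):
    """Repeatedly improve a solution and report how long it takes to get
    within `tolerance` of the best-known objective value.

    run_solver_step -- hypothetical callable returning the current best
                       objective after one more round of optimization.
    Returns (elapsed_seconds, objective) on success, (None, objective)
    if the time limit expires first.
    """
    start = time.monotonic()
    objective = run_solver_step()
    while True:
        gap = (objective - best_known) / best_known
        if gap <= tolerance:
            return time.monotonic() - start, objective
        if time.monotonic() - start >= time_limit:
            return None, objective
        objective = run_solver_step()

# Simulated solver that improves over three iterations against a
# best-known objective of 100.0: gaps of 30%, 15%, then 4%.
values = iter([130.0, 115.0, 104.0])
elapsed, obj = time_to_tolerance(lambda: next(values), best_known=100.0)
print(obj)  # 104.0, the first objective within the 5% tolerance
```

Running the same harness unchanged across all datasets mirrors the point above: the engine is not tuned per dataset, so the measured times reflect its out-of-the-box behavior.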