How can I debug models when things go wrong?

Dymos allows the user to build complex optimization models that include dynamic behavior. Managing that complexity can be a challenge as models grow larger. In this section we'll talk about some tools that can help when things are not going as expected.

Testing

If you look at the dymos source code, a considerable portion of it is used for testing. We strongly recommend that you develop tests of your models, from testing that the most basic components work as expected, to testing integrated systems with nonlinear solvers. In most cases these tests consist of these steps:

  1. Instantiate an OpenMDAO Problem
  2. Add your model.
  3. Setup the problem.
  4. Set the model inputs.
  5. Call run_model()
  6. Check the outputs against known values.
  7. Run problem.check_partials() to verify that the analytic partials are reasonably close to finite-difference or complex-step results.

For example, the tests for the KappaComp component in the minimum time-to-climb model look like this:

import unittest

import numpy as np

import openmdao.api as om
from openmdao.utils.assert_utils import assert_near_equal
from dymos.utils.testing_utils import assert_check_partials
from dymos.examples.min_time_climb.aero.kappa_comp import KappaComp


class TestKappaComp(unittest.TestCase):

    def test_value(self):
        n = 500
        p = om.Problem()
        p.model.add_subsystem(name='kappa_comp', subsys=KappaComp(num_nodes=n))
        p.setup()
        p.set_val('kappa_comp.mach', np.linspace(0, 1.8, n))
        p.run_model()

        M = p.get_val('kappa_comp.mach')
        kappa = p.get_val('kappa_comp.kappa')

        idxs_0 = np.where(M <= 1.15)[0]
        idxs_1 = np.where(M > 1.15)[0]

        kappa_analytic_0 = 0.54 + 0.15 * (1.0 + np.tanh((M[idxs_0] - 0.9)/0.06))
        kappa_analytic_1 = 0.54 + 0.15 * (1.0 + np.tanh(0.25/0.06)) + 0.14 * (M[idxs_1] - 1.15)

        assert_near_equal(kappa[idxs_0], kappa_analytic_0)
        assert_near_equal(kappa[idxs_1], kappa_analytic_1)

    def test_partials(self):
        n = 10
        p = om.Problem(model=om.Group())
        p.model.add_subsystem(name='kappa_comp', subsys=KappaComp(num_nodes=n))
        p.setup()
        p.set_val('kappa_comp.mach', np.linspace(0, 1.8, n))
        p.run_model()
        cpd = p.check_partials(compact_print=False, out_stream=None)
        assert_check_partials(cpd, atol=1.0E-5, rtol=1.0E-4)


if __name__ == '__main__':  # pragma: no cover
    unittest.main()

This file contains two separate tests: one that checks the outputs against known analytic values, and one that checks the analytic partials against finite-difference results. The assert_check_partials function (imported here from dymos.utils.testing_utils) can be used to programmatically verify accurate partials in automated testing.
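The "known values" in test_value come from the piecewise expression for kappa. As a quick numpy-only sanity check (independent of OpenMDAO), the two branches agree at the Mach 1.15 breakpoint, which is why the test can split the nodes with np.where and compare each branch separately:

```python
import numpy as np


def kappa_analytic(M):
    """Piecewise analytic expression for kappa, matching the test above."""
    M = np.asarray(M, dtype=float)
    # Branch used for M <= 1.15: a tanh blend centered at Mach 0.9.
    low = 0.54 + 0.15 * (1.0 + np.tanh((M - 0.9) / 0.06))
    # Branch used for M > 1.15: linear extrapolation from the breakpoint value.
    high = 0.54 + 0.15 * (1.0 + np.tanh(0.25 / 0.06)) + 0.14 * (M - 1.15)
    return np.where(M <= 1.15, low, high)


# The two branches meet at the breakpoint, so kappa is continuous there.
assert abs(kappa_analytic(1.15) - kappa_analytic(1.15 + 1.0E-10)) < 1.0E-8
```

Checking a property like continuity by hand is a useful complement to the component test: it confirms that the reference values themselves are internally consistent before they're used as a truth source.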

The N2 Viewer

When a complex model doesn't output the correct value and the compute method has been double-checked, an incorrect or missing connection is frequently to blame. The go-to tool for checking whether a model is correctly connected is OpenMDAO's N-squared (N2) viewer. This tool shows how the model's systems are connected and flags inputs that aren't connected to an output as expected.

It can be invoked from a run script using

om.n2(problem.model)

or from the command line using

openmdao n2 file.py

where file.py is the file that contains an instantiated OpenMDAO Problem.

An example of an N2 of a Dymos model

Coming soon

Using debug_print

Coming soon