How can I debug models when things go wrong?¶
Dymos allows users to build complex optimization models that include dynamic behavior. Managing that complexity can be a challenge as models grow larger. In this section we'll discuss some tools that can help when things are *not* going as expected.
If you look at the dymos source code, a considerable portion of it is devoted to testing. We strongly recommend that you develop tests of your models, from testing that the most basic components work as expected to testing integrated systems with nonlinear solvers. In most cases these tests consist of the following steps:
- Instantiate an OpenMDAO Problem.
- Add your model.
- Set up the problem.
- Set the model inputs.
- Run the model.
- Check the outputs against known values.
In addition, use `problem.check_partials()` to verify that the analytic partials are reasonably close to finite-difference or complex-step results.
For example, the tests for the `kappa_comp` in the minimum time-to-climb model consist of two separate tests: one that checks the computed results against known values, and one that checks the partials against finite differencing.
OpenMDAO includes a useful `assert_check_partials` function that can be used to programmatically verify accurate partials in automated testing.
The N2 Viewer¶
When a complex model doesn't output the correct value and the compute method has been double-checked, an incorrect or missing connection is frequently to blame. The go-to tool for checking whether a model is correctly connected is OpenMDAO's N-squared (N2) viewer. This tool shows how models are connected and lets the user know when inputs aren't connected to an output as expected.
It can be invoked from a run script using `om.n2(problem)`, or from the command line using
openmdao n2 file.py
where `file.py` is the file that contains an instantiated OpenMDAO Problem.
An example of an N2 of a Dymos model¶