Contents

# 1. Introduction

PyDSTool supports the optional specification of bounds on variables and parameters. This is most useful when performing parameter and model estimation, but also provides a means to check the consistency of iterative processes as they run.

Bounds on variables are specified to a Generator (see Generators) using the `xdomain` key, while those on parameters use the `pdomain` key. Computed Trajectories or raw numeric intervals are also defined on a specific domain of validity. These domains may be semi- or bi-infinite (through the use of the IEEE 754 `Inf` special value).
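To make the idea concrete, here is a minimal Python sketch of variable and parameter domains, including a semi-infinite one built from the IEEE 754 infinity. The dictionary layout and the `in_domain` helper are hypothetical illustrations, not PyDSTool's actual `xdomain`/`pdomain` API.

```python
# Illustrative sketch (hypothetical names, not PyDSTool's API):
# domains as (lo, hi) pairs, with float('inf') for unbounded ends.
Inf = float('inf')

xdomain = {'v': (-80.0, 40.0)}   # bounded variable domain
pdomain = {'g': (0.0, Inf)}      # semi-infinite parameter domain

def in_domain(domains, name, value):
    """Simple closed-interval membership test for a named quantity."""
    lo, hi = domains[name]
    return lo <= value <= hi

print(in_domain(pdomain, 'g', 1e9))    # any non-negative g is valid
print(in_domain(xdomain, 'v', 100.0))  # 100 mV is out of bounds
```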

# 2. The numerical interval class

In `Interval.py` a class for representing numeric intervals is specified. This class is fundamental to PyDSTool in the specification of closed sets of real numbers or integers. (An extension to complex numbers is not presently planned, but would be straightforward.)

## 2.1. Rounding error at endpoints

Interval checking at endpoints can suffer from numerical rounding errors that incorrectly indicate that a value is inside or outside a given interval. This commonly occurs in floating-point arithmetic involving numbers of greatly different magnitude. In practice the problem arises particularly with hybrid systems, when the determination of the appropriate trajectory segment relies on a subtraction whose result is then tested for interval containment.

To help counter this problem, the user may specify an *absolute* tolerance for interval specifications. Numerical errors of magnitude smaller than this tolerance will lead to the test value being *included* in the interval. For this to work well, it is recommended that all quantities in a user's model should be pre-scaled to be of "order 1" magnitude. (A relative tolerance may be implemented later, which would avoid this necessity.) The default absolute tolerance is `1e-14`.
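The effect of the absolute tolerance can be sketched in a few lines of plain Python. This is an illustration of the principle only, not PyDSTool's `Interval` implementation; the function name and signature are invented for the example.

```python
# Illustrative sketch: closed-interval membership with an absolute
# tolerance, so rounding errors smaller than abseps at an endpoint
# still count as "inside" the interval.

def contains(lo, hi, x, abseps=1e-14):
    """Return True if x lies in [lo, hi], widened by abseps at each end."""
    return lo - abseps <= x <= hi + abseps

# A subtraction whose exact result is 0 lands just below 0 in floating
# point, but is still accepted as a member of [0, 1]:
x = 0.3 - 0.2 - 0.1          # a tiny negative number, not exactly 0
print(contains(0.0, 1.0, x))  # True, despite x < 0
```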

# 3. Behaviour at interval endpoints

The `checklevel` parameter is used by Models, Generators, and Trajectory objects to determine behaviour when computations, comparisons, or trajectory lookups are evaluated within the absolute tolerance of an endpoint of the relevant interval. This case is known as an `uncertain` evaluation, in contrast to those evaluations outside of this tolerance -- either `contained` or `notcontained`.

- 0: no intelligent bounds checking; evaluate on the expectation that the bounds restriction is strictly met (which may raise an exception if it is not) -- equivalent to an absolute tolerance of 0.
- 1: **ignore** on `uncertain` (treat as `contained`); **error** on `notcontained`.
- 2: **warn** on `uncertain` (treat as `contained`); **error** on `notcontained` (*default*).

Here, the flagging of a warning will consist of a record being kept in the Model, Generator, or Trajectory in the `warnings` attribute (usually read by a `showWarnings` method call, and cleared with `clearWarnings`). An error will consist of a `PyDSTool_BoundsError` or `ValueError` exception being raised.
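The three-way classification and the checklevel policy described above can be sketched as follows. This is an illustrative model of the behaviour, not PyDSTool's actual classes: `classify`, `check`, and the `warnings` list are invented names for the example.

```python
# Illustrative sketch of endpoint classification and checklevel handling.

def classify(lo, hi, x, abseps=1e-14):
    """Return 'contained', 'uncertain', or 'notcontained' for x vs [lo, hi]."""
    if lo + abseps <= x <= hi - abseps:
        return 'contained'
    if lo - abseps <= x <= hi + abseps:
        return 'uncertain'          # within abseps of an endpoint
    return 'notcontained'

def check(lo, hi, x, checklevel=2, warnings=None, abseps=1e-14):
    """Apply the checklevel policy: 0 = no checking, 1 = ignore
    uncertain, 2 = record a warning on uncertain (the default)."""
    if checklevel == 0:
        return x                    # evaluate with no bounds checking
    result = classify(lo, hi, x, abseps)
    if result == 'notcontained':
        raise ValueError("%g lies outside [%g, %g]" % (x, lo, hi))
    if result == 'uncertain' and checklevel == 2 and warnings is not None:
        warnings.append((x, lo, hi))  # record, but treat as contained
    return x

w = []
check(0.0, 1.0, 1e-16, checklevel=2, warnings=w)  # uncertain: warns
print(len(w))  # one warning recorded
```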

# 4. The use of events for bounds safety

By default, models constructed and compiled using the ModelConstructor classes come with a set of dormant (initially inactive) events that correspond to domain-crossing detectors. If variable domains were specified, these events can be activated so that bounds checking is performed by the ODE integrators, etc. This is a useful tool in the implementation of "non-negativity preserving" algorithms.

The occurrence of a bounds error can be trapped and used to change the vector field in a hybrid system. Alternatively, these events can be used to efficiently stop computations that search initial condition or parameter spaces using expensive loops, and allow them to try another value. This feature is used in PyDSTool's parameter estimation class in the file `ParamEst.py`.
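The early-stopping idea can be sketched with a toy integration loop. Everything here (the `integrate` function, the step rule, the domain) is a hypothetical illustration of how a bounds event lets an expensive inner computation bail out early, rather than PyDSTool's event machinery.

```python
# Illustrative sketch: a domain-crossing check inside an integration
# loop stops the computation as soon as the state leaves its domain,
# so a parameter search can reject the candidate cheaply.

def integrate(param, t_end, domain=(0.0, 10.0)):
    """Toy forward-Euler 'integration' of x' = param * x, stopping
    early if x leaves the closed domain."""
    x, t, dt = 1.0, 0.0, 0.1
    while t < t_end:
        x += param * x * dt        # one Euler step of exponential growth
        t += dt
        if not (domain[0] <= x <= domain[1]):
            return t, 'bounds event'   # early exit: candidate rejected
    return t, 'completed'

# A search over growth rates: larger rates hit the bound sooner,
# so almost none of the full t_end = 100 interval is computed.
for k in (0.1, 1.0, 5.0):
    t, status = integrate(k, t_end=100.0)
    print(k, t, status)
```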

# 5. Strategy for efficiency

The activation of many events for bounds checking can incur a significant computational cost during simulation. The use of trajectory domain checks during lookup calls also incurs some cost. Therefore, a recommended strategy is to retain bounds checking when testing new models, or during parameter searches that may go "out of bounds", with a view to establishing a "safe working domain" in which bounds checking can be switched off for maximum efficiency.