Predictor Model Window

A structured predictor model is constructed that allows the consistent identification of a selected target module in the network.

A predictor model specifies the structure of a network equation

    w_{out} = G(q) w_{in} + T(q) r_{in} + H(q) e

and is therefore specified by the sets w_{out} and w_{in}, accompanied by the relevant signals r_{in} and the structural zeros / known terms in G, T and H.
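
As a rough illustration (hypothetical Python, not part of the tool), such a specification can be thought of as a small record holding these signal sets together with the structural information:

    from dataclasses import dataclass, field

    @dataclass
    class PredictorModelSpec:
        """Hypothetical record for a predictor model
        w_out = G(q) w_in + T(q) r_in + H(q) e."""
        w_out: set          # predicted node signals
        w_in: set           # node signals used as predictor inputs
        r_in: set           # external excitation signals used as inputs
        known_G: dict = field(default_factory=dict)  # structural zeros / known terms in G
        known_T: dict = field(default_factory=dict)  # structural zeros / known terms in T
        known_H: dict = field(default_factory=dict)  # structural zeros / known terms in H

    # Example: predict w3 from nodes w1, w2 and excitation r1.
    pm = PredictorModelSpec(w_out={"w3"}, w_in={"w1", "w2"}, r_in={"r1"})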

Predictor models can be constructed for three implemented identification methods (their predictor input/output structure is summarized in a sketch after this list):

  • the local direct method (Ref. 1), which is based on a predictor model with w-nodes as predictor inputs and w-nodes as predicted outputs, and appropriate handling of external excitation signals. This method can end up with a MIMO (multi-input, multi-output) predictor model.
  • the multi-step method (Ref. 3), which is based on a similar predictor model, but uses a nonparametric step to first estimate innovation signals, which are then used as measured inputs in a parametric estimation. This method reflects an alternative way of handling confounding variables and always ends up with a MISO (multi-input, single-output) predictor model.
  • the indirect method (Refs. 4, 5), which is based on a predictor model with r-nodes as inputs and w-nodes as predicted outputs. It requires post-processing of the identified predictor model in order to arrive at a target module estimate.
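
The sketch below (hypothetical Python, not the tool's API) summarizes how the three methods differ in their predictor input/output structure:

    from enum import Enum

    class Method(Enum):
        LOCAL_DIRECT = "local direct"
        MULTI_STEP = "multi-step"
        INDIRECT = "indirect"

    # Predictor inputs and output structure per method (per the list above).
    PREDICTOR_IO = {
        Method.LOCAL_DIRECT: ("w-nodes", "w-nodes (possibly MIMO)"),
        Method.MULTI_STEP: ("w-nodes + estimated innovations", "single w-node (MISO)"),
        Method.INDIRECT: ("r-nodes", "w-nodes; target module via post-processing"),
    }

    for method, (inputs, outputs) in PREDICTOR_IO.items():
        print(f"{method.value}: {inputs} -> {outputs}")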

The Predictor Model Window is divided into two parts:

  • In the left window panel, predictor models are constructed (synthesized) according to the chosen algorithm; a constructed predictor model can be Accepted and is then stored in the Stored Predictor Models panel (upper right).
  • In the right window panel, a predictor model selected from the Stored Predictor Models panel can be manually edited and analyzed in terms of its consistency conditions.

Target module

A predictor model is composed for identification of a single module in the network. Therefore, a target module needs to be selected by the user, either via the dropdown in the Target module panel or by clicking on the module in the network plot.

Data informativity conditions

The user can select which data-informativity conditions are applied in both synthesis and analysis: either conditions contributing to (consistency of) the full predictor model (Refs. 5, 3), or conditions for the single target module only (results not yet published).

Consistency conditions

Consistent identification of a single module in a network requires the satisfaction of three types of conditions:

  1. Structural conditions on the network topology (illustrated in the sketch after this list), encompassing:
    • Conditions for module invariance, covered by the parallel path and loop condition;
    • Conditions on the absence of confounding variables between particular sets of nodes;
  2. Data Informativity, requiring sufficient external excitation to be present in the network;
  3. Absence of Algebraic Loops in particular parts of the network.
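
The sketch below illustrates the structural conditions of item 1 by checking a simplified version of the parallel path and loop condition with networkx; the blocked set stands for nodes handled by the predictor model. This is a simplification for intuition only, not the toolbox's actual test.

    import networkx as nx

    def parallel_path_loop_ok(G, src, dst, blocked):
        """Simplified check for target module src -> dst: every parallel
        path from src to dst and every loop through dst must pass
        through a node in the blocked set."""
        H = G.copy()
        H.remove_edge(src, dst)  # exclude the target module itself
        for path in nx.all_simple_paths(H, src, dst):
            if not set(path[1:-1]) & blocked:
                return False  # an unblocked parallel path exists
        for cycle in nx.simple_cycles(H):
            if dst in cycle and not (set(cycle) - {dst}) & blocked:
                return False  # an unblocked loop through dst exists
        return True

    # Toy network: target module w1 -> w3, parallel path w1 -> w2 -> w3.
    net = nx.DiGraph([("w1", "w3"), ("w1", "w2"), ("w2", "w3")])
    print(parallel_path_loop_ok(net, "w1", "w3", blocked={"w2"}))  # True
    print(parallel_path_loop_ok(net, "w1", "w3", blocked=set()))   # False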

When synthesizing a predictor model, the selected conditions are guaranteed to be satisfied. For condition 2 this means that the predictor model has the structural capability to satisfy data-informativity, provided that a sufficient number of external excitation signals is present in the network; it does not imply that these external signals are indeed present. No external signals are added to the network in the synthesis procedure; this can be done in the analysis panel.

Synthesis Algorithm

For the direct method, there are four synthesis algorithms from which the user can choose for constructing a predictor model.

  1. Full Measurement case

The first two algorithms (Full Input and Minimum Input) are based on the Full Measurement situation, i.e. it is assumed that all nodes in the network are available from measurements, so there is no restriction on the selection of predictor inputs and outputs. In this situation the measured status of nodes, as stored in the network, is not taken into account; all nodes in the network can be selected.

    Full Input case: for every node variable that is selected as predicted output, all in-neighbours of that node are selected as predictor inputs.

    Minimum Input case: confounding variables that appear are not treated by blocking them with newly added predictor inputs, but by copying the problematic predictor input to the predicted output, where the confounding effect can be modelled through a multivariate noise model.

Both algorithms are described in (Ref. 1).
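
A minimal sketch of the Full Input rule, assuming the network topology is available as a directed graph (hypothetical code, not the tool's implementation):

    import networkx as nx

    def full_input_selection(G, w_out):
        """Full Input case: every in-neighbour of a predicted
        output node becomes a predictor input."""
        return {p for node in w_out for p in G.predecessors(node)}

    # Toy network with predicted output w3.
    net = nx.DiGraph([("w1", "w3"), ("w2", "w3"), ("w4", "w2")])
    print(full_input_selection(net, {"w3"}))  # {'w1', 'w2'}

The Minimum Input case would instead keep the input set small and copy a problematic input to the output set, as described above.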

  2. Partial Measurement case

In the Partial Measurement situation, only those nodes are selected that have the measured status in the stored network. The corresponding synthesis algorithms take this set of measured nodes as a starting point. Two particular algorithms can be chosen:

    Inputs first algorithm: this is the User Selection algorithm presented in (Ref. 1). It comes down to applying the Full Input case to the immersed network: predictor inputs are chosen first so as to maximally block confounding variables; only when this is not possible are predictor outputs added.

    Outputs first algorithm: this algorithm starts with choosing predictor outputs to handle confounding variables, and then adds appropriate predictor inputs. It is presented in (Ref. 2).

For the multi-step method there is one algorithm, which is a partial measurement algorithm in the sense that all measured nodes are taken into consideration in the first step of the algorithm; the predictor model that is constructed is the one for the final parametric step.
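
A minimal sketch of the two-step idea, under strong simplifying assumptions (static scalar module, no network dynamics, plain least squares); the actual method uses a high-order nonparametric model in the first step and handles dynamics and correlated disturbances:

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated data: a predicted output w_out driven by one
    # measured input w_in and an innovation e (gain 0.8 to recover).
    N = 500
    w_in = rng.standard_normal(N)
    e = 0.1 * rng.standard_normal(N)
    w_out = 0.8 * w_in + e

    # Step 1 (nonparametric in the real method): regress w_out on w_in;
    # the residual is an estimate of the innovation signal e.
    theta1, *_ = np.linalg.lstsq(w_in[:, None], w_out, rcond=None)
    e_hat = w_out - w_in[:, None] @ theta1

    # Step 2 (parametric): re-estimate with the innovation estimate
    # included as a measured predictor input (MISO predictor).
    Phi = np.column_stack([w_in, e_hat])
    theta2, *_ = np.linalg.lstsq(Phi, w_out, rcond=None)
    print("module estimate:", theta2[0])  # ~0.8

Here the second step trivially recovers the same gain; the value of the estimated innovations in the actual method is that they account for correlated disturbances (confounding variables).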

For the indirect method there is one algorithm, which is based on full measurements, i.e. all node signals are available for constructing the predictor model.

Synthesis Solution

The resulting predictor model is presented here in terms of the sets: inputs w_{in} or r_{in} and outputs w_{out}.

Some of the algorithms may lead to non-unique solutions, in which case the user needs to make additional choices for extra predictor inputs.

When the presented predictor model solution is accepted, the predictor model is completed with appropriate structural information on G, T and H, and information on r_{in}, and is then stored in the Stored Predictor Models table (right panel).

Stored Predictor Models

In this panel, a stored predictor model can be selected by clicking the corresponding checkbox in the table. The predictor models can be individually edited by selecting ... or by selecting the Input or Output fields. Alternatively, you can right-click a node/excitation, hover over the Predictor model option, and then click on Input/Output to toggle its inclusion in the input/output set of the selected predictor model. Note that adding an internal node to the input is disabled if there are excitations in the input; similarly, adding an excitation to the input is disabled if there are nodes in the input. Adding an excitation to the output is always disabled. New predictor models can also be added manually.
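
The editing rules above can be summarized in a small validation sketch (hypothetical function names, not the tool's API):

    def can_add_to_input(item_kind, w_in, r_in):
        """Mirror the panel's rules for the predictor input set;
        item_kind is 'node' (internal node) or 'excitation'."""
        if item_kind == "node":
            return not r_in  # nodes disabled while excitations are in the input
        if item_kind == "excitation":
            return not w_in  # excitations disabled while nodes are in the input
        raise ValueError(item_kind)

    def can_add_to_output(item_kind):
        """Excitations can never be added to the output."""
        return item_kind == "node"

    print(can_add_to_input("node", w_in={"w1"}, r_in=set()))        # True
    print(can_add_to_input("excitation", w_in={"w1"}, r_in=set()))  # False
    print(can_add_to_output("excitation"))                          # False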

Analysis Options

For a selected predictor model, the consistency conditions can be analyzed, either jointly or separately. For analysis, all three consistency conditions are implemented.

Add/remove excitation signals

Given the importance of the presence and location of external excitation signals r for the data-informativity conditions, this panel allows the user to add/delete external excitation signals as part of the predictor model workflow. Note that even if a predictor model is constructed to comply with the (structural) data-informativity conditions, excitation signals may need to be added at particular locations in the network to actually satisfy them.

Algorithm restrictions

In the current implementation of the direct method, all modules present in the network are considered to be unknown, parametrized and non-switching. It is also assumed that the immersed network, based on the selected set of nodes, has a full-rank disturbance process (square H).

For the multi-step method and the indirect method, known modules in the network (G) are properly handled, and reduced-rank disturbances are allowed.

References

The implemented algorithms result from the following publications:

  1. K.R. Ramaswamy and P.M.J. Van den Hof (2021). A local direct method for module identification in dynamic networks with correlated noise. IEEE Trans. Automatic Control, Vol. 66, no. 11, pp. 3237-3252, November 2021.
  2. S. Shi, X. Cheng, B. De Schutter and P.M.J. Van den Hof (2023). Signal selection for local module identification in linear dynamic networks: A graphical approach. Proc. 22nd IFAC World Congress, 9-14 July 2023, Yokohama, Japan, pp. 2718-2723.
  3. S.J.M. Fonken, K.R. Ramaswamy and P.M.J. Van den Hof (2023). Local identification in dynamic networks using a multi-step least squares method. Proc. 62nd IEEE Conf. Decision and Control, 13-15 December 2023, Marina Bay Sands, Singapore.
  4. M. Gevers, A.S. Bazanella and G.V. da Silva (2018). A practical method for the consistent identification of a module in a dynamical network. IFAC-PapersOnLine, 51(15), pp. 862-867.
  5. S. Shi, X. Cheng and P.M.J. Van den Hof (2022). Generic identifiability of subnetworks in a linear dynamic network: the full measurement case. Automatica, Vol. 137 (110093), March 2022.
  6. P.M.J. Van den Hof, K.R. Ramaswamy and S.J.M. Fonken (2023). Integrating data-informativity conditions in predictor models for single module identification in dynamic networks. IFAC-PapersOnLine, Vol. 56-2, pp. 2377-2382. Proc. IFAC World Congress, Yokohama, Japan.