# lmlib.statespace.cost.RLSAlssmSet#

class lmlib.statespace.cost.RLSAlssmSet(cost_model, kappa_diag=True)#

Filter and data container for Recursive Least Squares ALSSM filters using sets (multichannel parallel processing)

This class is the same as RLSAlssm except that the signal y has an additional last dimension. The signals along this dimension are processed simultaneously, yielding the same result as calling a normal RLSAlssm once per channel.

Parameters

Methods

• __init__(cost_model[, kappa_diag])

• eval_errors(xs[, ks]) – Evaluation of the squared error for multiple state vectors xs.

• filter(y[, v]) – Computes the intermediate parameters for subsequent squared error computations and minimizations.

• filter_minimize_v(y[, v, H, h]) – Combination of RLSAlssmSet.filter() and RLSAlssmSet.minimize_v().

• filter_minimize_x(y[, v, H, h]) – Combination of RLSAlssmSet.filter() and RLSAlssmSet.minimize_x().

• minimize_v([H, h, broadcast_h, ...]) – Returns the vector v of the squared error minimization with linear constraints.

• minimize_x([H, h, broadcast_h]) – Returns the state vector x of the squared error minimization with linear constraints.

• set_backend(backend) – Sets the backend computation option.

• set_kappa_diag(b)

Attributes

• W – Filter Parameter $$W$$

• betas – Segment scalars weighting the cost function per segment

• cost_model – Cost Model

• kappa – Filter Parameter $$\kappa$$

• nu – Filter Parameter $$\nu$$

• xi – Filter Parameter $$\xi$$
property W#

Filter Parameter $$W$$

Type

ndarray

property betas#

Segment scalars weighting the cost function per segment

Type

ndarray

property cost_model#

Cost Model

Type

eval_errors(xs, ks=None)#

Evaluation of the squared error for multiple state vectors xs.

The return value is the squared error

$$J(x) = x^{\mathsf{T}}W_k x - 2x^{\mathsf{T}}\xi_k + \kappa_k$$

for each state vector $$x$$ from the list xs.

Parameters
• xs (array_like of shape=(K, N, S)) – List of state vectors $$x$$

• ks (None, array_like of int of shape=(XS,)) – List of indices where to evaluate the error

Returns

J – Squared Error for each state vector

Return type

np.ndarray of shape=(XS, S [,S])

K : number of samples
XS : number of state vectors in a list
N : ALSSM system order, corresponding to the number of state variables
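The error evaluation above can be sketched in plain NumPy. The arrays `W_k`, `xi_k`, and `kappa_k` below are random stand-ins for the filter parameters (in lmlib they come from a previous `filter()` call), used here only to illustrate the formula:

```python
import numpy as np

N = 3  # ALSSM system order (number of state variables)

# Stand-in filter parameters for a single time index k.
rng = np.random.default_rng(0)
A = rng.standard_normal((N, N))
W_k = A @ A.T + N * np.eye(N)     # symmetric positive definite
xi_k = rng.standard_normal(N)
kappa_k = 2.0

def squared_error(x, W, xi, kappa):
    """J(x) = x^T W x - 2 x^T xi + kappa"""
    return x @ W @ x - 2.0 * (x @ xi) + kappa

# Evaluate the error for a list of candidate state vectors,
# as eval_errors() does for each entry of xs.
xs = [np.zeros(N), np.ones(N), xi_k]
J = np.array([squared_error(x, W_k, xi_k, kappa_k) for x in xs])
```

Note that for x = 0 the error reduces to the constant term `kappa_k`, which is a quick sanity check on the quadratic form.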

filter(y, v=None)#

Computes the intermediate parameters for subsequent squared error computations and minimizations.

Computes the intermediate parameters using efficient forward and backward recursions. The results are stored internally, ready to solve the least squares problem using, e.g., minimize_x() or minimize_v(). The parameter allocation allocate() is called internally, so a manual pre-allocation is not necessary.

Parameters

K : number of samples
L : output order / number of signal channels
S : number of signal sets

filter_minimize_v(y, v=None, H=None, h=None, **kwargs)#

Combination of RLSAlssmSet.filter() and RLSAlssmSet.minimize_v().

This method has the same output as calling the methods

```python
rls.filter(y)
vs = rls.minimize_v()
```

filter_minimize_x(y, v=None, H=None, h=None, **kwargs)#

Combination of RLSAlssmSet.filter() and RLSAlssmSet.minimize_x().

This method has the same output as calling the methods

```python
rls.filter(y)
xs = rls.minimize_x()
```

property kappa#

Filter Parameter $$\kappa$$

Type

ndarray

minimize_v([H, h, broadcast_h, ...])#

Returns the vector v of the squared error minimization with linear constraints

Minimizes the squared error over the vector v under linear constraints with an (optional) offset [Wildhaber2018] [TABLE V].

Constraint:

• Linear Scalar : $$x=Hv,\,v\in\mathbb{R}$$

known : $$H \in \mathbb{R}^{N \times 1}$$

$$\hat{v}_k = \frac{\xi_k^{\mathsf{T}}H}{H^{\mathsf{T}}W_k H}$$

• Linear Combination With Offset : $$x=Hv+h,\,v\in\mathbb{R}^M$$

known : $$H \in \mathbb{R}^{N \times M},\,h\in\mathbb{R}^N$$

$$\hat{v}_k = \big(H^{\mathsf{T}}W_k H\big)^{-1} H^\mathsf{T}\big(\xi_k - W_k h\big)$$
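The closed-form solutions above translate directly to NumPy. The arrays below are random stand-ins for the filter parameters $$W_k$$ and $$\xi_k$$ (not values from an actual lmlib filter run); the scalar case is recovered for M = 1 and h = 0:

```python
import numpy as np

N, M = 4, 2  # state dimension and constraint dimension

rng = np.random.default_rng(1)
A = rng.standard_normal((N, N))
W_k = A @ A.T + N * np.eye(N)    # stand-in filter parameter W_k (SPD)
xi_k = rng.standard_normal(N)    # stand-in filter parameter xi_k
H = rng.standard_normal((N, M))  # known constraint matrix
h = rng.standard_normal(N)       # known offset vector

# v_hat = (H^T W_k H)^{-1} H^T (xi_k - W_k h)
v_hat = np.linalg.solve(H.T @ W_k @ H, H.T @ (xi_k - W_k @ h))

# The constrained state estimate is x = H v_hat + h; it minimizes
# J(x) = x^T W_k x - 2 x^T xi_k + kappa_k over the affine set {H v + h}.
x_con = H @ v_hat + h

def J(x):
    return x @ W_k @ x - 2.0 * (x @ xi_k)

# x_con beats (or ties) any other point of the constraint set:
for v in rng.standard_normal((5, M)):
    assert J(x_con) <= J(H @ v + h) + 1e-9
```

The final assertions check the defining property of the minimizer; equivalently, the gradient condition $$H^{\mathsf{T}}(\xi_k - W_k \hat{x}) = 0$$ holds at the solution.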

Parameters
• H (array_like, shape=(N, M)) – Matrix for linear constraining $$H$$

• h (array_like, shape=(N, [S]), optional) – Offset vector for linear constraining $$h$$

• broadcast_h (bool) – If True, each channel uses the same offset vector h; otherwise h must have the same shape as x.

• return_constrains (bool) – If set to True, the output is extended by H and h.

Returns

• v (ndarray, shape = (K, M, S)) – Least squares estimate of the constraint vector v for each time index; one estimate v[k] has shape (M, S), where k is the time index of K samples.

K : number of samples
S : number of signal sets
N : ALSSM system order, corresponding to the number of state variables

minimize_x([H, h, broadcast_h])#

Returns the state vector x of the squared error minimization with linear constraints

Minimizes the squared error over the state vector x. If needed, linear constraints with an (optional) offset can be applied [Wildhaber2018] [TABLE V].

Constraint:

• Linear Scalar : $$x=Hv,\,v\in\mathbb{R}$$

known : $$H \in \mathbb{R}^{N \times 1}$$

• Linear Combination With Offset : $$x=Hv+h,\,v\in\mathbb{R}^M$$

known : $$H \in \mathbb{R}^{N \times M},\,h\in\mathbb{R}^N$$

See also minimize_v()

Parameters
• H (array_like, shape=(N, M), optional) – Matrix for linear constraining $$H$$

• h (array_like, shape=(N, [S]), optional) – Offset vector for linear constraining $$h$$

• broadcast_h (bool) – If True, each channel uses the same offset vector h; otherwise h must have the same shape as x.

Returns

xs – Least squares state vector estimate for each time index. The shape of one state vector x[k] is (N, S), where k is the time index of K samples and N the ALSSM order.

Return type

ndarray of shape = (K, N, S)

K : number of samples
S : number of signal sets
N : ALSSM system order, corresponding to the number of state variables
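In the unconstrained case, setting the gradient of $$J(x) = x^{\mathsf{T}}W_k x - 2x^{\mathsf{T}}\xi_k + \kappa_k$$ to zero gives $$\hat{x}_k = W_k^{-1}\xi_k$$. A NumPy sketch with random stand-in filter parameters, solving all S signal sets at once, as RLSAlssmSet does, and checking that this matches per-set solves (i.e., S separate RLSAlssm-style runs):

```python
import numpy as np

K, N, S = 5, 3, 2  # samples, system order, signal sets

rng = np.random.default_rng(2)
# Stand-in filter parameters per time index k (in lmlib from filter()):
A = rng.standard_normal((K, N, N))
W = A @ np.transpose(A, (0, 2, 1)) + N * np.eye(N)  # (K, N, N), SPD
xi = rng.standard_normal((K, N, S))                 # (K, N, S)

# Unconstrained minimizer per time index and per set:
# xs[k, :, s] = W[k]^{-1} xi[k, :, s]   (batched solve)
xs = np.linalg.solve(W, xi)             # shape (K, N, S)

# Same result as running each signal set separately:
xs_sep = np.stack(
    [np.stack([np.linalg.solve(W[k], xi[k, :, s]) for k in range(K)])
     for s in range(S)], axis=-1)
assert np.allclose(xs, xs_sep)
```

The batched `np.linalg.solve` over the stacked (K, N, N) matrices is what makes the set variant efficient compared to looping over channels.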

property nu#

Filter Parameter $$\nu$$

Type

ndarray

set_backend(backend)#

Sets the backend computation option.

Parameters

backend (str) – ‘py’ for the pure Python backend, ‘jit’ for the just-in-time compiled backend

property xi#

Filter Parameter $$\xi$$

Type

ndarray