# lmlib.statespace.cost.RLSAlssm#

class lmlib.statespace.cost.RLSAlssm(cost_model, **kwargs)#

Filter and data container for recursive least squares ALSSM filters.

RLSAlssm computes and stores intermediate values such as covariances, as required to efficiently solve recursive least squares problems between a model-based cost function (CompositeCost or CostSegment) and given observations. The intermediate variables are observation-dependent, so the memory consumption of RLSAlssm scales linearly with the length of the observation vector.

The main intermediate variables are the covariance $$W$$, the weighted mean $$\xi$$, the signal energy $$\kappa$$, and the weighted number of samples $$\nu$$; see Equation (4.6) in [Wildhaber2019].

Parameters

• cost_model (CostSegment or CompositeCost) – Model-based cost function

Examples

Methods

| Method | Description |
| --- | --- |
| `__init__(cost_model, **kwargs)` |  |
| `eval_errors(xs[, ks])` | Evaluation of the squared error for multiple state vectors `xs`. |
| `filter(y[, v])` | Computes the intermediate parameters for subsequent squared error computations and minimizations. |
| `filter_minimize_v(y[, v, H, h])` | Combination of `RLSAlssm.filter()` and `RLSAlssm.minimize_v()`. |
| `filter_minimize_x(y[, v, H, h])` | Combination of `RLSAlssm.filter()` and `RLSAlssm.minimize_x()`. |
| `minimize_v([H, h, return_constrains])` | Returns the vector `v` of the squared error minimization with linear constraints. |
| `minimize_x([H, h])` | Returns the state vector `x` of the squared error minimization with linear constraints. |
| `set_backend(backend)` | Sets the backend computation option. |

Attributes

| Attribute | Description |
| --- | --- |
| `W` | Filter parameter $$W$$ |
| `betas` | Segment scalars weighting the cost function per segment |
| `cost_model` | Cost model |
| `kappa` | Filter parameter $$\kappa$$ |
| `nu` | Filter parameter $$\nu$$ |
| `xi` | Filter parameter $$\xi$$ |
property W#

Filter Parameter $$W$$

Type

ndarray

property betas#

Segment scalars weighting the cost function per segment

Type

ndarray

property cost_model#

Cost Model

Type

CostSegment or CompositeCost

eval_errors(xs, ks=None)#

Evaluation of the squared error for multiple state vectors xs.

The return value is the squared error

$$J(x) = x^{\mathsf{T}} W_k x - 2\, x^{\mathsf{T}} \xi_k + \kappa_k$$

for each state vector $$x$$ from the list xs.

Parameters
• xs (array_like of shape=(XS, N)) – List of state vectors $$x$$

• ks (None or array_like of int of shape=(XS,)) – List of time indices at which to evaluate the error

Returns

J – Squared Error for each state vector

Return type

np.ndarray of shape=(XS,)

K : number of samples
XS : number of state vectors in a list
N : ALSSM system order, corresponding to the number of state variables
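The cost evaluation above can be sketched directly in NumPy. The filter parameters `W_k`, `xi_k`, and `kappa_k` below are hypothetical stand-ins for the values that `filter()` stores internally; this illustrates the formula, not lmlib's actual data layout:

```python
import numpy as np

# Hypothetical filter parameters at one time index k (N = 2)
W_k = np.array([[2.0, 0.5], [0.5, 1.0]])   # covariance W_k
xi_k = np.array([1.0, -0.5])               # weighted mean xi_k
kappa_k = 3.0                              # signal energy kappa_k

# List of XS = 2 candidate state vectors, shape (XS, N)
xs = np.array([[1.0, 0.0],
               [0.0, 1.0]])

# J(x) = x^T W_k x - 2 x^T xi_k + kappa_k, evaluated per state vector:
# einsum computes the quadratic form x^T W_k x row by row.
J = np.einsum('ij,jk,ik->i', xs, W_k, xs) - 2 * xs @ xi_k + kappa_k
print(J)  # squared error for each candidate x: [3. 5.]
```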

filter(y, v=None)#

Computes the intermediate parameters for subsequent squared error computations and minimizations.

Computes the intermediate parameters using efficient forward and backward recursions. The results are stored internally, ready to solve the least squares problem using, e.g., minimize_x() or minimize_v(). The parameter allocation allocate() is called internally, so manual pre-allocation is not necessary.

Parameters

• y (array_like of shape=(K, [L, [S]])) – Observations

• v (array_like, optional) – Observation weights

K : number of samples
L : output order / number of signal channels
S : number of signal sets

filter_minimize_v(y, v=None, H=None, h=None)#

Combination of RLSAlssm.filter() and RLSAlssm.minimize_v().

This method produces the same output as calling the two methods in sequence:

```python
rls.filter(y)
v = rls.minimize_v()
```

filter_minimize_x(y, v=None, H=None, h=None)#

Combination of RLSAlssm.filter() and RLSAlssm.minimize_x().

This method produces the same output as calling the two methods in sequence:

```python
rls.filter(y)
xs = rls.minimize_x()
```

property kappa#

Filter Parameter $$\kappa$$

Type

ndarray

minimize_v(H=None, h=None, return_constrains=False)#

Returns the vector v of the squared error minimization with linear constraints.

Minimizes the squared error over the vector v under linear constraints with an (optional) offset [Wildhaber2018] [TABLE V].

Constraint:

• Linear Scalar : $$x=Hv,\,v\in\mathbb{R}$$

known : $$H \in \mathbb{R}^{N \times 1}$$

$$\hat{v}_k = \frac{\xi_k^{\mathsf{T}}H}{H^{\mathsf{T}}W_k H}$$

• Linear Combination With Offset : $$x=Hv+h,\,v\in\mathbb{R}^M$$

known : $$H \in \mathbb{R}^{N \times M},\,h\in\mathbb{R}^N$$

$$\hat{v}_k = \big(H^{\mathsf{T}}W_k H\big)^{-1} H^\mathsf{T}\big(\xi_k - W_k h\big)$$
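A minimal NumPy sketch of the linear-combination-with-offset formula above, using hypothetical values for `W_k` and `xi_k` (lmlib computes these internally from the observations):

```python
import numpy as np

# Hypothetical filter parameters at one time index k (N = 2)
W_k = np.array([[2.0, 0.0], [0.0, 1.0]])
xi_k = np.array([2.0, 1.0])

# Constraint x = H v + h with M = 1
H = np.array([[1.0], [1.0]])
h = np.array([0.0, 0.0])

# v_hat_k = (H^T W_k H)^(-1) H^T (xi_k - W_k h),
# computed via solve() instead of an explicit inverse
v_hat = np.linalg.solve(H.T @ W_k @ H, H.T @ (xi_k - W_k @ h))
print(v_hat)  # [1.]
```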

Parameters
• H (array_like, shape=(N, M)) – Matrix for linear constraining $$H$$

• h (array_like, shape=(N, [S]), optional) – Offset vector for linear constraining $$h$$

• return_constrains (bool) – If set to True, the output is extended by H and h

Returns

• v (ndarray, shape=(K, M)) – Least squares estimate of the constraint vector v for each time index k.

K : number of samples
N : ALSSM system order, corresponding to the number of state variables

minimize_x(H=None, h=None)#

Returns the state vector x of the squared error minimization with linear constraints

Minimizes the squared error over the state vector x. If needed, linear constraints with an (optional) offset can be applied [Wildhaber2018] [TABLE V].

Constraint:

• Linear Scalar : $$x=Hv,\,v\in\mathbb{R}$$

known : $$H \in \mathbb{R}^{N \times 1}$$

• Linear Combination With Offset : $$x=Hv+h,\,v\in\mathbb{R}^M$$

known : $$H \in \mathbb{R}^{N \times M},\,h\in\mathbb{R}^N$$

See also minimize_v()

Parameters
• H (array_like, shape=(N, M), optional) – Matrix for linear constraining $$H$$

• h (array_like, shape=(N, [S]), optional) – Offset vector for linear constraining $$h$$

Returns

xs – Least squares state vector estimate for each time index k. The shape of one state vector x[k] is (N,).

Return type

ndarray of shape = (K, N)

K : number of samples
N : ALSSM system order, corresponding to the number of state variables
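Setting the gradient of $$J(x) = x^{\mathsf{T}}W_k x - 2x^{\mathsf{T}}\xi_k + \kappa_k$$ to zero yields $$W_k x = \xi_k$$, so the unconstrained minimizer is $$\hat{x}_k = W_k^{-1}\xi_k$$. A minimal NumPy sketch with hypothetical filter parameters (not lmlib's internal storage):

```python
import numpy as np

# Hypothetical filter parameters at one time index k (N = 2)
W_k = np.array([[2.0, 0.0], [0.0, 4.0]])
xi_k = np.array([2.0, 2.0])

# Unconstrained least squares estimate: solve W_k x = xi_k
x_hat = np.linalg.solve(W_k, xi_k)
print(x_hat)  # [1.  0.5]
```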

property nu#

Filter Parameter $$\nu$$

Type

ndarray

set_backend(backend)#

Sets the backend computation option.

Parameters

backend (str) – 'py' for the Python backend, 'jit' for the just-in-time backend

property xi#

Filter Parameter $$\xi$$

Type

ndarray