lmlib.statespace.cost.RLSAlssm#
- class lmlib.statespace.cost.RLSAlssm(cost_model, **kwargs)#
Bases:
lmlib.statespace.cost.RLSAlssmBase
Filter and data container for Recursive Least Squares ALSSM filters.
RLSAlssm computes and stores intermediate values, such as covariances, as required to efficiently solve recursive least squares problems between a model-based cost function (CompositeCost or CostSegment) and given observations. The intermediate variables are observation-dependent; the memory consumption of RLSAlssm therefore scales linearly with the length of the observation vector.
The main intermediate variables are the covariance \(W\), the weighted mean \(\xi\), the signal energy \(\kappa\), and the weighted number of samples \(\nu\); see Equation (4.6) in [Wildhaber2019].
- Parameters
cost_model (CostSegment, CompositeCost) – Cost Model
**kwargs – Forwarded to
RLSAlssmBase
Examples
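A minimal sketch in plain NumPy (not the lmlib API) of what the intermediate quantities represent: a naive, non-recursive accumulation of \(W_k\), \(\xi_k\), \(\kappa_k\) for a single forward cost segment of length J with exponential window weights \(\gamma^j\), followed by the unconstrained minimization \(\hat{x} = W^{-1}\xi_k\). The names `A`, `C` (ALSSM state transition matrix and output vector) and the function name are assumptions for illustration; lmlib computes the same quantities with efficient recursions.

```python
import numpy as np

# Naive O(K*J) accumulation of the cost parameters of
# J(x) = x^T W_k x - 2 x^T xi_k + kappa_k  (illustration only;
# lmlib uses forward/backward recursions instead).
def cost_parameters(y, A, C, J, gamma=1.0):
    N = A.shape[0]
    K = len(y)
    s = [C @ np.linalg.matrix_power(A, j) for j in range(J)]  # rows C A^j
    W = sum(gamma**j * np.outer(s[j], s[j]) for j in range(J))
    xi = np.zeros((K, N))
    kappa = np.zeros(K)
    for k in range(K - J + 1):
        for j in range(J):
            xi[k] += gamma**j * s[j] * y[k + j]
            kappa[k] += gamma**j * y[k + j] ** 2
    return W, xi, kappa

# Local fit of a degree-1 polynomial ALSSM to a ramp y[k] = 2k + 3:
A = np.array([[1., 1.], [0., 1.]])
C = np.array([1., 0.])
y = 2.0 * np.arange(20) + 3.0
W, xi, kappa = cost_parameters(y, A, C, J=5)
x_hat = np.linalg.solve(W, xi[0])   # unconstrained minimizer at k=0 -> [3., 2.]
```

Since the ramp is exactly representable by the model, the residual cost \(J(\hat{x})\) at the minimizer is zero.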
Methods
__init__(cost_model, **kwargs)
eval_errors(xs[, ks]) – Evaluation of the squared error for multiple state vectors xs.
filter(y[, v]) – Computes the intermediate parameters for subsequent squared error computations and minimizations.
filter_minimize_x(y[, v, H, h]) – Combination of RLSAlssm.filter() and RLSAlssm.minimize_x().
minimize_v([H, h, return_constrains]) – Returns the vector v of the squared error minimization with linear constraints.
minimize_x([H, h]) – Returns the state vector x of the squared error minimization with linear constraints.
set_backend(backend) – Sets the backend computation option.
Attributes
Filter Parameter \(W\)
Segment scalars weighting the cost function per segment
Cost Model
Filter Parameter \(\kappa\)
Filter Parameter \(\nu\)
Filter Parameter \(\xi\)
- property cost_model#
Cost Model
- Type
CostSegment, CompositeCost
- eval_errors(xs, ks=None)#
Evaluation of the squared error for multiple state vectors xs.
The return value is the squared error
\[J(x) = x^{\mathsf{T}} W_k x - 2\, x^{\mathsf{T}} \xi_k + \kappa_k\]
for each state vector \(x\) from the list xs.
- Parameters
xs (array_like of shape=(XS, N)) – List of state vectors \(x\)
ks (None, array_like of int of shape=(XS,)) – List of indices where to evaluate the error
- Returns
J – Squared Error for each state vector
- Return type
np.ndarray
of shape=(XS,)
K : number of samples
XS : number of state vectors in a list
N : ALSSM system order, corresponding to the number of state variables
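The evaluation above is a direct quadratic form. A hedged sketch in plain NumPy (not the lmlib API) for a single, already-filtered index k, with the function name and the example values of `W`, `xi`, `kappa` being hypothetical:

```python
import numpy as np

# J(x) = x^T W x - 2 x^T xi + kappa, evaluated for a list of candidate
# state vectors xs at one time index.
def eval_errors_at_k(xs, W, xi, kappa):
    xs = np.atleast_2d(xs)
    # einsum computes the quadratic term x^T W x row-wise
    return np.einsum('in,nm,im->i', xs, W, xs) - 2.0 * xs @ xi + kappa

# Hypothetical filter output at one index k:
W = np.array([[5., 10.], [10., 30.]])
xi = np.array([35., 90.])
kappa = 285.0
errs = eval_errors_at_k([[3., 2.], [0., 0.]], W, xi, kappa)  # -> [0., 285.]
```

The first candidate is the exact minimizer (zero residual); the second returns the full signal energy \(\kappa\).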
- filter(y, v=None)#
Computes the intermediate parameters for subsequent squared error computations and minimizations.
Computes the intermediate parameters using efficient forward and backward recursions. The results are stored internally, ready to solve the least squares problem using e.g., minimize_x() or minimize_v(). The parameter allocation allocate() is called internally, so a manual pre-allocation is not necessary.
- Parameters
y (array_like) – Input signal.
For RLSAlssm or RLSAlssmSteadyState:
Single-channel signal of shape=(K,)
Multi-channel signal of shape=(K,L)
For RLSAlssmSet or RLSAlssmSetSteadyState:
Single-channel set signals of shape=(K,S)
Multi-channel set signals of shape=(K,L,S)
v (array_like of shape=(K,), optional) – Sample weights. Weights the samples at each time step k; the same weight is applied to all channels. By default, all sample weights are initialized to 1.
K : number of samples
L : output order / number of signal channels
S : number of signal sets
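The sample weights v enter the accumulation multiplicatively, scaling each sample's contribution to \(W_k\), \(\xi_k\), \(\kappa_k\). A hedged sketch in plain NumPy (not the lmlib API), where `s` is an assumed list of precomputed rows \(C A^j\) (with any per-segment window weight folded in):

```python
import numpy as np

# Weighted accumulation at a single index k: a weight of 0 removes a
# sample from the cost entirely; v = 1 everywhere recovers the
# unweighted case.
def weighted_params(y, v, s, k):
    N = len(s[0])
    W = np.zeros((N, N))
    xi = np.zeros(N)
    kappa = 0.0
    for j, sj in enumerate(s):
        sj = np.asarray(sj, dtype=float)
        w = v[k + j]                  # weight of sample y[k+j]
        W += w * np.outer(sj, sj)
        xi += w * sj * y[k + j]
        kappa += w * y[k + j] ** 2
    return W, xi, kappa

# Zero weight fully removes the second sample from the cost:
W, xi, kappa = weighted_params([2., 3.], [1., 0.], [[1., 0.], [1., 1.]], 0)
```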
- filter_minimize_v(y, v=None, H=None, h=None)#
Combination of RLSAlssm.filter() and RLSAlssm.minimize_v().
This method has the same output as calling the methods:
rls.filter(y)
xs = rls.minimize_v()
- filter_minimize_x(y, v=None, H=None, h=None)#
Combination of RLSAlssm.filter() and RLSAlssm.minimize_x().
This method has the same output as calling the methods:
rls.filter(y)
xs = rls.minimize_x()
- minimize_v(H=None, h=None, return_constrains=False)#
Returns the vector v of the squared error minimization with linear constraints
Minimizes the squared error over the vector v under linear constraints with an (optional) offset [Wildhaber2018] [TABLE V].
Constraint:
Linear Scalar : \(x=Hv,\,v\in\mathbb{R}\)
known : \(H \in \mathbb{R}^{N \times 1}\)
\(\hat{v}_k = \frac{\xi_k^{\mathsf{T}}H}{H^{\mathsf{T}}W_k H}\)
Linear Combination With Offset : \(x=Hv+h,\,v\in\mathbb{R}^M\)
known : \(H \in \mathbb{R}^{N \times M},\,h\in\mathbb{R}^N\)
\(\hat{v}_k = \big(H^{\mathsf{T}}W_k H\big)^{-1} H^\mathsf{T}\big(\xi_k - W_k h\big)\)
- Parameters
H (array_like, shape=(N, M)) – Matrix for linear constraining \(H\)
h (array_like, shape=(N, [S]), optional) – Offset vector for linear constraining \(h\)
return_constrains (bool) – If set to True, the output is extended by H and h
- Returns
v – Least squares estimate of the constraint vector \(v\) for each time index; one estimate v[k] of shape=(M, [S]) per time index k.
- Return type
np.ndarray
of shape=(K, M, [S])
K : number of samples
M : number of constraint variables (columns of H)
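The closed-form estimate above can be sketched directly in plain NumPy (not the lmlib API); the function name and example values are hypothetical:

```python
import numpy as np

# v_hat_k = (H^T W_k H)^{-1} H^T (xi_k - W_k h), per [Wildhaber2018, Table V].
# Using solve() instead of an explicit inverse for numerical robustness.
def minimize_v_at_k(W, xi, H, h=None):
    H = np.atleast_2d(H)
    if h is None:
        h = np.zeros(W.shape[0])
    return np.linalg.solve(H.T @ W @ H, H.T @ (xi - W @ h))

# Hypothetical filter output at one index k, with the scalar constraint
# x = H v, H in R^{N x 1} (first state component only):
W = np.array([[5., 10.], [10., 30.]])
xi = np.array([35., 90.])
v = minimize_v_at_k(W, xi, H=np.array([[1.], [0.]]))  # -> [7.]
```

For the scalar case this reduces to the quotient \(\hat{v}_k = \xi_k^{\mathsf{T}}H / (H^{\mathsf{T}}W_k H)\) given above: \(35 / 5 = 7\).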
- minimize_x(H=None, h=None)#
Returns the state vector x of the squared error minimization with linear constraints
Minimizes the squared error over the state vector x. If needed, linear constraints with an (optional) offset can be applied [Wildhaber2018] [TABLE V].
Constraint:
Linear Scalar : \(x=Hv,\,v\in\mathbb{R}\)
known : \(H \in \mathbb{R}^{N \times 1}\)
Linear Combination With Offset : \(x=Hv+h,\,v\in\mathbb{R}^M\)
known : \(H \in \mathbb{R}^{N \times M},\,h\in\mathbb{R}^N\)
See also
minimize_v()
- Parameters
H (array_like, shape=(N, M), optional) – Matrix for linear constraining \(H\)
h (array_like, shape=(N, [S]), optional) – Offset vector for linear constraining \(h\)
- Returns
xs – Least squares state vector estimate for each time index. The shape of one state vector x[k] is (N,), where k is the time index over K samples and N is the ALSSM order.
- Return type
ndarray
of shape = (K, N)
K : number of samples
N : ALSSM system order, corresponding to the number of state variables
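A hedged sketch in plain NumPy (not the lmlib API) of both cases at a single index k: without constraints the minimizer of \(J(x)\) is \(\hat{x} = W_k^{-1}\xi_k\); with the constraint \(x = Hv + h\) it is \(\hat{x} = H\hat{v}_k + h\), with \(\hat{v}_k\) as in minimize_v(). Function name and example values are hypothetical:

```python
import numpy as np

def minimize_x_at_k(W, xi, H=None, h=None):
    if H is None:
        return np.linalg.solve(W, xi)    # unconstrained: x_hat = W^{-1} xi
    if h is None:
        h = np.zeros(W.shape[0])
    # constrained: project through the constraint matrix H (and offset h)
    v = np.linalg.solve(H.T @ W @ H, H.T @ (xi - W @ h))
    return H @ v + h

# Hypothetical filter output at one index k:
W = np.array([[5., 10.], [10., 30.]])
xi = np.array([35., 90.])
x_unc = minimize_x_at_k(W, xi)                            # -> [3., 2.]
x_con = minimize_x_at_k(W, xi, H=np.array([[1.], [0.]]))  # -> [7., 0.]
```

The constrained estimate lies in the column space of H (offset by h), so here its second component is forced to zero.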