lmlib.statespace.cost.RLSAlssmSet#
- class lmlib.statespace.cost.RLSAlssmSet(cost_model, kappa_diag=True)#
Bases:
lmlib.statespace.cost.RLSAlssmBase
Filter and data container for Recursive Least Squares ALSSM filters using sets (multichannel parallel processing)
This class is the same as
RLSAlssm
except that the signal y has an additional last dimension. The signals along this last dimension are processed simultaneously, as if a normal
RLSAlssm
were called multiple times.
- Parameters
cost_model (CostSegment, CompositeCost) – Cost Model
kappa_diag (bool) – If set to False, kappa is computed as a full matrix (outer product of each signal energy); otherwise only its diagonal is saved
**kwargs – Forwarded to
RLSAlssmBase
Methods
__init__(cost_model[, kappa_diag])
eval_errors(xs[, ks]) – Evaluation of the squared error for multiple state vectors xs.
filter(y[, v]) – Computes the intermediate parameters for subsequent squared error computations and minimizations.
filter_minimize_v(y[, v, H, h]) – Combination of RLSAlssmSet.filter() and RLSAlssmSet.minimize_v().
filter_minimize_x(y[, v, H, h]) – Combination of RLSAlssmSet.filter() and RLSAlssmSet.minimize_x().
minimize_v([H, h, broadcast_h, ...]) – Returns the vector v of the squared error minimization with linear constraints.
minimize_x([H, h, broadcast_h]) – Returns the state vector x of the squared error minimization with linear constraints.
set_backend(backend) – Sets the backend computation option.
set_kappa_diag(b)
Attributes
Filter Parameter \(W\)
Segment scalars, weighting the cost function per segment
Cost Model
Filter Parameter \(\kappa\)
Filter Parameter \(\nu\)
Filter Parameter \(\xi\)
- property cost_model#
Cost Model
- Type
CostSegment, CompositeCost
- eval_errors(xs, ks=None)#
Evaluation of the squared error for multiple state vectors xs.
The return value is the squared error
\[J(x) = x^{\mathsf{T}} W_k x - 2\, x^{\mathsf{T}} \xi_k + \kappa_k\]
for each state vector \(x\) from the list xs.
- Parameters
xs (array_like of shape=(K, N, S)) – List of state vectors \(x\)
ks (None, array_like of int of shape=(XS,)) – List of indices where to evaluate the error
- Returns
J – Squared Error for each state vector
- Return type
np.ndarray
of shape=(XS, S [,S])
K : number of samples
XS : number of state vectors in a list
N : ALSSM system order, corresponding to the number of state variables
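The quadratic form above can be evaluated directly with NumPy. A minimal sketch for a single time index (the values and the function name `squared_error` are illustrative, not part of lmlib's API):

```python
import numpy as np

def squared_error(x, W_k, xi_k, kappa_k):
    """Evaluate J(x) = x^T W_k x - 2 x^T xi_k + kappa_k for one state vector."""
    return x @ W_k @ x - 2 * x @ xi_k + kappa_k

# Illustrative filter parameters for a system of order N = 2
W_k = np.array([[2.0, 0.0], [0.0, 1.0]])   # filter parameter W at index k
xi_k = np.array([1.0, 0.5])                # filter parameter xi at index k
kappa_k = 0.75                             # filter parameter kappa at index k

x = np.array([0.5, 0.5])
J = squared_error(x, W_k, xi_k, kappa_k)   # → 0.0
```

`eval_errors` evaluates this same quadratic, but vectorized over a list of state vectors and over the S signal sets.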
- filter(y, v=None)#
Computes the intermediate parameters for subsequent squared error computations and minimizations.
Computes the intermediate parameters using efficient forward and backward recursions. The results are stored internally, ready to solve the least squares problem using e.g.
minimize_x()
or
minimize_v()
. The allocation method
allocate()
is called internally, so a manual pre-allocation is not necessary.
- Parameters
y (array_like) –
Input signal
For RLSAlssm or RLSAlssmSteadyState:
Single-channel signal is of shape=(K,)
Multi-channel signal is of shape=(K,L)
For RLSAlssmSet or RLSAlssmSetSteadyState:
Single-channel set signal is of shape=(K,S)
Multi-channel set signal is of shape=(K,L,S)
v (array_like, shape=(K,), optional) – Sample weights. The weight applies at time step k and is the same for all channels. By default the sample weights are initialized to 1.
K : number of samples
L : output order / number of signal channels
S : number of signal sets
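Since a set signal is just single-channel signals stacked along a new last axis, building one for this class reduces to a NumPy stack. A minimal sketch (the signal values here are illustrative):

```python
import numpy as np

K = 100  # number of samples
rng = np.random.default_rng(0)

# Two independent single-channel signals, each of shape (K,),
# as one would pass individually to a normal RLSAlssm
y0 = rng.standard_normal(K)
y1 = rng.standard_normal(K)

# Stack along a new last axis to form a set signal of shape (K, S) with S = 2,
# suitable as input y for RLSAlssmSet.filter()
y_set = np.stack([y0, y1], axis=-1)
assert y_set.shape == (K, 2)
```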
- filter_minimize_v(y, v=None, H=None, h=None, **kwargs)#
Combination of RLSAlssmSet.filter() and RLSAlssmSet.minimize_v().
This method has the same output as calling the methods
rls.filter(y)
v = rls.minimize_v()
See also
- filter_minimize_x(y, v=None, H=None, h=None, **kwargs)#
Combination of RLSAlssmSet.filter() and RLSAlssmSet.minimize_x().
This method has the same output as calling the methods
rls.filter(y)
xs = rls.minimize_x()
See also
- minimize_v(H=None, h=None, broadcast_h=True, return_constrains=False)#
Returns the vector v of the squared error minimization with linear constraints
Minimizes the squared error over the vector v with linear constraints with an (optional) offset [Wildhaber2018] [TABLE V].
Constraint:
Linear Scalar : \(x=Hv,\,v\in\mathbb{R}\)
known : \(H \in \mathbb{R}^{N \times 1}\)
\(\hat{v}_k = \frac{\xi_k^{\mathsf{T}}H}{H^{\mathsf{T}}W_k H}\)
Linear Combination With Offset : \(x=Hv+h,\,v\in\mathbb{R}^M\)
known : \(H \in \mathbb{R}^{N \times M},\,h\in\mathbb{R}^N\)
\(\hat{v}_k = \big(H^{\mathsf{T}}W_k H\big)^{-1} H^\mathsf{T}\big(\xi_k - W_k h\big)\)
- Parameters
H (array_like, shape=(N, M)) – Matrix for linear constraining \(H\)
h (array_like, shape=(N, [S]), optional) – Offset vector for linear constraining \(h\)
broadcast_h (bool) – If True, each channel uses the same h vector; otherwise h needs the same shape as x.
return_constrains (bool) – If set to True, the output is extended by H and h
- Returns
v (
ndarray
, shape=(K, M, S)) – Least squares estimate of the constraint vector \(v\) for each time index; the shape of one estimate v[k] is (M, S), where k is the time index of K samples.
K : number of samples
S : number of signal sets
N : ALSSM system order, corresponding to the number of state variables
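The closed-form expression for the offset case translates directly to NumPy. A minimal sketch for a single time index k, assuming illustrative filter parameters (the function name `minimize_v_k` is not part of lmlib's API):

```python
import numpy as np

def minimize_v_k(W_k, xi_k, H, h=None):
    """v_hat = (H^T W_k H)^{-1} H^T (xi_k - W_k h), from [Wildhaber2018], Table V."""
    if h is None:
        h = np.zeros(W_k.shape[0])
    # Solve the M x M system instead of forming the inverse explicitly
    return np.linalg.solve(H.T @ W_k @ H, H.T @ (xi_k - W_k @ h))

# Order N = 2, constraint dimension M = 1: x = H v + h
W_k = np.array([[2.0, 0.0], [0.0, 1.0]])
xi_k = np.array([1.0, 0.5])
H = np.array([[1.0], [1.0]])
h = np.zeros(2)

v_hat = minimize_v_k(W_k, xi_k, H, h)   # shape (1,)
```

For M = 1 and h = 0 this reduces to the scalar expression \(\hat{v}_k = \xi_k^{\mathsf{T}}H / (H^{\mathsf{T}}W_k H)\) given above.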
- minimize_x(H=None, h=None, broadcast_h=True)#
Returns the state vector x of the squared error minimization with linear constraints
Minimizes the squared error over the state vector x. If needed, it is possible to apply linear constraints with an (optional) offset [Wildhaber2018] [TABLE V].
Constraint:
Linear Scalar : \(x=Hv,\,v\in\mathbb{R}\)
known : \(H \in \mathbb{R}^{N \times 1}\)
Linear Combination With Offset : \(x=Hv+h,\,v\in\mathbb{R}^M\)
known : \(H \in \mathbb{R}^{N \times M},\,h\in\mathbb{R}^N\)
See also
minimize_v()
- Parameters
H (array_like, shape=(N, M), optional) – Matrix for linear constraining \(H\)
h (array_like, shape=(N, [S]), optional) – Offset vector for linear constraining \(h\)
broadcast_h (bool) – If True, each channel uses the same h vector; otherwise h needs the same shape as x.
- Returns
xs – Least squares state vector estimate for each time index. The shape of one state vector x[k] is (N, S), where k is the time index over K samples and N the ALSSM order.
- Return type
ndarray
of shape = (K, N, S)
K : number of samples
S : number of signal sets
N : ALSSM system order, corresponding to the number of state variables
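Without constraints, minimizing the quadratic \(J(x) = x^{\mathsf{T}}W_k x - 2x^{\mathsf{T}}\xi_k + \kappa_k\) over x gives the normal equations \(W_k x = \xi_k\). A minimal NumPy sketch for a single time index, with illustrative values (not lmlib's internal implementation):

```python
import numpy as np

# Illustrative filter parameters at one time index k (order N = 2)
W_k = np.array([[2.0, 0.0], [0.0, 1.0]])
xi_k = np.array([1.0, 0.5])

# Unconstrained minimizer: solve W_k x = xi_k
x_hat = np.linalg.solve(W_k, xi_k)

# With a linear constraint x = H v + h, the state is recovered from the
# minimizing v_hat (cf. minimize_v) as x = H v_hat + h
H = np.array([[1.0], [1.0]])
h = np.zeros(2)
v_hat = np.linalg.solve(H.T @ W_k @ H, H.T @ (xi_k - W_k @ h))
x_constrained = H @ v_hat + h
```

In this example the unconstrained minimizer already lies in the span of H, so both estimates coincide; with a constraint that excludes the unconstrained optimum, `x_constrained` is the best state vector satisfying x = Hv + h.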