
# Modeling Systems in State Space
We have no real business writing a page about state space techniques because so
many others have done it better. Our hope here is to provide some background
locally so that people who haven't heard about all this stuff before can follow
what we're doing.
**External links:**

* [wikipedia: State space (controls)](http://en.wikipedia.org/wiki/State_space_(controls))
* [wikibooks: State-Space Equations](http://en.wikibooks.org/wiki/Control_Systems/State-Space_Equations)
* [google: "state space representation"](http://www.google.com/search?hl=en&q=%22state+space+representation%22&btnG=Google+Search)

**Local links:**

* [[our mathematical notation page|MathSymbols]]
* [[yet another introduction to the Kalman filter|KalmanIntro]]
**Modeling Systems in State Space**
1. [Modeling Systems the Old Fashioned Way](#OldFashioned)
2. [State Space Representations](#StateSpaceReps)
3. [Discrete Time Formulations](#DiscreetForms)
4. [Other System Representations](#OtherReps)

<a name="OldFashioned" id="OldFashioned"></a>
## Modeling Systems the Old Fashioned Way
Sometimes when people talk about state space techniques, they refer to them as
"modern", implying that techniques developed earlier are old fashioned. That
implication is baseless. State space is just another notational device, with
all the usual advantages and disadvantages. Sometimes state space
representations are very convenient; when they are not, consider using
something [else](#OtherReps).
Many physical systems can be described using differential equations. For
example, consider an idealized system consisting of a point mass moving in one
dimension subject to an externally applied force. The governing equation is
<a name="EqNum1" id="EqNum1"></a>
[[!teximg code="(1)\quad\
F = m a"]]
Where (_F_) is the force, (_m_) the mass, and (_a_) the acceleration.
To completely describe this system, the position (_p_) and velocity (_v_) can
be related to the acceleration
<a name="EqNum2" id="EqNum2"></a>
[[!teximg code="(2)\quad\
\begin{array}{lcl}\
a &=& v'\\\
v &=& p'\
\end{array}"]]
For a one dimensional point mass, the complete state of the system can be
described by two parameters, position and velocity. The acceleration is
determined by the force, which we have defined to be externally determined.
With these ideas in mind the system description can be rewritten in a
suggestive form
<a name="EqNum3" id="EqNum3"></a>
[[!teximg code="(3)\quad\
\begin{array}{lcl}\
p' &=& v\\\
v' &=& a\
\end{array}"]]
In this form, everything on the left of an equal sign is the first derivative
of a \`state' variable. The right hand sides can be considered as a set of
functions whose arguments are the state variables and the external inputs.
Formally, we can write
<a name="EqNum4" id="EqNum4"></a>
[[!teximg code="(4)\quad\
\begin{array}{lcl}\
x_1' &=& f_1[x_1, x_2,\ \ldots\ , x_n, u_1, u_2,\ \ldots\ , u_m]\\
x_2' &=& f_2[x_1, x_2,\ \ldots\ , x_n, u_1, u_2,\ \ldots\ , u_m]\\
&\vdots\\
x_n' &=& f_n[x_1, x_2,\ \ldots\ , x_n, u_1, u_2,\ \ldots\ , u_m]\\
\end{array}"]]
Where the (_x_<sub>i</sub>) are the state variables, the (_f_<sub>i</sub>) are
arbitrary single valued functions, and the (_u_<sub>i</sub>) are external inputs
to the system.
Equation [(4)](#EqNum4) is an example of a \`state space representation'. By
definition the state space form has first order derivatives of all the state
variables on the left, and functions of the state variables and external inputs
on the right. This arrangement can produce a very regular structure, one that
is suitable for a matrix formulation.
In general, the functions (_f_) in [(4)](#EqNum4) may be time varying. Time
varying functions complicate the analysis somewhat, but don't alter the
fundamental underpinnings of state space analysis, so in the interest of
simplifying things, constant functions are assumed throughout.

<a name="StateSpaceReps" id="StateSpaceReps"></a>
## State Space Representations
If the functions (_f_<sub>i</sub>) in [(4)](#EqNum4) are linear functions, then
the whole system of equations can be represented as a set of sums
<a name="EqNum5" id="EqNum5"></a>
[[!teximg code="(5)\quad\
\begin{array}{lcl}\
x_1' &=& \sum_{i=1}^n {{c_1}_i\ x_i} + \sum_{j=1}^m {{g_1}_j\ u_j}\\
x_2' &=& \sum_{i=1}^n {{c_2}_i\ x_i} + \sum_{j=1}^m {{g_2}_j\ u_j}\\
&\vdots\\
x_n' &=& \sum_{i=1}^n {{c_n}_i\ x_i} + \sum_{j=1}^m {{g_n}_j\ u_j}\\
\end{array}"]]
The sums can be written compactly in matrix form
<a name="EqNum6" id="EqNum6"></a>
[[!teximg code="(6)\quad\
\bf{x}' = \bf{F} \bf{x} + \bf{G} \bf{u}"]]
Where (**F** = \{_c_<sub>ki</sub>\}), (**G** = \{_g_<sub>kj</sub>\}), and (k)
runs from 1 to (n).
A very reasonable question is, "How useful is this?" After all, not every
dynamic system can be described by differential equations. Even if differential
equations can be used, often the equations are not linear, or the derivatives
used are not of first order. How can [(6)](#EqNum6) be used in those cases?
Informally, equation [(6)](#EqNum6) can be stated in English, "The current
change in the system state (**x**') depends in part on the current system state
(**x**), and in part on the external influence (**u**)." Clearly this
description could apply to a large variety of physical systems.
Formally, any system which is continuous and linear can be represented in the
form given by equation [(6)](#EqNum6). Equations with nth-order derivatives can
be transformed into n coupled equations of 1st order. (See for example
[phase-variable canonical form](http://en.wikibooks.org/wiki/Control_Systems/State-Space_Equations#Obtaining_the_State-Space_Equations).
More or less this works by including the higher order derivatives as state
variables.)
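For example, a single third order equation can be rewritten as three coupled
first order equations by taking the lower derivatives themselves as state
variables, here (_x_<sub>1</sub> = _y_), (_x_<sub>2</sub> = _y_'), and
(_x_<sub>3</sub> = _y_''):
[[!teximg code="y''' = f[y, y', y'', u]\
\quad\Longrightarrow\quad\
\begin{array}{lcl}\
x_1' &=& x_2\\
x_2' &=& x_3\\
x_3' &=& f[x_1, x_2, x_3, u]\
\end{array}"]]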
In practice, equation [(6)](#EqNum6) can be used with nonlinear systems too.
Although nonlinear systems can't be represented exactly in a linear matrix
equation, they present no problem in the general state space formulation given
by [(4)](#EqNum4). However matrix techniques are sufficiently attractive that
nonlinear systems are often linearized to allow the matrix formulation to be
applied, approximately, to them as well <_link???_>.
The previous [example](#EqNum1) of a one dimensional accelerated mass can be
put into matrix form like this
<a name="EqNum7" id="EqNum7"></a>
[[!teximg code="(7)\quad\
\begin{pmatrix} p \cr v \end{pmatrix}' =\
\begin{pmatrix} 0 & 1 \cr 0 & 0 \end{pmatrix}\
\begin{pmatrix} p \cr v \end{pmatrix} +\
\begin{pmatrix} 0 \cr 1 \end{pmatrix}\
a"]]
Expanding out equation [(7)](#EqNum7) reproduces exactly the system given in
[(3)](#EqNum3).
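To make equation [(7)](#EqNum7) concrete, here is a minimal numerical sketch in Python. The helper names (`mat_vec`, `x_dot`) are our own throwaway code, not from any library:

```python
# Point mass state space form x' = F x + G u from equation (7),
# written out with plain Python lists (no libraries).

def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

F = [[0.0, 1.0],
     [0.0, 0.0]]       # system matrix
G = [0.0, 1.0]         # input column, for the scalar input u = a

def x_dot(x, a):
    """Right-hand side of x' = F x + G u."""
    Fx = mat_vec(F, x)
    return [fx + g * a for fx, g in zip(Fx, G)]

# With state [p, v] = [0, 3] and acceleration a = 2, the result is
# x' = [3, 2]: p' = v and v' = a, matching equation (3).
```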
In [(7)](#EqNum7) the state variables are position (_p_) and velocity (_v_).
It's worth noting that the choice of state variables is not unique. Clearly, any
other set of state variables which can be solved for the original state
variables will also work. As suggested by [(6)](#EqNum6), any invertible linear
combination of state variables will work. The situation is equivalent to having
multiple [basis](http://en.wikipedia.org/wiki/Basis_%28linear_algebra%29) sets
for subspaces in linear algebra.
In complex situations, picking the state variables can be tricky. The first
problem is finding a good system model. In a real-world system there are always
many possible factors that influence the system. In some cases it is worth the
extra burden to incorporate some of these factors into the system model. Doing
so increases the development time and computational load, so often it's better
to model some of the influences as process noise, or to just ignore them in the
model. Likewise, some system inputs may have subtle relations to the system
state. The interdependence could be modeled by state variables, but again, at
increasing cost.
A set of state variables is theoretically minimal if together they are
sufficient to describe every aspect of the system, but elimination of any
variable, singly or in combination, leaves at least some of the system
unobservable. Knowing this, it seems like picking good state variables means
picking some orthogonal combination of spanning state variables, particularly
ones easily related to available measurements. For linear
(or linearized) systems this idea is "mostly true". However, it's still possible
to imagine a linear system where some of the possible state variables are so
sensitive to small variations, that the accuracy of the system model will be
low, but these same variables may be expressible as functions of other possible
state variables which are relatively insensitive to small variations. In this
case choosing the latter as state variables should result in more accurate
system modeling.
Here are some other pathological situations where the choice of state variables
may be difficult.
* A system where a needed output is difficult to compute from potential state
  variables. It may be possible to find other state variables that can be used
  to compute the output more easily. Sometimes this will result in a set of
  state variables which is not minimal. If the computations resulting from a
  minimal set are sufficiently complex, the non-minimal set may result in a
  lower overall system modeling burden.
* When the only available measurements of a system involve combinations of
  variables considered to be representative of the system state. Again this
  situation can suggest a non-minimal set of state variables to match the
  sensor outputs. Another possibility is to do fairly complex precomputation
  on the measurements to produce a detangled set of measurements to feed into
  the state model. (Sometimes significant nonlinearities can be removed this
  way too.)
* If the system to be modeled is not well understood, its model can be derived
  observationally by formal or informal techniques. Usually the model which is
  derived is of very low order compared to the actual system. Sometimes it is
  discovered that seemingly unrelated state variables are actually tightly
  dependent. Dimensional analysis may help when analyzing this type of system.

<a name="DiscreetForms" id="DiscreetForms"></a>
## Discrete Time Formulations
For many purposes, writing the differential equations for a dynamical system is
not sufficient. For state space equations the value of the state variables
through time is usually desired.
Recall the linear form of the state space equations given in [(6)](#EqNum6)
<a name="EqNum8" id="EqNum8"></a>
[[!teximg code="(8)\quad\
\bf{x}' = \bf{F} \bf{x} + \bf{G} \bf{u}"]]
Ignoring the external inputs and the fact that this is a matrix equation,
equation [(8)](#EqNum8) reduces to a linear homogeneous first order
differential equation
<a name="EqNum9" id="EqNum9"></a>
[[!teximg code="(9)\quad\
x' - F x = 0"]]
The solution is well known (and can be verified by direct substitution)
<a name="EqNum10" id="EqNum10"></a>
[[!teximg code="(10)\quad\
x = e^{F t}\ x[0]"]]
Since the matrix equation [(8)](#EqNum8) is linear, it can be solved exactly the
same way. We define the _state transition matrix_ (**Φ**) to be the matrix
exponential:
<a name="EqNum11" id="EqNum11"></a>
[[!teximg code="(11)\quad\
\bf\Phi\rm[t] \equiv e^{\bf{F}\rm t}"]]
And the derivative of (**Φ**) is
<a name="EqNum12" id="EqNum12"></a>
[[!teximg code="(12)\quad\
\bf\Phi\rm'[t] = \bf{F}\rm e^{\bf{F}\rm t} = \bf{F \Phi}\rm[t]"]]
Now we can write a solution to [(8)](#EqNum8)
<a name="EqNum13" id="EqNum13"></a>
[[!teximg code="(13)\quad\
\bf{x}\rm[t] =\
\bf\Phi\rm[t] \bf{x}\rm[0] +\
\int_0^t \bf\Phi\rm[t-\tau] \bf{G}\rm[\tau] \bf{u}\rm[\tau]\, d\tau"]]
Where we have assumed that the system evolves beginning at time (t = 0).
The 1st term in [(13)](#EqNum13) is the homogeneous solution to [(8)](#EqNum8)
exactly analogous to [(10)](#EqNum10), the 2nd term is the particular solution.
To motivate the 2nd term, consider that (**F x**) and (**G u**) in
[(8)](#EqNum8) have exactly the same influence on (**x**'). To find the current
influence of a past input applied at time (τ), on the present time (t), take the
effect at time (τ), namely (**G**[τ] **u**[τ]) and propagate it forward to time
(t) by multiplying by (**Φ**[t-τ]). The cumulative influence of all inputs from
time zero to (t) is just the integral given by the 2nd term. If this is too
hand-wavy for you, set the particular solution equal to the 2nd term and take
the derivative
<a name="EqNum14" id="EqNum14"></a>
[[!teximg code="(14)\quad\
\begin{array}{lcl}\
(\bf{x}\rm_{particular}[t])' =\
(\int_0^t \bf\Phi\rm[t-\tau] \bf{G}\rm[\tau] \bf{u}\rm[\tau]\, d\tau)'\\\
= \int_0^t \bf\Phi\rm'[t-\tau] \bf{G}\rm[\tau] \bf{u}\rm[\tau]\, d\tau\ +\
\bf\Phi\rm[0] \bf{G}\rm[t] \bf{u}\rm[t]\\\
= \bf{F}\rm \int_0^t \bf\Phi\rm[t-\tau]\
\bf{G}\rm[\tau] \bf{u}\rm[\tau]\, d\tau\ +\
\bf{G}\rm[t] \bf{u}\rm[t]\\\
= \bf{F}\rm \cdot \bf{x}\rm_{particular}[t] +\
\bf{G}\rm[t] \bf{u}\rm[t]\
\end{array}"]]
Where the fact that (**Φ**[0]) is an identity matrix has been used. Since the
integral in [(14)](#EqNum14) involves time in the upper limit, the time
derivative must be taken according to Leibniz' rule. (See
[mathworld](http://mathworld.wolfram.com/LeibnizIntegralRule.html) or
[wikipedia](http://en.wikipedia.org/wiki/Leibniz_integral_rule)).
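To see equation [(13)](#EqNum13) in action, here is a small Python sketch (our own throwaway code, assuming a constant input) that propagates the point mass example. The homogeneous term uses the state transition matrix for this system, (**Φ**[t]) = [[1, t], [0, 1]], which the text derives later; the particular term is approximated with a midpoint Riemann sum:

```python
# Numerical check of equation (13) for the point mass under constant
# acceleration a. Phi[t] for this F is [[1, t], [0, 1]].

def phi(t):
    return [[1.0, t], [0.0, 1.0]]

def solution(x0, a, t, steps=100000):
    P = phi(t)
    # Homogeneous term: Phi[t] x[0]
    p = P[0][0] * x0[0] + P[0][1] * x0[1]
    v = P[1][0] * x0[0] + P[1][1] * x0[1]
    # Particular term: integral of Phi[t - tau] G u dtau, with G u = [0, a]
    d = t / steps
    for k in range(steps):
        tau = (k + 0.5) * d          # midpoint rule
        p += (t - tau) * a * d       # first row of Phi[t - tau] applied to [0, a]
        v += a * d                   # second row
    return [p, v]

p, v = solution([0.0, 0.0], 2.0, 3.0)
# should approach the familiar p = a t^2 / 2 = 9 and v = a t = 6
```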
An equation for the discrete time evolution of the system can be derived from
the continuous time solution [(13)](#EqNum13).
For the time sequence \{ T<sub>0</sub>, T<sub>1</sub>, ... , T<sub>i</sub> \},
the discrete solution is
<a name="EqNum15" id="EqNum15"></a>
[[!teximg code="(15)\quad\
\bf{x}\rm[T_i] =\
\bf\Phi\rm[T_i] \bf{x}\rm[0] +\
\int_0^{T_i} \bf\Phi\rm[T_i-\tau] \bf{G}\rm[\tau] \bf{u}\rm[\tau]\, d\tau\
"]]
This can be written in an incremental form
<a name="EqNum16" id="EqNum16"></a>
[[!teximg code="(16)\quad\
\bf{x}\rm_i =\
\bf\Phi\rm_i \bf{x}\rm_{i-1} +\
\int_{T_{i-1}}^{T_i} \bf\Phi\rm[T_i-\tau]\
\bf{G}\rm[\tau] \bf{u}\rm[\tau]\, d\tau"]]
Where (**Φ**<sub>i</sub>) is the state transition matrix from time
(T<sub>i-1</sub>) to (T<sub>i</sub>).
If in [(16)](#EqNum16), we assume (**u**) is constant over a time step (T), that
the time steps are all equal, and that (**Φ**) and (**G**) are constant, then we
can write the simplified form
<a name="EqNum17" id="EqNum17"></a>
[[!teximg code="(17)\quad\
\bf{x}\rm_{i} =\
\bf\Phi\rm_{i} \bf{x}\rm_{i-1} +\
\left( \int_0^T \bf\Phi\rm[T-\tau]\, d\tau \right) \bf G u\rm_{i}"]]
To make this a little more clear, refer once more to the [example
equation](#EqNum7) of a one dimensional accelerated mass, which is repeated here
<a name="EqNum18" id="EqNum18"></a>
[[!teximg code="(18)\quad\
\begin{pmatrix} p \cr v \end{pmatrix}' =\
\begin{pmatrix} 0 & 1 \cr 0 & 0 \end{pmatrix}\
\begin{pmatrix} p \cr v \end{pmatrix} +\
\begin{pmatrix} 0 \cr 1 \end{pmatrix} a"]]
This example satisfies all the assumptions of [(17)](#EqNum17) providing
acceleration (_a_) is constant over each time step. Since this is the usual
assumption for a sampled data system, let's assume it here. Identify the system
and input matrices as
<a name="EqNum19" id="EqNum19"></a>
[[!teximg code="(19)\quad\
\bf{F}\rm = \begin{pmatrix} 0 & 1 \cr 0 & 0 \end{pmatrix},\
\bf{G}\rm = \begin{pmatrix} 0 \cr 1 \end{pmatrix}"]]
To calculate (**Φ**) the matrix exponential referred to in [(11)](#EqNum11) must
be found. In analogy to the scalar case, the exponential can be found from the
series
<a name="EqNum20" id="EqNum20"></a>
[[!teximg code="(20)\quad\
\bf\Phi\rm[t] \equiv e^{\bf{F}\rm t} =\
\bf{I}\rm + \bf{F}\rm t + \frac{\bf{F}\rm^2 t^2}{2} +\
\frac{\bf{F}\rm^3 t^3}{6} +\ \ldots\ + \frac{\bf{F}\rm^k t^k}{k!} +\ \ldots\
"]]
In this case the exponential is easy to compute because (**F**<sup>2</sup>) and
all higher powers are the (2×2) zero matrix, therefore
<a name="EqNum21" id="EqNum21"></a>
[[!teximg code="(21)\quad\
\bf\Phi\rm = \begin{pmatrix} 1 & \rm T \cr 0 & 1 \end{pmatrix}"]]
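As a sanity check, the truncated series [(20)](#EqNum20) is easy to code up; for this (**F**) the series terminates after the linear term. A minimal Python sketch (plain lists, no libraries; the helper names are ours):

```python
# Truncated matrix exponential series from equation (20), using plain
# Python lists. For this F the series terminates, since F^2 = 0.
from math import factorial

def mat_mul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_exp(F, t, terms=10):
    n = len(F)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [row[:] for row in result]                              # F^0 = I
    for k in range(1, terms):
        power = mat_mul(power, F)                                   # F^k
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] * t ** k / factorial(k)
    return result

F = [[0.0, 1.0], [0.0, 0.0]]
T = 0.5
Phi = mat_exp(F, T)
# Phi should match equation (21): [[1, T], [0, 1]]
```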
Now using [(17)](#EqNum17) the discrete solution for the example can be written
<a name="EqNum22" id="EqNum22"></a>
[[!teximg code="(22)\quad\
\begin{array}{lcl}\
\begin{pmatrix} p_i \cr v_i \end{pmatrix} &=&\
\begin{pmatrix} 1 & \rm T \cr 0 & 1 \end{pmatrix}\
\begin{pmatrix} p_{i-1} \cr v_{i-1} \end{pmatrix} +\
\int_0^{\rm T} \begin{pmatrix} 1 & \rm T-\tau \cr 0 & 1 \end{pmatrix}\,\
d\tau \cdot\
\begin{pmatrix} 0 \cr 1 \end{pmatrix} a\\\
&=&\
\begin{pmatrix} 1 & \rm T \cr 0 & 1 \end{pmatrix}\
\begin{pmatrix} p_{i-1} \cr v_{i-1} \end{pmatrix} +\
\begin{pmatrix} \rm T & \rm T^2/2 \cr 0 & \rm T \end{pmatrix}\
\begin{pmatrix} 0 \cr 1 \end{pmatrix} a
\end{array}"]]
Or in scalar form
<a name="EqNum23" id="EqNum23"></a>
[[!teximg code="(23)\quad\
\begin{array}{lcl}\
p_i &=& p_{i-1} + \rm T \it v_{i-1} + a_i\ \rm T^2/2 \\\
v_i &=& v_{i-1} + a_i\ \rm T\
\end{array}"]]
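The scalar recursion [(23)](#EqNum23) is trivial to implement. A small Python sketch, assuming a constant acceleration held over every step:

```python
# One step of the scalar recursion in equation (23); acceleration is
# assumed constant over each step (the usual sampled data assumption).

def step(p, v, a, T):
    p_next = p + T * v + a * T * T / 2.0
    v_next = v + a * T
    return p_next, v_next

# Propagate from rest for 50 steps of T = 0.1 s at a = 2 m/s^2.
p, v = 0.0, 0.0
a, T = 2.0, 0.1
for _ in range(50):
    p, v = step(p, v, a, T)
# After t = 5 s: v = a t = 10 and p = a t^2 / 2 = 25 (up to float rounding),
# so the recursion reproduces the continuous solution exactly.
```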

<a name="OtherReps" id="OtherReps"></a>
## Other System Representations
As suggested [above](#OldFashioned), state space representations are more or
less directly related to the differential equations describing the dynamic
system.
In traditional control theory, as opposed to "modern" state space methods,
[transfer functions](http://en.wikipedia.org/wiki/Transfer_function) are often
used. Since either description refers to the same system, state space
representations can be transformed into transfer functions and vice versa,
subject to some limitations.
The transfer function can be found from a state space representation fairly
easily.
Recall the linear form of the state space representation [(6)](#EqNum6)
<a name="EqNum24" id="EqNum24"></a>
[[!teximg code="(24)\quad\
\bf{x}' = \bf{F} \bf{x} + \bf{G} \bf{u}"]]
If we let the system outputs be defined as (**y**[t]) and introduce the output
matrix (**H**) such that
<a name="EqNum25" id="EqNum25"></a>
[[!teximg code="(25)\quad\
\bf{y}\rm[t] = \bf{H}\rm[t] \bf{x}\rm[t]"]]
Take the Laplace transform of [(24)](#EqNum24)
<a name="EqNum26" id="EqNum26"></a>
[[!teximg code="(26)\quad\
\rm s\ \bf{X}\rm[s] - \bf{x}\rm[0] = \bf{F}\ \bf{X}\rm[s] + \bf{G}\ \bf{U}\rm[s]\
"]]
Solve for (**X**[s])
<a name="EqNum27" id="EqNum27"></a>
[[!teximg code="(27)\quad\
\bf{X}\rm[s] =\
(\bf{I}\rm\ s - \bf{F}\rm)^{-1} (\bf{G} \bf{U}\rm[s] + \bf{x}\rm[0])"]]
If it is assumed that the initial conditions are zero, multiply by (**H**) and
divide by (**U**) to get the transfer function from the state space
representation
<a name="EqNum28" id="EqNum28"></a>
[[!teximg code="(28)\quad\
\bf{Y}\rm[s]\bf{U}\rm^{-1}[s] =\
\bf{H}\rm(\bf{I}\rm\ s - \bf{F}\rm)^{-1} \bf{G}"]]
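For the point mass example, taking the output to be position, so that
(**H** = (1 0)), equation (28) gives the familiar double integrator transfer
function:
[[!teximg code="\bf{H}\rm(\bf{I}\rm\ s - \bf{F}\rm)^{-1} \bf{G}\rm =\
\begin{pmatrix} 1 & 0 \end{pmatrix}\
\begin{pmatrix} s & -1 \cr 0 & s \end{pmatrix}^{-1}\
\begin{pmatrix} 0 \cr 1 \end{pmatrix} =\
\begin{pmatrix} 1 & 0 \end{pmatrix}\
\begin{pmatrix} 1/s & 1/s^2 \cr 0 & 1/s \end{pmatrix}\
\begin{pmatrix} 0 \cr 1 \end{pmatrix} =\
\frac{1}{s^2}"]]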
Comparing equation [(27)](#EqNum27) to [(13)](#EqNum13) it can be seen that the
role of the state transition matrix (**Φ**) in [(27)](#EqNum27) is performed by
the first factor, that is
<a name="EqNum29" id="EqNum29"></a>
[[!teximg code="(29)\quad\
\bf\Phi\rm[s] = (\bf{I}\rm\ s - \bf{F}\rm)^{-1}"]]
So as an alternative method of computation, (**Φ**[t]) can be found from the
inverse Laplace transform
<a name="EqNum30" id="EqNum30"></a>
[[!teximg code="(30)\quad\
\bf\Phi\rm[t] = \mathcal{L}^{-1}[(\bf{I}\rm\ s - \bf{F}\rm)^{-1}]"]]
