Thu, 29 Mar 2001:

Here's my attempt to estimate the number of floating point operations per second required to implement a reasonable INS on LV2. For the uninterested, the answer looks to be about half a million flops.

Basic assumptions:

1. Input consists of 4 accelerometers, and 3 rate-gyros, augmented by periodic GPS and altimeter readings
2. Output consists of full 6 degree of freedom [DOF] position and velocity, therefore 12 numbers
3. Output update frequency 10 Hz
4. Accelerometer input sample frequency 2500 Hz
5. Rate-gyro input sampled at 625 Hz

Further tentative assumptions:

6. Calculations which occur less frequently than 10 Hz do not significantly affect the computational load (Earth-rate, gravity map, normalization)
7. The highest rate integrations are treated adequately as simple sums
8. Multiplication and addition take the same amount of time
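Assumption 7 says the highest-rate integrations can be treated adequately as plain rectangular sums at the sensor sample rate. A minimal Python sketch of what that means for the velocity increment (variable and function names are illustrative only, not from any actual flight code):

```python
# Hypothetical sketch of assumption 7: treat the highest-rate
# integration as a simple rectangular sum at the sample rate.

ACCEL_RATE_HZ = 2500          # accelerometer sample rate (assumption 4)
DT = 1.0 / ACCEL_RATE_HZ      # time step between samples

def velocity_increment(samples, dt=DT):
    """Accumulate a velocity increment Delta_v from raw
    accelerometer samples a_i by a plain rectangular sum."""
    delta_v = 0.0
    for a_i in samples:
        delta_v += a_i * dt   # one multiply + one add per sample
    return delta_v

# Sanity check: a constant 1 g held for one second should
# integrate to ~9.81 m/s.
print(velocity_increment([9.81] * ACCEL_RATE_HZ))
```

Each sample costs one multiply and one add, which is where the small per-sample clock counts in the table below come from.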

INS calculation:

<table border=1 cellpadding=0 cellspacing=0>
  <tr>
    <th bgcolor="#99CCCC"><strong> Calculation </strong></th>
    <th bgcolor="#99CCCC"><strong> Input </strong></th>
    <th bgcolor="#99CCCC"><strong> Output </strong></th>
    <th bgcolor="#99CCCC"><strong> Clocks / update </strong></th>
    <th bgcolor="#99CCCC"><strong> Rate [Hz] </strong></th>
  </tr>
  <tr>
    <td> velocity increment </td>
    <td> a_i </td>
    <td> Delta_v </td>
    <td align="center"> 4 </td>
    <td> 2500 </td>
  </tr>
  <tr>
    <td> angle increment </td>
    <td> omega_i </td>
    <td> Delta_th </td>
    <td align="center"> 3 </td>
    <td> 625 </td>
  </tr>
  <tr>
    <td> body transformation </td>
    <td> Delta_th,v </td>
    <td> th_L, v_L </td>
    <td align="center"> 78 </td>
    <td> 625 </td>
  </tr>
  <tr>
    <td> coning increment </td>
    <td> Delta_th,th_L </td>
    <td> beta_L </td>
    <td align="center"> 12 </td>
    <td> 625 </td>
  </tr>
  <tr>
    <td> sculling increment </td>
    <td> a, v </td>
    <td> v_scul </td>
    <td align="center"> 24 </td>
    <td> 625 </td>
  </tr>
  <tr>
    <td> summation to m </td>
    <td> L values </td>
    <td> m values </td>
    <td align="center"> 4 </td>
    <td> 625 </td>
  </tr>
  <tr>
    <td> rotation vector </td>
    <td> a_m, beta_m </td>
    <td> phi_m </td>
    <td align="center"> 1 </td>
    <td> 100 </td>
  </tr>
  <tr>
    <td> rotator update </td>
    <td> phi_m </td>
    <td> R[b,b-1] </td>
    <td align="center"> 44 </td>
    <td> 100 </td>
  </tr>
  <tr>
    <td> navigation transform </td>
    <td> R[b,b-1],R[b,n] </td>
    <td> R[b,n] </td>
    <td align="center"> 45 </td>
    <td> 100 </td>
  </tr>
  <tr>
    <td> velocity rotation comp. </td>
    <td> a_m, v_m </td>
    <td> v_rot </td>
    <td align="center"> 20 </td>
    <td> 100 </td>
  </tr>
  <tr>
    <td> body velocity inc. </td>
    <td> v_(m,scul,rot) </td>
    <td> v^body </td>
    <td align="center"> 3 </td>
    <td> 10 </td>
  </tr>
  <tr>
    <td> nav velocity </td>
    <td> R[b,n],v^body, v_gee </td>
    <td> v^nav </td>
    <td align="center"> 47 </td>
    <td> 10 </td>
  </tr>
  <tr>
    <td> gravity Coriolis inc. </td>
    <td> x^nav, v^nav </td>
    <td> v_gee </td>
    <td align="center"> 50 </td>
    <td> 10 </td>
  </tr>
  <tr>
    <td> summation to x </td>
    <td> v^nav </td>
    <td> x^nav </td>
    <td align="center"> 9 </td>
    <td> 10 </td>
  </tr>
</table>

Total INS flops == 97715 [ floating point operations / second ]
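As a mechanical cross-check, the clocks × rate products can be summed directly (rows transcribed from the table above; Python used purely for illustration):

```python
# Cross-check of the INS flop budget: sum of (clocks * rate)
# over the rows of the table above.

rows = [
    # (calculation,             clocks, rate_hz)
    ("velocity increment",         4, 2500),
    ("angle increment",            3,  625),
    ("body transformation",       78,  625),
    ("coning increment",          12,  625),
    ("sculling increment",        24,  625),
    ("summation to m",             4,  625),
    ("rotation vector",            1,  100),
    ("rotator update",            44,  100),
    ("navigation transform",      45,  100),
    ("velocity rotation comp.",   20,  100),
    ("body velocity inc.",         3,   10),
    ("nav velocity",              47,   10),
    ("gravity Coriolis inc.",     50,   10),
    ("summation to x",             9,   10),
]

total = sum(clocks * rate for _, clocks, rate in rows)
print(total)  # prints 97715 -- flops for the INS alone
```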

Kalman filter calculation:

In our proposed algorithm the measurements are GPS and altimeter measurements, while the states are INS states (The GPS corrects for INS drift). The number of measurements is therefore 6+1, for 3 GPS position, 3 GPS velocity, and 1 altitude. Actually the altitude may be folded into the GPS prior to filtering, but that will take some computation too, and this calculation is approximate. The number of modeled states is somewhat selectable, but includes at least the 4 accelerometer biases, 3 gyro biases, 3 IMU pointing errors, and one gravity bias. Easily a dozen more states could be thrown in, but it's probably enough to assume the terms stated, though perhaps we might add 4 accelerometer scale factors and 3 gyro scale factors. The total number of states would then be 18.
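The dimension bookkeeping above can be tallied directly (a trivial sketch; the labels are just the terms named in the paragraph):

```python
# Tally of the proposed Kalman filter dimensions.

measurements = {
    "GPS position": 3,
    "GPS velocity": 3,
    "altimeter":    1,
}

states = {
    "accelerometer biases":        4,
    "gyro biases":                 3,
    "IMU pointing errors":         3,
    "gravity bias":                1,
    "accelerometer scale factors": 4,
    "gyro scale factors":          3,
}

print(sum(measurements.values()))  # 7 measurements (6 + 1)
print(sum(states.values()))        # 18 modeled states
```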

I have done the operation count given these assumptions, and arrived at about 50000. The details are hard to write out in text format, but I would suggest that the calculation is only approximate, because the number of states is somewhat arbitrary: any implementation can always model more states in the quest for greater accuracy. In fact there are good reasons we might want to add states, but the complexity increases, and a cost vs. benefit consideration enters. As a very poor rule of thumb the operation count scales like the square of the number of states (I'm sure better estimates are in the literature). So adding 8 more states would more than double the operation count.
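The square-of-the-states rule of thumb can be made concrete (a rough sketch only; the quadratic exponent is the stated rule of thumb, not a derived figure):

```python
# Rough scaling estimate for the Kalman filter load, using the
# stated (admittedly poor) rule of thumb that the operation
# count grows like the square of the number of modeled states.

BASE_STATES = 18        # biases + pointing errors + scale factors
BASE_FLOPS = 50_000     # approximate count from the hand tally

def kalman_flops(n_states):
    """Scale the baseline operation count by (n / 18)^2."""
    return BASE_FLOPS * (n_states / BASE_STATES) ** 2

# Adding 8 more states (26 total) more than doubles the load:
print(kalman_flops(26) / BASE_FLOPS)  # about 2.09
```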

The prevailing situation appears to be that we need at least the 50k flops for Kalman operation, and can probably use more. We should certainly not choose a processor that is marginal at the calculated level (150k flops), because the calculations are uncertain, and there will be considerable program overhead involving data manipulation in memory and stuffing the FPU. If the FPU is as fast as we hope, the overhead will probably be about the same as, or more than, the FPU time.

However, even if my calculation is off by a factor of 2, and the overhead is twice the FPU time, and the Kalman flops double, the load is only:

200k \* 3 \* 2 == 1.2M operations / second

So any of the processors we have been considering should be adequate.

It's worth recalling that on LV1b we were consuming a very large part of the CPU clocks bit-banging things at the 2500 Hz rate. If we repeated this performance on LV2 the operation count could increase considerably, but it would still be hard to see it consuming 40 million instructions per second, which is the speed of the slowest processor being considered.

I feel that even if there are major mistakes in these operation counts, the computational power available is sufficient to make a useful system. We might have to scale back in some spots, but we could still capture the essential characteristics of our desired system.

What I don't want to hear, then, is that there is some stupid gotcha: the power consumption triples when the FPU runs, or the FPU only runs at 10 MHz, or it takes 10 cycles to load the registers for each single-cycle multiply, or the multiply is single cycle but a floating point add takes 5 clocks, etc.