Just another blog



Short Introduction

Generally, an INS (inertial navigation system) is a navigation tool which uses a computer, motion sensors (accelerometers), rotation sensors (gyroscopes) and other auxiliary sensors to provide an object’s orientation, position and velocity (see wiki).

Here I am referring only to a low-cost strapdown GPS-aided INS. The INS takes measurements from a MEMS 3-axis accelerometer, a 3-axis gyrometer, a 3-axis magnetometer and a GPS module.


The object frame is represented by the RPY coordinate system (Roll-Pitch-Yaw axes). The RPY convention I use is the aircraft principal axes; the center of this frame is at the object’s COG. The second frame (the ground reference frame), against which object orientation is measured, is a semi-global ENU frame (East-North-Up axes). Object velocity is always expressed with respect to this frame. The ground reference frame always follows the object’s position (in the INS the center of the ENU frame lies at the object’s COG). Object position is referenced in the global frame – Earth. Here two systems are used: either ECEF (Earth-centered, Earth-fixed) or LLA (Latitude-Longitude-Altitude). Internally, all position operations in the INS are done in ECEF coordinates; LLA is calculated from ECEF. ECEF has the significant advantage of being a Cartesian coordinate system (as are both RPY and ENU). This fact simplifies the INS algorithm, and conversion to any LLA standard (depending on the Earth model used) is possible. Below is a simple video demonstration of all three frames.
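To make the frame relations concrete, here is a minimal sketch of converting LLA to ECEF under the WGS84 ellipsoid (the function name and constants are my own illustration, not taken from the INS code):

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0            # semi-major axis [m]
F = 1.0 / 298.257223563  # flattening
E2 = F * (2.0 - F)       # first eccentricity squared

def lla_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert geodetic latitude/longitude/altitude to ECEF X, Y, Z [m]."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # prime vertical radius of curvature at this latitude
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z
```

At the equator and prime meridian this returns a point on the X axis at the semi-major-axis distance, which is a quick sanity check for the constants.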

Click on the picture to play the video.

Demo INS

If your browser does not have ogv video support, download mp4 (IE) or webm.

The green axes are ECEF, the black ones ENU and the red ones RPY. The blue sphere represents Earth, striped with longitude and latitude lines. The ENU frame is not centered on the object’s COG so that the object’s smaller position changes stand out.


The INS I have implemented is derived from (J. Bijker, W. Steyn, Kalman filter configurations for a low-cost loosely integrated inertial navigation system on an airship). Of the algorithms I inspected, it was superior, at least for my purposes – after some changes, of course :).


First and foremost, the algorithm decouples orientation, velocity and position calculation from the error correction (when one modification is used). That means object orientation and motion are calculated separately from the error correction. Because the correction uses a Kalman filter, it is relatively computationally intensive, but the orientation and motion calculation is very cheap. You can therefore calculate orientation and motion at a much higher frequency than the correction (using the EKF); you just have to keep the Nyquist–Shannon sampling theorem in mind. The spared processor time is not important for powerful embedded systems like the Beaglebone, but on something much smaller and less powerful, like a Coldfire-based INS board (running at 64 MHz and without an FPU), the number of floating-point operations is critical. In fact, with severe optimizations it is possible to run a 12-state tightly coupled INS on the Coldfire while the same chip gathers data from all sensors and controls motors – and still have some power to spare.
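The rate decoupling can be sketched like this – `propagate` and `ekf_correct` are hypothetical placeholders, and the 400 Hz / 10 Hz rates are assumed values for illustration, not the ones used on the board:

```python
# Cheap strapdown propagation runs on every sensor sample; the expensive
# EKF correction runs only on every CORR_DIV-th step.
IMU_HZ = 400               # strapdown integration rate (assumed)
CORR_HZ = 10               # EKF correction rate (assumed)
CORR_DIV = IMU_HZ // CORR_HZ

def run(samples, propagate, ekf_correct, state):
    corrections = 0
    for i, s in enumerate(samples):
        state = propagate(state, s)      # cheap: runs at IMU_HZ
        if (i + 1) % CORR_DIV == 0:      # expensive: runs at CORR_HZ
            state = ekf_correct(state)
            corrections += 1
    return state, corrections
```

With these numbers, one second of data (400 samples) triggers only 10 filter corrections, which is where the savings on an FPU-less chip come from.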

Second, the error model is quite elegant and relatively “linear”.

Third, the algorithm is scalable:

  • You can start with a 12-state tightly coupled INS
  • Go down to a 9-state INS with custom gyro bias drift estimation
  • Move to a loosely coupled INS – 6 + 6 states, with independent orientation and velocity/position estimation
  • Go further down to 6 states with orientation estimation only (for cases when the GPS no longer provides a signal) for safe or assisted landing
  • Or even a 5-state IMU without magnetometer data

Fourth, the calculated correction does not need to be applied to the motion model at every filter step.


Unlike the original algorithm, I am using ECEF coordinates for position (so theoretically the INS can be used for around-the-globe flights, because Earth is not locally approximated by a sphere), converted to LLA later. This brings some more “linearization” to the model and enables another optimization – the possibility of using single-precision floating point for the EKF, which is important for processors without an FPU and with limited RAM. The next change, which enables the separation of the motion model and the error correction, is the transfer of the estimated gyro bias drift into the motion model. Next, it is not even necessary to use 12 states for the tightly coupled INS, because the gyro bias drift can be estimated directly from δq by an expert system. Other optimizations include a fast conversion of ECEF coordinates to LLA and matrix calculation optimizations. Note: most of the changes (except ECEF and δq-based bias estimation) are not needed on the Beaglebone, where processing power is not a bottleneck.
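For the ECEF-to-LLA direction, one common closed-form approach is Bowring’s single-iteration method – a sketch of the idea, not necessarily the “fast conversion” used in my INS:

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0
F = 1.0 / 298.257223563
B = A * (1.0 - F)              # semi-minor axis
E2 = F * (2.0 - F)             # first eccentricity squared
EP2 = E2 / (1.0 - E2)          # second eccentricity squared

def ecef_to_lla(x, y, z):
    """Bowring's single-iteration ECEF -> geodetic (lat, lon [deg], alt [m])."""
    p = math.hypot(x, y)
    theta = math.atan2(z * A, p * B)          # parametric latitude guess
    lat = math.atan2(z + EP2 * B * math.sin(theta) ** 3,
                     p - E2 * A * math.cos(theta) ** 3)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    alt = p / math.cos(lat) - n
    lon = math.atan2(y, x)
    return math.degrees(lat), math.degrees(lon), alt
```

One iteration is typically accurate to well under a meter for terrestrial altitudes, which is why this family of methods suits small processors.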


Here you can see a simplified scheme of how the inertial navigation works. The motion model and error correction (MEKF) are actually only a part of it. First of all, once the sensors are read, the data has to be corrected. For the accelerometer, temperature drift is a problem, but it can be reduced – see the datasheets for details. For the gyros, temperature compensation might or might not be needed. Optionally, the measured data can be pre-filtered – with a FIR or even an IIR filter. This solely depends on the situation.

With the magnetometer it is even more complicated. The magnetometer has to be properly calibrated (in all 3 axes) due to hard and soft iron distortions – see Magnetometer calibration for details.
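The usual correction is affine: subtract the hard-iron offset, then multiply by the soft-iron matrix (both obtained from an ellipsoid fit, as in the calibration article). A sketch with made-up calibration numbers:

```python
# Hard-iron offset and soft-iron matrix are placeholders for illustration,
# not real calibration results.
B_HARD = (12.0, -7.5, 3.1)                 # hard-iron offset [uT]
S_SOFT = ((1.02, 0.01, 0.00),              # soft-iron correction matrix
          (0.01, 0.98, 0.00),
          (0.00, 0.00, 1.01))

def correct_mag(m_raw):
    """Apply m_cal = S * (m_raw - b) to a raw 3-axis magnetometer sample."""
    d = [m_raw[i] - B_HARD[i] for i in range(3)]
    return tuple(sum(S_SOFT[r][c] * d[c] for c in range(3)) for r in range(3))
```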

GPS velocity and position might have to be recalculated to ENU and ECEF coordinates (box GPS conversion).

The next part, “unrelated” to the INS, is the magnetic field vector calculation (box on the right). The Earth’s magnetic field vector does not point to the geographic North Pole but to the magnetic one, and it changes over time. Luckily, there are models which predict the changes and enable you to calculate the vector in ECEF (recalculated to ENU) coordinates. Look at the IGRF and WMM models.
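Whichever model you use, its output (declination D, inclination I, total intensity F) maps to an ENU vector as follows – a sketch of the mapping only, not the IGRF/WMM code itself:

```python
import math

def mag_field_enu(decl_deg, incl_deg, intensity):
    """Map declination/inclination/total intensity to an ENU field vector."""
    d = math.radians(decl_deg)
    i = math.radians(incl_deg)
    h = intensity * math.cos(i)        # horizontal component
    return (h * math.sin(d),           # East
            h * math.cos(d),           # North
            -intensity * math.sin(i))  # Up (field dips below the horizon
                                       # in the northern hemisphere)
```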

The motion model takes accelerometer and gyrometer data and calculates orientation, velocity and position. On a regular basis, the calculated values are corrected by the MEKF (Multiplicative Extended Kalman Filter), which runs the error model.

Output from the motion model can be taken as it is, but usually other transformations are needed. For example, the quaternion qRPY can be converted to Euler angles, and the ECEF position can be converted to LLA for one of the Earth models. The controller can then be fed with the data.
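For example, a quaternion-to-Euler conversion for the aircraft Z-Y-X convention might look like this (assuming a (w, x, y, z) component order, which is my choice here, not necessarily the INS’s internal one):

```python
import math

def quat_to_euler(w, x, y, z):
    """Quaternion (w,x,y,z) -> roll, pitch, yaw [rad], aircraft Z-Y-X order."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    s = 2.0 * (w * y - z * x)
    s = max(-1.0, min(1.0, s))   # clamp to avoid NaN near the gimbal-lock poles
    pitch = math.asin(s)
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw
```

Note the clamp before `asin`: with noisy, nearly-normalized quaternions the argument can drift slightly outside [-1, 1].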

This is not all; other important operations are executed in the background. The EKF’s health has to be checked regularly, and the same is valid for the output from the sensors and from the EKF / motion model, or when the object gets close to singular points. Whenever such a problem is detected, a fallback action has to follow.


Foreword: For some details, please refer to (J. Bijker, W. Steyn, or F. L. Markley, Attitude error representations for Kalman filtering).

Here are the equations which describe the motion model and the error model. They are written in continuous time, but the transformation to discrete time is simple – zero-order hold is enough (and easy to implement).


The first formula is the motion model. Orientation (represented by a quaternion) qRPY→ENU is calculated by integrating the angular velocity ωRPY with the current bias bRPY subtracted (this is another difference from the original model). Object velocity vENU is calculated by taking the measured acceleration (transformed to ENU) and subtracting gravity. Position pECEF is calculated from the object velocity transformed to ECEF using a transformation matrix. The matrix should not be recalculated at every step but only when some distance, let’s say 30 meters, has been travelled; the resulting error is negligible.
And here comes the error model, which is used in the (M)EKF. With this model you estimate the error produced by the motion model due to various sensor inaccuracies. Besides the variables calculated in the motion model, one more is estimated in the filter: the gyrometer bias drift δbRPY. Bias drift cannot be directly measured but plays a significant role as an error source; on the other hand, it can be fairly well estimated by the error model. Because we want to separate the motion model and the correction as much as possible, the estimated drift should be moved to the motion model.
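The quaternion part of the motion model can be sketched as one Euler integration step of q̇ = ½ q ⊗ (ω − b); the helper names are my own, and a real implementation would use a better integrator:

```python
import math

def quat_mul(p, q):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def propagate_quat(q, omega, bias, dt):
    """One Euler step of q_dot = 0.5 * q (x) (omega - bias), then renormalize."""
    wx, wy, wz = (omega[i] - bias[i] for i in range(3))
    dq = quat_mul(q, (0.0, wx, wy, wz))
    q = tuple(q[i] + 0.5 * dt * dq[i] for i in range(4))
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)
```

Subtracting the estimated bias right here, inside the motion model, is exactly the transfer of the drift estimate described above.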

Knowing the model above, you can calculate the matrix F needed for the (M)EKF. When it comes to the covariance matrices, please refer to J. Bijker, W. Steyn (the formula here omits noise acting on the states); for simplification, purely diagonal matrices can be used.

Before we look at the measurement and the matrix H, let’s examine scalability. You do not have to estimate the gyro bias drift; you can calculate it directly from the orientation error (as it is by nature a rotation vector containing the drift), so BiasEst_

Actually, the formula is oversimplified, as there are more inputs to the function – at least time and the dynamics of the object’s rotation and motion. Looking further, you can separate the error model into two (loosely coupled INS) or just estimate the object orientation only (in case GPS data is either unavailable or corrupted). Once one variant is implemented, it is very easy to add the other variants listed above.

Now back to the measurement. To be able to estimate the orientation error, we have to have two nonparallel vectors; to estimate the position error, we need a position measurement. For orientation there are two possibilities. In any case we have to measure the Earth’s magnetic field (with the magnetometer); then, in the case of a tightly coupled INS, the second vector is the velocity of the object. In the case of a loosely coupled INS it is the measured gravity – so to use a loosely coupled INS, the system cannot be too “agile”, i.e. no dynamic acceleration may act on it for a longer time. To correct the position, we have to measure the position.

Above you can see all the possible outputs estimated by the (M)EKF. The first line is for the magnetic field, the second for gravity, the third for velocity and the last for position. Using this formula you can create the H matrix and compare the estimated outputs to the measured values. For how to calculate the A matrix, please look at (J. Bijker, W. Steyn).

The last step is to propagate the estimated error back to the motion model. Now you can see why it is called a Multiplicative EKF.

It is because the orientation correction is not added to but multiplied with the orientation. The last part (optional, but used here), not present in the formula, is bRPY; it is corrected the same way as velocity and position. When the correction is done, the errors are cleared.
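The multiplicative update itself can be sketched as follows, with δθ being the small orientation error estimated by the filter and the small-angle approximation δq ≈ (1, δθ/2) applied (again my own helper, not the INS source):

```python
import math

def apply_correction(q, dtheta):
    """Multiplicative attitude update: q <- q (x) dq, dq = (1, dtheta/2)."""
    dq = (1.0, 0.5 * dtheta[0], 0.5 * dtheta[1], 0.5 * dtheta[2])
    qw, qx, qy, qz = q
    dw, dx, dy, dz = dq
    out = (qw*dw - qx*dx - qy*dy - qz*dz,   # Hamilton product q (x) dq
           qw*dx + qx*dw + qy*dz - qz*dy,
           qw*dy - qx*dz + qy*dw + qz*dx,
           qw*dz + qx*dy - qy*dx + qz*dw)
    n = math.sqrt(sum(c * c for c in out))  # renormalize after the update
    return tuple(c / n for c in out)
```

An additive update would slowly destroy the unit norm of the quaternion; the multiplicative form keeps the correction a proper (small) rotation.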

The EKF prediction and update steps are calculated in the standard way (for the EKF, see wiki).

A very simple example file (m file) with a 6-DOF IMU filter can be found in the download section.



This is very difficult to estimate. Essentially, the estimation errors are quite small – less than 0.5 degrees for orientation, 0.3 m/s for velocity and less than 1 meter for position (for the sensors used). But this is valid only under ideal conditions, meaning only white noise is present in the input (→ EKF states) and the measurement. In practice this cannot be achieved due to other, uncompensated sensor errors. For example, accelerometer drift (gain stays pretty stable even over years) can add up to several degrees (approx. 1 degree per 0.2 m/s). Poorly compensated hard and soft iron distortion can add some more degrees. Also, the calculated Earth magnetic field (using an existing model) only guarantees an error below 1 degree. Then come nonlinear gain, cross-axis sensitivity for all sensors, errors due to object vibrations, etc. Generally it is hard to estimate, but in the worst-case scenario it can result in errors of up to a few degrees for orientation and a few meters for position.

The video below shows a service application receiving real-time INS results from the embedded system (Beaglebone + INS board). Because no GPS is attached (it is useless indoors anyway), the system is running in fallback mode and estimates only the object orientation. Note: the magnetometer is not calibrated because the system is not attached to the heli yet.

PS: The Euler angles displayed (R, P, Y) are used only for OpenGL view transformations and are not valid for anything else.

If your browser does not have webm video support, download mp4 (IE).