
Feature: error correction offsets. #48

Open · wants to merge 3 commits into base: master

Conversation

@HeikoRibberink

Adds offsets for error correction, and an example that can automatically generate offsets.

@HeikoRibberink (Author) commented Jan 30, 2022

The example works like the following pseudo-code:

Iterate x times:
   Read the accelerometer and gyro data (the current offsets are already applied).
   Calculate the difference between the read values and the target values.
   Adjust the offsets by the difference multiplied by a fraction.
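
For concreteness, here is a minimal Rust sketch of that loop. Sensor, read, ITERATIONS and SPEED are all illustrative stand-ins, not the real mpu6050 API:

const ITERATIONS: usize = 2_000;
const SPEED: f32 = 0.1; // fraction of each delta folded into the offsets

struct Sensor; // hypothetical driver handle

impl Sensor {
    // Hypothetical: one accelerometer sample with `offsets` already applied.
    fn read(&mut self, offsets: [f32; 3]) -> [f32; 3] {
        // Real code would sample the MPU6050 over I2C and add `offsets`.
        [offsets[0], offsets[1], offsets[2] + 1.0]
    }
}

fn compute_offsets(sensor: &mut Sensor, target: [f32; 3]) -> [f32; 3] {
    let mut offsets = [0.0_f32; 3];
    for _ in 0..ITERATIONS {
        let reading = sensor.read(offsets);
        for axis in 0..3 {
            // Nudge each offset a fraction of the way toward the target.
            let delta = target[axis] - reading[axis];
            offsets[axis] += delta * SPEED;
        }
    }
    offsets
}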

@HeikoRibberink (Author)

@juliangaal Could you please still take a look at this PR?

@juliangaal (Owner)

I can and will, but I'm short on time at the moment.

@HeikoRibberink (Author)

I understand. Thank you in advance for your time.

@juliangaal (Owner)

Thank you for your effort. In the meantime (in case you didn't know), you can point your Cargo.toml directly at a GitHub repository (your fork, in this case) if you need the feature ASAP.
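
For example (the fork URL and branch below are placeholders, not a tested reference):

[dependencies]
mpu6050 = { git = "https://github.com/HeikoRibberink/mpu6050", branch = "master" }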

@rursprung

Just as an idea: why not put the calibration logic into the lib, either as a dedicated pub fn calibrate or directly as part of init (I'd still put it in a separate fn, just not a pub one, and call it from init)? Accordingly, keep the calibration data private and apply it when returning values, so that the driver returns calibrated data.

@HeikoRibberink (Author) commented Oct 31, 2022

That might be a good option for bigger projects.
But it does pose a problem: since it would be time-consuming (and unnecessary) to recalibrate the sensor at every start-up, we would also need a mechanism to store the calibration data to a file and load it back.
I intended this commit as a quick solution for prototyping (much like this whole repo, it seems), not as code to be deployed by an enterprise.

@rursprung

Maybe offer two APIs?

  • calibrate() -> Result<CalibrationData, ...>
  • load_calibration_data(data: &CalibrationData) -> Result<(), ...>

Then the consumer can decide:

  • to not do any calibration
  • to do a calibration every time
  • to do a calibration only if no calibration data has been stored (by the consumer!) and otherwise use the previous data

@juliangaal (Owner)

I'll take a serious look after my thesis is completed. Great suggestion, @rursprung!

@HeikoRibberink (Author)

@juliangaal I'll have time at the end of next week to write the basis for this feature and incorporate it into this PR.

@juliangaal (Owner) commented Nov 3, 2022

Perfect. As soon as I can test on my hardware, I'll merge. Please take your time; my thesis deadline is Dec 02.

@juliangaal (Owner)

> • to do a calibration only if no calibration data has been stored (by the consumer!) and otherwise use the previous data

What exactly do you mean by "the previous data"? I've searched the datasheet and register map for any reference to calibration registers or something similar, but found nothing.

@juliangaal linked an issue on Dec 4, 2022 that may be closed by this pull request.
@juliangaal (Owner)

Work has begun in the dev branch. Please also change the base of this pull request to dev for further changes, @HeikoRibberink.

@rursprung

> • to do a calibration only if no calibration data has been stored (by the consumer!) and otherwise use the previous data
>
> What exactly do you mean by "the previous data"? I've searched the datasheet and register map for any reference to calibration registers or something similar, but found nothing.

Sorry if I didn't make this clear ☹️. This is where the "by the consumer" part of the point comes in: it's the responsibility of the consumer to store the calibration data somewhere (in some permanent storage). I'm not aware of any calibration registers on the MPU6050 either.

@juliangaal (Owner) commented Dec 4, 2022

Ah, OK, that makes sense. Regarding your suggested calibrate function: does it calibrate and return the resulting offsets, or only return offsets that then have to be loaded manually? I guess "the previous data" is produced by calibrate?

@rursprung

> Ah, OK, that makes sense. Regarding your suggested calibrate function: does it calibrate and return the resulting offsets, or only return offsets that then have to be loaded manually? I guess "the previous data" is produced by calibrate?

My suggestion before was this API (I've now added &mut self in the quote because I realised that I forgot it before):

  • calibrate(&mut self) -> Result<CalibrationData, ...>
  • load_calibration_data(&mut self, data: &CalibrationData) -> Result<(), ...>

calibrate would compute the calibration offsets, store them internally in the Mpu6050 struct, and return them so that the consumer can persist them (if desired; otherwise the consumer can simply ignore the returned calibration data).
load_calibration_data would overwrite the internal calibration data (in Mpu6050) with the data passed in.

So a consumer could do something like this (pseudocode):

let i2c = ...; // platform-specific I2C peripheral
let mut delay = ...;
let mut mpu = Mpu6050::new(i2c);
mpu.init(&mut delay)?;

// load_calibration_data/store_calibration_data are the consumer's own
// persistence helpers (file, flash, EEPROM, ...).
if let Some(calibration_data) = load_calibration_data() {
    mpu.load_calibration_data(&calibration_data)?;
} else {
    let calibration_data = mpu.calibrate()?;
    store_calibration_data(&calibration_data);
}
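
To make the consumer side concrete, here is a hedged sketch of those two persistence helpers for a hosted (std) environment, using serde (derive feature) and serde_json; the CalibrationData fields are assumptions, and embedded targets would use flash/EEPROM instead:

use serde::{Deserialize, Serialize};
use std::fs;

#[derive(Serialize, Deserialize)]
struct CalibrationData {
    acc_offsets: [f32; 3],  // assumed field
    gyro_offsets: [f32; 3], // assumed field
}

fn store_calibration_data(data: &CalibrationData) {
    // Errors ignored for brevity; real code should handle them.
    if let Ok(json) = serde_json::to_string(data) {
        let _ = fs::write("calibration.json", json);
    }
}

fn load_calibration_data() -> Option<CalibrationData> {
    let text = fs::read_to_string("calibration.json").ok()?;
    serde_json::from_str(&text).ok()
}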

@juliangaal (Owner)

@HeikoRibberink three questions regarding your calibration code:

  • Isn't speed always 1/DELAY?
  • Shouldn't delta be squared?
  • TEST_ITERATIONS and ITERATIONS must be the same for the calculated errors to make sense, correct?

Thanks for your help, and excuse my long, long delay.

@HeikoRibberink (Author) commented Dec 4, 2022

> Isn't speed always 1/DELAY?

No; the fact that the default SPEED I chose, 0.1, is equal to 1/DELAY (1/10) is just a coincidence. SPEED is probably a misnomer: it determines how much the offsets are updated per iteration, and thus how much noise affects them. Too small, and the offsets will never reach the equilibrium; too large, and they will be affected too much by noise. Finding this balance is up to the user. I don't know what a better name would be; I'm open to suggestions.
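
A sketch of why this converges (assuming a noiseless reading r_n = m + o_n, where m is the fixed raw mean, t the target, and s = SPEED): the update is

o_{n+1} = o_n + s * (t - r_n) = o_n + s * (t - m - o_n)

so the residual e_n = (t - m) - o_n satisfies e_{n+1} = (1 - s) * e_n and decays geometrically for 0 < s < 1. Measurement noise is what forces s well below 1 in practice.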

> Shouldn't delta be squared?

Delta is the difference between the expected reading and the actual reading, and the algorithm tries to minimise this difference. In the part where the offsets are calculated, it doesn't really make sense to square it: the difference can be minimised linearly, and squaring might lead to overshooting.
In the part where the error is calculated, I chose to make the error depend on delta squared, as that is common practice as far as I know. delta.abs() would work as an alternative.

> TEST_ITERATIONS and ITERATIONS must be the same for the calculated errors to make sense, correct?

The errors are averaged by multiplying by inverse_i, which is the same as dividing by TEST_ITERATIONS as f32. Therefore TEST_ITERATIONS only changes the precision of the final error.

(Side note: it is probably better to divide by TEST_ITERATIONS as f32 than to multiply by inverse_i, as the former is much clearer. I originally chose the latter because it is, in theory, slightly more performant; I realise now that this doesn't outweigh the loss of clarity.)
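
In code, the two forms are equivalent (variable names here are illustrative, not the exact ones from the example):

let error = squared_delta_sum * inverse_i;              // current: inverse_i = 1.0 / TEST_ITERATIONS as f32
let error = squared_delta_sum / TEST_ITERATIONS as f32; // clearer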

@juliangaal (Owner)

> equilibrium

Couldn't this achievement/non-achievement be detected at runtime and returned to the user? Or could an error that is too large be detected and the calibration aborted?

@HeikoRibberink (Author)

> equilibrium
>
> Couldn't this achievement/non-achievement be detected at runtime and returned to the user? Or could an error that is too large be detected and the calibration aborted?

Good idea; it would take a lot of hassle away from the user of this library. However, how would we implement this without the chance of it looping infinitely, while keeping a high accuracy?

@juliangaal (Owner)

I guess convergence can be measured in two ways:

  1. the change in offset since the last adjustment is below a threshold
  2. the error is below a certain threshold

The latter is more indirect, but combining both metrics is a reasonable approximation, I would think.

An infinite loop would simply be prevented with a maximum number of allowed loops. If the maximum is reached without 1. and 2. being met, the calibration has "officially" failed and the parameters will need to be adjusted.
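
A minimal sketch of that combined stopping rule, reusing the hypothetical Sensor from the sketch earlier in the thread; all thresholds and limits are illustrative:

const MAX_LOOPS: usize = 10_000;
const OFFSET_EPS: f32 = 1e-4; // 1. change-in-offset threshold
const ERROR_EPS: f32 = 1e-3;  // 2. error threshold

fn calibrate_until_converged(
    sensor: &mut Sensor,
    target: [f32; 3],
    speed: f32,
) -> Result<[f32; 3], &'static str> {
    let mut offsets = [0.0_f32; 3];
    for _ in 0..MAX_LOOPS {
        let reading = sensor.read(offsets);
        let mut max_step = 0.0_f32;
        let mut max_err = 0.0_f32;
        for axis in 0..3 {
            let delta = target[axis] - reading[axis];
            let step = delta * speed;
            offsets[axis] += step;
            max_step = max_step.max(step.abs());
            max_err = max_err.max(delta.abs());
        }
        // Declare convergence only when both metrics are below threshold.
        if max_step < OFFSET_EPS && max_err < ERROR_EPS {
            return Ok(offsets);
        }
    }
    Err("calibration did not converge; adjust SPEED or the thresholds")
}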

Successfully merging this pull request may close these issues.

Feature request: offsets for raw readings