Testing Decoder Performance
This page describes our approach to testing the performance of the different radiosonde detection and decoding signal processing chains. Thanks to David Rowe for assistance with developing this testing strategy.
Our goal is to produce the best performing radiosonde reception software - to decode ALL the radiosondes! The strategy:
- Get some clean (very high SNR) samples of radiosonde signals - these are our reference samples.
- Add calibrated noise to the reference samples to produce a set of samples with known Eb/N0 (SNR-per-bit).
- Run the set of samples through a range of demodulation options, and determine the settings which produce the best performance.
A plot of the Bit-Error-Rate (BER) vs Eb/N0 performance of an ideal FSK demodulator can be found on page 16 of this presentation. In the world of radiosonde decoding, we care more about reception of complete packets, so instead of using BER as a metric, we will use PER - the Packet Error Rate. For small BERs, we can relate the two using the approximation PER ≈ N*BER, where N is the number of bits in the packet.
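This approximation follows from assuming independent bit errors: a packet of N bits is only received correctly if every one of its bits is correct, so

$$\mathrm{PER} = 1 - (1 - \mathrm{BER})^N \approx N \cdot \mathrm{BER} \qquad (N \cdot \mathrm{BER} \ll 1)$$

Expanding to first order in BER gives the PER ≈ N*BER form; the exact expression is what yields the ~12% figure in the example below.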
As an example, to receive an RS41 packet, we need to receive a header (64 bits) and a frame (320 bytes), for a total of 2624 bits. If we aim for a PER of 1e-3 (we lose one packet out of every 1000), we find we need a BER of around 3e-7. Consulting the BER vs Eb/N0 plot, we find that we need an Eb/N0 of about 14.5 dB to achieve this. However, if we lose just another 2 dB of Eb/N0, the BER rises to ~5e-5, and hence the PER to 12%! Decode performance clearly degrades very quickly once below threshold. Some of this can be clawed back by making use of any forward error correction that may be present - for example, the RS41 uses a Reed-Solomon (255,231) code, which provides some coding gain.
All of this analysis is for an ideal demodulator. In practice, our demodulators will have a certain amount of implementation loss - our aim in this project is to make that implementation loss as small as possible.
A collection of notes on performance changes is being maintained in the auto_rx/test/notes/ directory, available here: https://github.com/projecthorus/radiosonde_auto_rx/tree/master/auto_rx/test/notes
auto_rx uses a combination of IQ and FM-demodulated input, depending on what task it is performing. To produce files for unit testing, we need to convert our set of radiosonde signal samples into the appropriate format to emulate the output from rtl_fm (which is operated in either IQ-output or FM-output mode). The radiosonde signal samples are (mostly) 96 kHz floating-point (complex float32) IQ, centred over the sonde signal. The exception is the LMS6-1680 sample, which is at a 500 kHz sample rate.
Currently auto_rx performs detection using FM-demodulated audio, with a 22 kHz FM filter bandwidth / output sample rate. To convert one of the 96 kHz IQ samples, we need to use Viproz's fork of rtl_sdr, which can take samples from stdin and process them as if they came from an RTLSDR. rtl_fm uses an 8x oversampling rate when performing FM demodulation, so our input sample rate must be 8x the output sample rate - in this case 8 x 22 kHz = 176 kHz, giving a resampling ratio of 176/96 ≈ 1.8333. We do this conversion using the tsrc utility, located in the utils directory. We also need to shift the signal to -0.25 fs, where rtl_fm expects it to be, and finally convert the samples into unsigned 8-bit format to feed into rtl_fm_stdin.
$ cat rs41_96k_float.bin | csdr convert_f_s16 | ./tsrc - - 1.8333 | csdr convert_s16_f | csdr shift_addition_cc -0.25000 2>/dev/null | csdr convert_f_u8 | ./rtl_fm_stdin -M fm -f 401000000 -F9 -s 22000 2>/dev/null > output.bin
Within auto_rx, the raw output from rtl_fm is converted into wav format using sox, and then fed into dft_detect:
<demodulation stuff here> | sox -t raw -r 22000 -e s -b 16 -c 1 - -r 48000 -b 8 -t wav - highpass 20 2>/dev/null |../dft_detect -t 5
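Putting the two stages together, the complete FM-based detection test chain for the RS41 sample is a straightforward concatenation of the two commands above:

$ cat rs41_96k_float.bin | csdr convert_f_s16 | ./tsrc - - 1.8333 | csdr convert_s16_f | csdr shift_addition_cc -0.25000 2>/dev/null | csdr convert_f_u8 | ./rtl_fm_stdin -M fm -f 401000000 -F9 -s 22000 2>/dev/null | sox -t raw -r 22000 -e s -b 16 -c 1 - -r 48000 -b 8 -t wav - highpass 20 2>/dev/null | ../dft_detect -t 5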
We will soon be moving to IQ input for dft_detect for narrowband sondes (FM demodulation will probably still be used for the LMS6-1680, which has a huge bandwidth). For this input, we need to supply 48 kHz sample rate, 16-bit signed IQ, with the sonde signal centred at 0 Hz.
$ cat rs41_96k_float.bin | csdr convert_f_s16 | ./tsrc - - 0.5000 > output.bin
This can then be fed directly into dft_detect, supplying the sample rate and number of bits-per-sample as follows:
<demodulation here> | ../dft_detect --iq --dc - 48000 16
NOTE: This IQ detection mode is not in use yet - we will be switching to it soon.
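For a quick end-to-end check, the conversion and detection steps can be chained directly (a sketch combining the two commands above):

$ cat rs41_96k_float.bin | csdr convert_f_s16 | ./tsrc - - 0.5000 | ../dft_detect --iq --dc - 48000 16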
Each sonde decode chain uses a slightly different FM demodulation bandwidth for optimal performance:
- RS41: 15 kHz
- RS92 (400 MHz): 12 kHz
- RS92 (1680 MHz): 28 kHz
- DFM: 15 kHz
- M10: 22 kHz
- iMet: 15 kHz
- LMS6 (400 MHz): 15 kHz (Note that we use the experimental decoder for LMS6-400 sondes by default)
- LMS6 (1680 MHz): 200 kHz (Note: Not sure the test samples can be demodulated in this way)
- Meisei iMS100: 15 kHz
Creation of test files follows the same pattern as the dft_detect step above, noting the different output (and hence different 8x-oversampled input) sample rates. For example, for the RS41 chain (15 kHz bandwidth, so an 8 x 15 = 120 kHz input rate, and a resampling ratio of 120/96 = 1.25):
$ cat rs41_96k_float.bin | csdr convert_f_s16 | ./tsrc - - 1.2500 | csdr convert_s16_f | csdr shift_addition_cc -0.25000 2>/dev/null | csdr convert_f_u8 | ./rtl_fm_stdin -M fm -f 401000000 -F9 -s 15000 2>/dev/null > output.bin
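As another sketch (assuming a hypothetical rs92_96k_float.bin sample file): the RS92-400 chain uses a 12 kHz bandwidth, so the required input rate is 8 x 12 = 96 kHz - the same rate as the sample files - and the tsrc resampling step can be dropped entirely:

$ cat rs92_96k_float.bin | csdr shift_addition_cc -0.25000 2>/dev/null | csdr convert_f_u8 | ./rtl_fm_stdin -M fm -f 401000000 -F9 -s 12000 2>/dev/null > output.bin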
The experimental decode chains use fsk_demod, which accepts signed-16-bit IQ input, with the signal located at +0.25fs. We can prepare a suitable input file from the 96 kHz float samples using:
$ cat rs41_96k_float.bin | csdr shift_addition_cc 0.125 2>/dev/null | csdr convert_f_s16 | ./tsrc - - 0.500 > output.bin
Note that we shift the signal by +0.125 fs at the 96 kHz input rate (= 12 kHz); after the 2x downsample to 48 kHz, this places the signal at +0.25 fs as required.
Most of the experimental decoders take 48 kHz sample rate input, with the following exceptions:
- RS92 (1680 MHz): 96 kHz used for better drift tracking
- DFM: 50 kHz
- M10: 48.080 kHz, due to weird baud rate
The shift and resample steps above will need to be adjusted to suit; a sketch for the DFM case follows.
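For the DFM chain (50 kHz output; dfm_96k_float.bin is a hypothetical filename), the signal needs to end up at +0.25 x 50 kHz = 12.5 kHz, which is 12.5/96 ≈ 0.13021 of the 96 kHz input rate, with a resampling ratio of 50/96 ≈ 0.52083:

$ cat dfm_96k_float.bin | csdr shift_addition_cc 0.13021 2>/dev/null | csdr convert_f_s16 | ./tsrc - - 0.52083 > output.bin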
Note: Only sondes for which I have decent high-SNR samples (and from which low-SNR samples can be generated) have statistics in this section.
Testing is performed using the test_demod.py script, run over samples generated by generate_lowsnr.py.
Detection performance is given as the Eb/N0 value (in dB) at which a sonde type is reliably detected (i.e. the correlation score is above the detection threshold).
Sonde Type | Eb/N0 (dB), 0 Hz Offset | Eb/N0 (dB), +/- 5 kHz Offset |
---|---|---|
DFM | 12 | 12.5 |
RS41 | 9 | 11.5 |
RS92 | 12 | 14.5 |
M10 | ~7 | 8 |
LMS6-400 | 8.5 | 10.5 |
iMet | 17 | 17 |
- Note: The detector thresholds are intentionally set so that a higher SNR is required before a decoder is started. This helps avoid starting a decoder at an SNR too low to decode with, which would leave it sitting around for 2 minutes before timing out.
Demodulator performance is given as the Eb/N0 point at which the calculated Packet Error Rate (PER) crosses 50%.
Sonde Type | Demod Type | Eb/N0 (dB) for 50% PER |
---|---|---|
DFM | fsk_demod | 6.9 |
RS41 | fsk_demod | 10.2 |
RS92 | fsk_demod | 10.4 |
M10 | fsk_demod | 8.9 |
LMS6-400 | fsk_demod | 5.8 |
iMet 4 | FM demod | 18 |
Notes:
- iMet performance is degraded compared to the others due to its use of AFSK.
- Unsure if the good LMS6-400 performance is due to its use of concatenated codes (a rate-1/2 convolutional code plus Reed-Solomon), or just an error in the low-SNR sample generation.
- No suitable test samples for LMS6-1680 and Meisei iMS100 sondes.