
A Spatiotemporal Multi-Channel Learning Framework for Automatic Modulation Recognition

by Jialang Xu (e-mail: [email protected]), Chunbo Luo, Gerard Parr, Yang Luo.

Official implementation of the paper 'A Spatiotemporal Multi-Channel Learning Framework for Automatic Modulation Recognition'.

This repository contains the MCLDNN implementation and the datasets used in the paper.

The code in this repository has also been integrated into the AMR Benchmark, which provides a unified implementation of several baseline deep learning models for automatic modulation recognition, thanks to the great contribution of Fuxin Zhang.

Introduction

Automatic modulation recognition (AMR) plays a vital role in modern communication systems. We propose a novel three-stream deep learning framework that extracts features from the individual and combined in-phase/quadrature (I/Q) symbols of the modulated data. The proposed framework integrates one-dimensional (1D) convolutional, two-dimensional (2D) convolutional and long short-term memory (LSTM) layers to extract features more effectively from both a temporal and a spatial perspective. Experiments on the benchmark dataset show that the proposed framework converges quickly and achieves improved recognition accuracy, especially for signals modulated by higher-order schemes such as 16 quadrature amplitude modulation (16-QAM) and 64-QAM. A minimal code sketch of the three-stream idea is given below.
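The sketch below illustrates the three-stream spatiotemporal structure in Keras (matching the Keras/TensorFlow versions listed under Requirements). The filter counts, kernel shapes and layer sizes are illustrative assumptions, not the exact MCLDNN configuration; refer to the code in this repository for the authoritative model.

```python
# Illustrative sketch of a three-stream spatiotemporal model in Keras.
# Hyperparameters (filter counts, kernel sizes, LSTM units) are assumptions
# for illustration; they do not reproduce the exact MCLDNN configuration.
from keras.models import Model
from keras.layers import (Input, Conv1D, Conv2D, LSTM, Dense, Reshape,
                          concatenate)

def build_sketch(num_classes=11, seq_len=128):
    # Combined I/Q stream: a 2 x seq_len "image" with one channel.
    iq_in = Input(shape=(2, seq_len, 1), name='iq_input')
    # Separate I and Q streams: one real-valued sequence each.
    i_in = Input(shape=(seq_len, 1), name='i_input')
    q_in = Input(shape=(seq_len, 1), name='q_input')

    # Spatial features from the combined I/Q representation (2D convolution).
    x_iq = Conv2D(50, (2, 8), padding='same', activation='relu')(iq_in)

    # Features from the individual channels (1D convolutions).
    x_i = Conv1D(50, 8, padding='same', activation='relu')(i_in)
    x_q = Conv1D(50, 8, padding='same', activation='relu')(q_in)

    # Lift the single-channel streams to 2D and fuse all three streams.
    x_i = Reshape((1, seq_len, 50))(x_i)
    x_q = Reshape((1, seq_len, 50))(x_q)
    x_sep = concatenate([x_i, x_q], axis=1)      # shape (2, seq_len, 50)
    x = concatenate([x_iq, x_sep], axis=-1)      # fuse along the channel axis
    x = Conv2D(100, (2, 5), padding='valid', activation='relu')(x)

    # Temporal modelling of the fused feature sequence with LSTMs.
    x = Reshape((seq_len - 4, 100))(x)
    x = LSTM(128, return_sequences=True)(x)
    x = LSTM(128)(x)

    x = Dense(128, activation='relu')(x)
    out = Dense(num_classes, activation='softmax')(x)
    return Model(inputs=[iq_in, i_in, q_in], outputs=out)
```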

Citation

If this work is useful for your research, please consider citing:

@ARTICLE{9106397,
	author={J. {Xu} and C. {Luo} and G. {Parr} and Y. {Luo}},
	journal={IEEE Wireless Communications Letters}, 
	title={A Spatiotemporal Multi-Channel Learning Framework for Automatic Modulation Recognition}, 
	year={2020},
	volume={9},
	number={10},
	pages={1629-1632},
	doi={10.1109/LWC.2020.2999453}
	}

Content

Model Performance

The recognition accuracy of the MCLDNN is shown in Fig. 1.

Fig. 1. Recognition accuracy comparison on the RadioML2016.10a dataset.

Datasets

The available datasets can be downloaded from the table below:

| Datasets | Download |
| --- | --- |
| RadioML2016.10a | [Official] |
| RadioML2016.10b | [Official] |
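For orientation, the snippet below sketches one way to load the RadioML2016.10a pickle file in Python. The (modulation, SNR) key layout and the (N, 2, 128) sample shape follow the public dataset description; verify them against your downloaded copy. This is an illustrative sketch, not the repository's own loading code.

```python
# Minimal sketch of loading RadioML2016.10a; verify the key layout and
# array shapes against your downloaded copy of the dataset.
import pickle
import numpy as np

with open('/path/to/RML2016.10a_dict.pkl', 'rb') as f:
    data = pickle.load(f, encoding='latin1')  # Python 3 needs latin1 decoding

mods = sorted({mod for mod, snr in data.keys()})
snrs = sorted({snr for mod, snr in data.keys()})

# Stack all (modulation, SNR) groups into one array of I/Q frames.
X = np.vstack([data[(mod, snr)] for mod in mods for snr in snrs])
print(X.shape)  # expected (220000, 2, 128) for RadioML2016.10a
```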

Requirements

  • Python 3.6.10
  • TensorFlow-gpu 1.14.0
  • Keras-gpu 2.2.4

Training

For the RadioML2016.10a dataset:

python train.py --datasetpath /path/to/RML2016.10a_dict.pkl --data 0

For the RadioML2016.10b dataset:

python train.py --datasetpath /path/to/RML2016.10b.dat --data 1

Testing

For the RadioML2016.10a dataset:

python test.py --datasetpath /path/to/RML2016.10a_dict.pkl --data 0

For the RadioML2016.10b dataset:

python test.py --datasetpath /path/to/RML2016.10b.dat --data 1

Model Weights

Weights for the RML2016.10a dataset. [GitHub download]
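As an illustration only, the snippet below shows how saved Keras weights could be restored for evaluation. It reuses the hypothetical build_sketch model from the Introduction above, and the weight file path is a placeholder; test.py is the repository's supported way to evaluate the released weights.

```python
# Hedged sketch: restore downloaded weights into a compatible model.
# '/path/to/weights.h5' is a placeholder; the model architecture must match
# the one the weights were trained with (here, the earlier build_sketch).
model = build_sketch(num_classes=11)
model.load_weights('/path/to/weights.h5')
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```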

Acknowledgement

Note that our code is partly based on radioml. Thanks to leena201818 for their great work!