Nu-AI/Posit_Research_Studies

This repository summarizes research studies on deep learning with posit arithmetic. The studies are sorted by publication date, whether on a conference/journal, arXiv, or OpenReview. Feel free to reach me (Seyed Hamed Fatemi Langroudi) at [email protected] if your publication is missing or its publication date is wrong.
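For readers new to the format: a posit⟨n, es⟩ packs a sign bit, a run-length-encoded regime (coarse scale), up to es exponent bits, and the remaining bits as fraction, which is what gives posits their tapered precision around 1.0. Below is a minimal decoding sketch in plain Python; the function and its posit⟨8,1⟩ defaults are illustrative only and are not taken from any of the papers listed here.

```python
def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit bit pattern into a float (illustrative sketch)."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")                      # NaR ("Not a Real")

    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:
        bits = -bits & mask                      # negatives decode via two's complement

    payload = format(bits, f"0{n}b")[1:]         # bits after the sign bit
    lead = payload[0]
    run = len(payload) - len(payload.lstrip(lead))
    k = run - 1 if lead == "1" else -run         # regime value from the run length
    rest = payload[run + 1:]                     # skip the regime terminator bit

    e_bits = rest[:es]
    exp = int(e_bits.ljust(es, "0"), 2) if e_bits else 0   # missing exponent bits are zeros
    f_bits = rest[es:]
    frac = int(f_bits, 2) / (1 << len(f_bits)) if f_bits else 0.0

    # value = sign * (1 + frac) * useed^k * 2^exp, where useed = 2^(2^es)
    return sign * (1.0 + frac) * 2.0 ** (k * (1 << es) + exp)
```

With the posit⟨8,1⟩ defaults, `decode_posit(0b01000000)` returns 1.0, `decode_posit(0b01100000)` returns 4.0, and `decode_posit(0b00000001)` returns 2**-12 (minpos), showing how precision tapers away from 1.0.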


2018

1- H. F. Langroudi et al., Deep learning inference on embedded devices: Fixed-point vs posit, EMC2 workshop co-located with ASPLOS, March 25.

2- M. Cococcioni et al., Exploiting Posit Arithmetic for Deep Neural Networks in Autonomous Driving Applications, EETA, July 9.

3- Erwin de Haan et al., Towards Accelerating Deep Neural Network Training on FPGAs: Facilitating the Use of Variable Precision, IPSJ-HPC Technical Report, July 31.

4- H. F. Langroudi et al., PositNN: Tapered Precision Deep Learning Inference for the Edge, OpenReview, Oct. 20.

5- Jeff Johnson, Rethinking floating point for deep learning, NeurIPS Systems for ML Workshop, Nov. 1.

6- Zishen Wan, Study of Posit Numeric in Speech Recognition Neural Inference, CS247r Harvard Tech Report, Dec. 15.

2019

7- Carmichael et al., Performance-Efficiency Trade-Off of Low-Precision Numerical Formats in Deep Neural Networks, CoNGA, March 13.

8- Carmichael et al., Deep Positron: A Deep Neural Network Using the Posit Number System, DATE, March 25.

9- Raúl M. Montero et al., Template-Based Posit Multiplication for Training and Inferring in Neural Networks, arXiv, July 9.

10- H. F. Langroudi et al., PositNN Framework: Tapered Precision Deep Learning Inference for the Edge, SC, July 30.

11- H. F. Langroudi et al., Deep Learning Training on the Edge with Low-Precision Posits, arXiv, July 30.

12- H. F. Langroudi et al., Cheetah: Mixed Low-Precision Hardware & Software Co-Design Framework for DNNs on the Edge, arXiv, Aug. 6.

13- Jinming Lu et al., Training Deep Neural Networks Using Posit Number System, arXiv, Sept. 6.

14- M. Cococcioni et al., Novel arithmetics to accelerate machine learning classifiers in autonomous driving applications, ICECS, Nov. 27.

2020

15- Andre Guntoro et al., Next Generation Arithmetic for Edge Computing, DATE, March 9.

16- M. Cococcioni et al., Fast Approximations of Activation Functions in Deep Neural Networks when using Posit Arithmetic, Sensors, March 10.

17- Jinming Lu et al., Evaluations on Deep Neural Networks Training Using Posit Number System, IEEE TC, April 14.

18- Lukas Sommer et al., Comparison of Arithmetic Number Formats for Inference in Sum-Product Networks on FPGAs, FCCM, May 3.

19- Raúl M. Montero et al., Deep PeNSieve: A deep learning framework based on the posit number system, DSP, May 7.

20- M. Cococcioni et al., Fast deep neural networks for image processing using posits and ARM scalable vector extension, JRTIP, May 18.

21- Nuno Neves et al., Reconfigurable Stream-based Tensor Unit with Variable-Precision Posit Arithmetic, ASAP, July 6.

22- H. F. Langroudi et al., Adaptive Posit: Parameter aware numerical format for deep learning inference on the edge, CVPRW, July 30.

23- M. Cococcioni et al., A Novel Posit-based Fast Approximation of ELU Activation Function for Deep Neural Networks, SMARTCOMP, Sept. 14.

24- Suresh Nambi et al., ExPAN(N)D: Exploring Posits for Efficient Artificial Neural Network Design in FPGA-based Systems, IEEE Access, Oct. 24.

25- M. Cococcioni et al., Novel Arithmetics in Deep Neural Networks Signal Processing for Autonomous Driving: Challenges and Opportunities, IEEE SPM, Dec. 24.

2021

26- Ihsen Alouani et al., An Investigation on Inherent Robustness of Posit Data Representation, VLSID, Jan. 5.

27- Nhut-Minh Ho et al., Posit Arithmetic for the Training and Deployment of Generative Adversarial Networks, DATE, Feb. 1.

28- Nimish Shah et al., PIU: A 248GOPS/W Stream-Based Processor for Irregular Probabilistic Inference Networks Using Precision-Scalable Posit Arithmetic in 28nm, ISSCC, Feb. 13.

29- Raúl M. Montero et al., PLAM: A Posit Logarithm-Approximate Multiplier, TETC, Feb. 18.

30- Varun Gohil et al., Fixed-Posit: A Floating-Point Representation for Error-Resilient Applications, TCAS-II, April 10.

31- Aleksandr Yu. Romanov et al., Analysis of Posit and Bfloat Arithmetic of Real Numbers for Machine Learning, IEEE Access, June 4.

32- Gonçalo Raposo et al., PositNN: Training Deep Neural Networks with Mixed Low-Precision Posit, ICASSP, June 6.

33- Yang Wang et al., LPE: Logarithm Posit Processing Element for Energy-Efficient Edge-Device Training, AICAS, June 9.

34- H. F. Langroudi et al., ALPS: Adaptive Quantization of Deep Neural Networks With GeneraLized PositS, CVPRW, June 19.

35- Stefan Dan Ciocirlan et al., The Accuracy and Efficiency of Posit Arithmetic, arXiv, Sept. 16.

36- M. Cococcioni et al., A Lightweight Posit Processing Unit for RISC-V Processors in Deep Neural Network Applications, TETC, Oct. 21.

37- S. Walia et al., Fast and low-power quantized fixed posit high-accuracy DNN implementation, TVLSI, Dec. 10.

2022

38- M. Cococcioni et al., Small reals representations for Deep Learning at the edge: a comparison, CoNGA'22, March 2.

39- H. F. Langroudi et al., ACTION: Automated Hardware-Software Codesign Framework for Low-precision Numerical Format SelecTION in TinyML, CoNGA'22, March 2.

40- Nhut-Minh Ho et al., Qtorch+: Next Generation Arithmetic for Pytorch Machine Learning, CoNGA'22, March 2.

41- O. Desrentes et al., A Posit8 Decompression Operator for Deep Neural Network Inference, CoNGA'22, March 2.

42- M. Cococcioni et al., Experimental Results of Vectorized Posit-Based DNNs on a Real ARM SVE High Performance Computing Machine, APPLEPIES, April 9.

43- M. Zolfagharinejad et al., Posit Process Element for Using in Energy-Efficient DNN Accelerators, TVLSI, April 21.

44- Y. Nakahara et al., A Posit Based Multiply-accumulate Unit with Small Quire Size for Deep Neural Networks, IPSJ-LSI, June 14.

45- Yang Wang et al., PL-NPU: An Energy-Efficient Edge-Device DNN Training Processor With Posit-Based Logarithm-Domain Computing, TCAS-I, June 22.
