Traditional machine-learning workloads have yet to be GPU-accelerated the way deep learning and other neural net methods have. RAPIDS aims to change that, while maintaining the ease of use of the PyData ecosystem. The goal is to build a ridiculously fast open-source data science platform that allows practitioners to explore data, train ML algorithms and build applications while primarily staying in GPU memory.
The materials in this repository provide a hands-on introduction to RAPIDS. The session begins with a brief introduction to RAPIDS and Azure ML (10-15 minutes); the remainder of the time is a hands-on walkthrough of how to use RAPIDS to process data and train an ML model.
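For orientation, here is a rough sketch of the kind of workflow the hands-on session builds toward: loading data into GPU memory with cuDF and fitting a model with cuML's scikit-learn-like API. This is not one of the workshop notebooks; the synthetic data and column names are made up for illustration, and it assumes a working RAPIDS installation (cudf, cuml, cupy).

```python
# Minimal sketch of a GPU-resident workflow with RAPIDS (illustrative only).
import cupy as cp
import cudf
from cuml.linear_model import LinearRegression

# Generate a small synthetic dataset directly on the GPU.
X = cp.random.rand(10_000, 3).astype(cp.float32)
y = X @ cp.array([1.5, -2.0, 0.5], dtype=cp.float32)

# Hold the features in a cuDF DataFrame; nothing is copied back to host memory.
gdf = cudf.DataFrame({"a": X[:, 0], "b": X[:, 1], "c": X[:, 2]})

# Train and predict entirely on the GPU using cuML's scikit-learn-like API.
model = LinearRegression()
model.fit(gdf, y)
preds = model.predict(gdf)
print(preds[:5])
```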
We have two tracks:
- `notebooks/intro` introduces the concepts of GPU computing and contrasts GPU runs with their CPU equivalents.
- `notebooks/deep_dive` builds on the introductory scripts and provides more examples of how to extend and distribute the workloads (see the sketch after this list).
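As a taste of what the deep_dive track covers, the sketch below shows one common way to distribute the same style of workflow across multiple GPUs with Dask. The file path and column names are placeholders, and it assumes `dask-cuda` and `dask_cudf` are installed alongside RAPIDS.

```python
# Rough sketch of scaling a cuDF workflow across GPUs with Dask (illustrative only).
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
import dask_cudf

if __name__ == "__main__":
    # One Dask worker per visible GPU on the local machine.
    cluster = LocalCUDACluster()
    client = Client(cluster)

    # "data.csv" is a placeholder path; each partition is a cuDF DataFrame on a GPU.
    ddf = dask_cudf.read_csv("data.csv")

    # Familiar pandas-style operations run distributed across the GPUs.
    # "key" and "value" are hypothetical column names.
    result = ddf.groupby("key").value.mean().compute()
    print(result)

    client.close()
    cluster.close()
```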
- Tom Drabas (Microsoft)
- John Zedlewski (NVIDIA)
- Brad Rees (NVIDIA)
- Paul Mahler
- Nick Becker
- Joshua Patterson, General Manager of AI Infrastructure, NVIDIA. Josh leads engineering for RAPIDS.AI and is a former White House Presidential Innovation Fellow. Prior to NVIDIA, Josh worked with leading experts across the public sector, private sector, and academia to build a next-generation cyber defense platform. His current passions are graph analytics, machine learning, and large-scale system design. Josh also loves storytelling with data and creating interactive data visualizations. Josh holds a B.A. in economics from the University of North Carolina at Chapel Hill and an M.A. in economics from the University of South Carolina Moore School of Business.
- Bartley Richardson
- Brad Rees
- Michael Beaumont
- Chau Dang