MMLSpark: Microsoft Machine Learning for Apache Spark

MMLSpark is an ecosystem of tools that expands the distributed computing framework Apache Spark in several new directions. MMLSpark adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with the Microsoft Cognitive Toolkit (CNTK), LightGBM, and OpenCV. These tools enable powerful, highly scalable predictive and analytical models for a variety of data sources.

MMLSpark also brings new networking capabilities to the Spark ecosystem. With the HTTP on Spark project, users can embed any web service into their SparkML models. In this vein, MMLSpark provides easy-to-use SparkML transformers for a wide variety of Microsoft Cognitive Services. For production-grade deployment, the Spark Serving project enables high-throughput, sub-millisecond-latency web services backed by your Spark cluster.
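For illustration, here is a minimal sketch of calling an arbitrary JSON web service over a DataFrame with HTTP on Spark. The endpoint URL, column names, and response schema are placeholders, and the transformer and parser names (SimpleHTTPTransformer, JSONOutputParser) are assumed from the HTTP on Spark notebooks; consult that project's documentation for the authoritative API.

from pyspark.sql.types import StringType, StructType, StructField
from mmlspark import SimpleHTTPTransformer, JSONOutputParser

# Hypothetical service: takes the "text" column as the request body and
# returns a JSON object with a single "sentiment" field.
responseSchema = StructType([StructField("sentiment", StringType())])

client = SimpleHTTPTransformer() \
            .setInputCol("text") \
            .setOutputParser(JSONOutputParser().setDataType(responseSchema)) \
            .setUrl("http://my-service.example.com/score") \
            .setOutputCol("response")

scoredDF = client.transform(inputDF)  # issues one HTTP request per row, in parallel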

MMLSpark requires Scala 2.11, Spark 2.3+, and either Python 2.7 or Python 3.5+. See the API documentation for Scala and for PySpark.

Table of Contents

  • Projects
  • Examples
  • A short example
  • Setup and installation
  • Learn More
  • Contributing & feedback
  • Other relevant projects

Projects

  • The Cognitive Services on Spark: Leverage the Microsoft Cognitive Services at unprecedented scales in your existing SparkML pipelines
  • LIME on Spark: Distributed, model-agnostic interpretations for image classifiers
  • Spark Serving: Serve any Spark computation as a web service with sub-millisecond latency
  • LightGBM on Spark: Train gradient-boosted machines with LightGBM
  • CNTK on Spark: Distributed deep learning with the Microsoft Cognitive Toolkit
  • HTTP on Spark: An integration between Spark and the HTTP protocol, enabling distributed microservice orchestration

Examples

  • Create a deep image classifier with transfer learning (example 9)
  • Fit a LightGBM classification or regression model on a biochemical dataset (example 3); to learn more, check out the LightGBM documentation page. A minimal fitting sketch follows this list.
  • Deploy a deep network as a distributed web service with MMLSpark Serving
  • Use web services in Spark with HTTP on Apache Spark
  • Use Bi-directional LSTMs from Keras for medical entity extraction (example 8)
  • Create a text analytics system on Amazon book reviews (example 4)
  • Perform distributed hyperparameter tuning to identify Breast Cancer (example 5)
  • Easily ingest images from HDFS into a Spark DataFrame (example 6)
  • Use OpenCV on Spark to manipulate images (example 7)
  • Train classification and regression models easily via implicit featurization of data (example 1)
  • Train and evaluate a flight delay prediction system (example 2)
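For the LightGBM example above, fitting follows the standard SparkML Estimator pattern. The sketch below uses illustrative column names and parameters rather than the notebook's actual ones; see the LightGBM documentation page for the full parameter list.

import mmlspark

# Assumes trainDF and testDF each have a numeric "label" column and a
# SparkML "features" vector column (e.g. built with VectorAssembler).
lgbm = mmlspark.LightGBMClassifier(learningRate=0.1, numIterations=100) \
               .setLabelCol("label") \
               .setFeaturesCol("features")

model = lgbm.fit(trainDF)
predictions = model.transform(testDF)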

See our notebooks for all examples.

A short example

Below is an excerpt from a simple example of using a pre-trained CNN to classify images in the CIFAR-10 dataset. View the whole source code in notebook example 9.

...
import mmlspark
# Initialize CNTKModel and define input and output columns
cntkModel = mmlspark.CNTKModel() \
                    .setInputCol("images").setOutputCol("output") \
                    .setModelLocation(modelFile)
# Score the dataset: transform() runs distributed inference with the CNN
scoredImages = cntkModel.transform(imagesWithLabels)
...

See other sample notebooks as well as the MMLSpark documentation for Scala and PySpark.

Setup and installation

Spark package

MMLSpark can be conveniently installed on existing Spark clusters via the --packages option; for example:

spark-shell --packages Azure:mmlspark:0.17
pyspark --packages Azure:mmlspark:0.17
spark-submit --packages Azure:mmlspark:0.17 MyApp.jar

This can be used in other Spark contexts too. For example, you can use MMLSpark in AZTK by adding it to the .aztk/spark-defaults.conf file.
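For instance, the AZTK setup is a one-line addition to .aztk/spark-defaults.conf using standard Spark configuration syntax (the coordinate matches the release used throughout this README):

# .aztk/spark-defaults.conf
spark.jars.packages    Azure:mmlspark:0.17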

Databricks

To install MMLSpark on the Databricks cloud, create a new library from Maven coordinates in your workspace.

For the coordinates use: Azure:mmlspark:0.17. Ensure this library is attached to all clusters you create.

Finally, ensure that your Spark cluster runs at least Spark 2.3 and Scala 2.11, matching the requirements above.

You can use MMLSpark in both your Scala and PySpark notebooks. To get started with our example notebooks, import the following Databricks archive:

https://mmlspark.blob.core.windows.net/dbcs/MMLSpark%20Examples%20v0.17.dbc

Docker

The easiest way to evaluate MMLSpark is via our pre-built Docker container. To do so, run the following command:

docker run -it -p 8888:8888 -e ACCEPT_EULA=yes mcr.microsoft.com/mmlspark/release

Navigate to http://localhost:8888/ in your web browser to run the sample notebooks. See the documentation for more on Docker use.

To read the EULA for using the Docker image, run:
docker run -it -p 8888:8888 mcr.microsoft.com/mmlspark/release eula

GPU VM Setup

MMLSpark can be used to train deep learning models on GPU nodes from a Spark application. See the instructions for setting up an Azure GPU VM.

Python

To try out MMLSpark on a Python (or Conda) installation, first install Spark via pip with pip install pyspark. You can then use pyspark as in the above example, or from Python:

import pyspark
spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
            .config("spark.jars.packages", "Azure:mmlspark:0.17") \
            .getOrCreate()
import mmlspark

HDInsight

To install MMLSpark on an existing HDInsight Spark Cluster, you can execute a script action on the cluster head and worker nodes. For instructions on running script actions, see this guide.

The script action url is: https://mmlspark.azureedge.net/buildartifacts/0.17/install-mmlspark.sh.

If you're using the Azure Portal to run the script action, go to Script actions → Submit new in the Overview section of your cluster blade. In the Bash script URI field, enter the script action URL above, and apply the action to both the head and worker nodes.

Submit, and the cluster should finish configuring within 10 minutes or so.
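Alternatively, the script action can be submitted from the command line. This is a sketch assuming the Azure CLI's az hdinsight script-action execute command with placeholder resource names; verify the exact flags against the Azure CLI documentation.

az hdinsight script-action execute \
    --resource-group MyResourceGroup \
    --cluster-name MyCluster \
    --name install-mmlspark \
    --script-uri https://mmlspark.azureedge.net/buildartifacts/0.17/install-mmlspark.sh \
    --roles headnode workernode \
    --persist-on-success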

SBT

If you are building a Spark application in Scala, add the following lines to your build.sbt:

resolvers += "MMLSpark Repo" at "https://mmlspark.azureedge.net/maven"
libraryDependencies += "com.microsoft.ml.spark" %% "mmlspark" % "0.17"

Building from source

You can also create your own build by cloning this repo and running the main build script, ./runme. Run it once to install the needed dependencies, and again to do a build. See this guide for more information.

R (Beta)

To try out MMLSpark using the autogenerated R wrappers, see our instructions. Note: this feature is still under development, and some necessary custom wrappers may be missing.

Learn More

Contributing & feedback

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

See CONTRIBUTING.md for contribution guidelines.

To give feedback and/or report an issue, open a GitHub Issue.

Other relevant projects

Apache®, Apache Spark, and Spark® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
