
Apache Spark on Docker

Please download spark-2.1.1-bin-hadoop2.6.tgz first and put it in this directory.

Download URL: https://archive.apache.org/dist/spark/spark-2.1.1/spark-2.1.1-bin-hadoop2.6.tgz
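
For example (a minimal sketch, assuming wget is available; curl -O works just as well):

# download the Spark tarball into the repository root
wget https://archive.apache.org/dist/spark/spark-2.1.1/spark-2.1.1-bin-hadoop2.6.tgz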

This project is forked from https://github.com/sequenceiq/docker-spark; I just updated the Spark version.

This repository contains a Dockerfile to build a Docker image with Apache Spark. The image depends on SequenceIQ's Hadoop Docker image, available on the SequenceIQ GitHub page; the base Hadoop image is also available as an official Docker image.

Building the image

docker build --rm -t yourname/spark:2.1.1 .
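
Once the build finishes, the tagged image should show up locally (using the hypothetical yourname/spark:2.1.1 tag from the build command above):

# list the freshly built image
docker images yourname/spark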

Running the image

  • if using boot2docker make sure your VM has more than 2GB of memory
  • in your /etc/hosts file add $(boot2docker ip) as host 'sandbox' to make it easier to access the sandbox UI (see the snippet after the run commands below)
  • open the YARN UI ports when running the container
docker run -it -p 8088:8088 -p 8042:8042 -p 4040:4040 -h sandbox -e LANG="en_US.UTF-8" yourname/spark:2.1.1 bash

or

docker run -d -h sandbox yourname/spark:2.1.1 -d
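
To add the 'sandbox' host entry mentioned in the list above (a sketch, assuming boot2docker; on a native Docker host use the container's IP instead):

# map the boot2docker VM IP to the hostname 'sandbox'
echo "$(boot2docker ip) sandbox" | sudo tee -a /etc/hosts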

Versions

Hadoop 2.6.0 and Apache Spark v2.1.1 on CentOS

Testing

There are two deploy modes that can be used to launch Spark applications on YARN.

YARN-client mode

In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.

# run the spark shell
spark-shell \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1

# execute the following command, which should return 1000
scala> sc.parallelize(1 to 1000).count()

YARN-cluster mode

In yarn-cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application.

Estimating Pi (yarn-cluster mode):

# execute the following command, which should write "Pi is roughly 3.1418" into the logs
# note: you must pass the --files argument in cluster mode to enable metrics
spark-submit \
--class org.apache.spark.examples.SparkPi \
--files $SPARK_HOME/conf/metrics.properties \
--master yarn-cluster \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/examples/jars/spark-examples_2.11-2.1.1.jar
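
In cluster mode the driver output goes to the YARN container logs rather than to your terminal. One way to retrieve it (assuming log aggregation is enabled; substitute the application ID printed by spark-submit):

# fetch the aggregated logs and look for the result line
yarn logs -applicationId <application_id> | grep "Pi is roughly"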

Estimating Pi (yarn-client mode):

# execute the following command, which should print "Pi is roughly 3.1418" to the screen
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/examples/jars/spark-examples_2.11-2.1.1.jar

Submitting from outside the container

To use Spark from outside the container, set the YARN_CONF_DIR environment variable to a directory with a configuration appropriate for the Docker setup. The repository contains such a configuration in the yarn-remote-client directory.

export YARN_CONF_DIR="`pwd`/yarn-remote-client"

The HDFS inside the container can be accessed only by root. When submitting Spark applications from outside the cluster as a user other than root, set the HADOOP_USER_NAME variable so that the root user is used.

export HADOOP_USER_NAME=root
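
Putting the two settings together, a remote submission might look like this (a minimal sketch, assuming a local Spark 2.1.1 installation with spark-submit on the PATH and the container reachable as 'sandbox'):

export YARN_CONF_DIR="`pwd`/yarn-remote-client"
export HADOOP_USER_NAME=root

# submit SparkPi from the host to the YARN running inside the container
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/examples/jars/spark-examples_2.11-2.1.1.jar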
