Commit 52bce8f0 authored by Peter Parente, committed by Peter Parente

Use Apache Toree 0.2.0dev1 for Spark 2.0.2

parent 1339b518
@@ -29,5 +29,5 @@ RUN conda config --add channels r && \
     'r-rcurl=1.95*' && conda clean -tipsy
 # Apache Toree kernel
-RUN pip --no-cache-dir install toree==0.1.0.dev7
+RUN pip --no-cache-dir install https://dist.apache.org/repos/dist/dev/incubator/toree/0.2.0/snapshots/dev1/toree-pip/toree-0.2.0.dev1.tar.gz
 RUN jupyter toree install --user
@@ -11,8 +11,8 @@
 * Scala 2.10.x
 * pyspark, pandas, matplotlib, scipy, seaborn, scikit-learn pre-installed for Python
 * ggplot2, rcurl preinstalled for R
-* Spark 1.6.0 for use in local mode or to connect to a cluster of Spark workers
-* Mesos client 0.22 binary that can communicate with a Mesos master
+* Spark 2.0.2 with Hadoop 2.7 for use in local mode or to connect to a cluster of Spark workers
+* Mesos client 0.25 binary that can communicate with a Mesos master
 * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda`
 * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](../base-notebook/start-notebook.sh) as the default command
 * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub
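For the local-mode use called out in the feature list above, a minimal Python sketch looks like the following (illustrative only; the master URL and app name are arbitrary placeholders):

```python
import pyspark

# Run Spark entirely inside the container, using all available cores
sc = pyspark.SparkContext(master="local[*]", appName="local-check")
print(sc.version)                          # reports the bundled Spark version
print(sc.parallelize(range(1000)).sum())   # trivial job: 0 + 1 + ... + 999
sc.stop()
```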
@@ -112,8 +112,8 @@ conf = pyspark.SparkConf()
 # point to mesos master or zookeeper entry (e.g., zk://10.10.10.10:2181/mesos)
 conf.setMaster("mesos://10.10.10.10:5050")
 # point to spark binary package in HDFS or on local filesystem on all slave
-# nodes (e.g., file:///opt/spark/spark-1.6.0-bin-hadoop2.6.tgz)
-conf.set("spark.executor.uri", "hdfs://10.10.10.10/spark/spark-1.6.0-bin-hadoop2.6.tgz")
+# nodes (e.g., file:///opt/spark/spark-2.0.2-bin-hadoop2.7.tgz)
+conf.set("spark.executor.uri", "hdfs://10.10.10.10/spark/spark-2.0.2-bin-hadoop2.7.tgz")
 # set other options as desired
 conf.set("spark.executor.memory", "8g")
 conf.set("spark.core.connection.ack.wait.timeout", "1200")
@@ -145,10 +145,10 @@ library(SparkR)
 # point to mesos master or zookeeper entry (e.g., zk://10.10.10.10:2181/mesos)\
 # as the first argument
 # point to spark binary package in HDFS or on local filesystem on all slave
-# nodes (e.g., file:///opt/spark/spark-1.6.0-bin-hadoop2.6.tgz) in sparkEnvir
+# nodes (e.g., file:///opt/spark/spark-2.0.2-bin-hadoop2.7.tgz) in sparkEnvir
 # set other options in sparkEnvir
 sc <- sparkR.init("mesos://10.10.10.10:5050", sparkEnvir=list(
-    spark.executor.uri="hdfs://10.10.10.10/spark/spark-1.6.0-bin-hadoop2.6.tgz",
+    spark.executor.uri="hdfs://10.10.10.10/spark/spark-2.0.2-bin-hadoop2.7.tgz",
     spark.executor.memory="8g"
     )
 )
@@ -172,7 +172,7 @@ The Apache Toree kernel automatically creates a `SparkContext` when it starts ba
 For instance, to pass information about a Mesos master, a Spark binary location in HDFS, and executor options, you could start the container like so:
 `docker run -d -p 8888:8888 -e SPARK_OPTS='--master=mesos://10.10.10.10:5050 \
-  --spark.executor.uri=hdfs://10.10.10.10/spark/spark-1.6.0-bin-hadoop2.6.tgz \
+  --spark.executor.uri=hdfs://10.10.10.10/spark/spark-2.0.2-bin-hadoop2.7.tgz \
   --spark.executor.memory=8g' jupyter/all-spark-notebook`
 Note that this is the same information expressed in a notebook in the Python case above. Once the kernel spec has your cluster information, you can test your cluster in an Apache Toree notebook like so:
......
@@ -7,39 +7,36 @@ MAINTAINER Jupyter Project <jupyter@googlegroups.com>
 USER root
 # Spark dependencies
-ENV APACHE_SPARK_VERSION 2.0.0
+ENV APACHE_SPARK_VERSION 2.0.2
+ENV HADOOP_VERSION 2.7
 # Temporarily add jessie backports to get openjdk 8, but then remove that source
-RUN echo 'deb http://ftp.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/jessie-backports.list && \
+RUN echo 'deb http://cdn-fastly.deb.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/jessie-backports.list && \
     apt-get -y update && \
     apt-get install -y --no-install-recommends openjdk-8-jre-headless && \
     rm /etc/apt/sources.list.d/jessie-backports.list && \
     apt-get clean && \
     rm -rf /var/lib/apt/lists/*
 RUN cd /tmp && \
-    wget -q http://d3kbcqa49mib13.cloudfront.net/spark-${APACHE_SPARK_VERSION}-bin-hadoop2.6.tgz && \
-    echo "e17d9da4b3ac463ea3ce42289f2a71cefb479d154b1ffd00310c7d7ab207aa2c *spark-${APACHE_SPARK_VERSION}-bin-hadoop2.6.tgz" | sha256sum -c - && \
-    tar xzf spark-${APACHE_SPARK_VERSION}-bin-hadoop2.6.tgz -C /usr/local && \
-    rm spark-${APACHE_SPARK_VERSION}-bin-hadoop2.6.tgz
-RUN cd /usr/local && ln -s spark-${APACHE_SPARK_VERSION}-bin-hadoop2.6 spark
+    wget -q http://d3kbcqa49mib13.cloudfront.net/spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz && \
+    echo "e6349dd38ded84831e3ff7d391ae7f2525c359fb452b0fc32ee2ab637673552a *spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz" | sha256sum -c - && \
+    tar xzf spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz -C /usr/local && \
+    rm spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz
+RUN cd /usr/local && ln -s spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION} spark
 # Mesos dependencies
-# Currently, Mesos is not available from Debian Jessie.
-# So, we are installing it from Debian Wheezy. Once it
-# becomes available for Debian Jessie. We should switch
-# over to using that instead.
 RUN apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF && \
     DISTRO=debian && \
-    CODENAME=wheezy && \
+    CODENAME=jessie && \
     echo "deb http://repos.mesosphere.io/${DISTRO} ${CODENAME} main" > /etc/apt/sources.list.d/mesosphere.list && \
     apt-get -y update && \
-    apt-get --no-install-recommends -y --force-yes install mesos=0.22.1-1.0.debian78 && \
+    apt-get --no-install-recommends -y --force-yes install mesos=0.25.0-0.2.70.debian81 && \
     apt-get clean && \
     rm -rf /var/lib/apt/lists/*
 # Spark and Mesos config
 ENV SPARK_HOME /usr/local/spark
-ENV PYTHONPATH $SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.1-src.zip
+ENV PYTHONPATH $SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.3-src.zip
 ENV MESOS_NATIVE_LIBRARY /usr/local/lib/libmesos.so
 ENV SPARK_OPTS --driver-java-options=-Xms1024M --driver-java-options=-Xmx4096M --driver-java-options=-Dlog4j.logLevel=info
......
@@ -7,8 +7,8 @@
 * Jupyter Notebook 4.2.x
 * Conda Python 3.x and Python 2.7.x environments
 * pyspark, pandas, matplotlib, scipy, seaborn, scikit-learn pre-installed
-* Spark 1.6.0 for use in local mode or to connect to a cluster of Spark workers
-* Mesos client 0.22 binary that can communicate with a Mesos master
+* Spark 2.0.2 with Hadoop 2.7 for use in local mode or to connect to a cluster of Spark workers
+* Mesos client 0.25 binary that can communicate with a Mesos master
 * Unprivileged user `jovyan` (uid=1000, configurable, see options) in group `users` (gid=100) with ownership over `/home/jovyan` and `/opt/conda`
 * [tini](https://github.com/krallin/tini) as the container entrypoint and [start-notebook.sh](../base-notebook/start-notebook.sh) as the default command
 * A [start-singleuser.sh](../base-notebook/start-singleuser.sh) script useful for running a single-user instance of the Notebook server, as required by JupyterHub
@@ -68,8 +68,8 @@ conf = pyspark.SparkConf()
 # point to mesos master or zookeeper entry (e.g., zk://10.10.10.10:2181/mesos)
 conf.setMaster("mesos://10.10.10.10:5050")
 # point to spark binary package in HDFS or on local filesystem on all slave
-# nodes (e.g., file:///opt/spark/spark-1.6.0-bin-hadoop2.6.tgz)
-conf.set("spark.executor.uri", "hdfs://10.122.193.209/spark/spark-1.6.0-bin-hadoop2.6.tgz")
+# nodes (e.g., file:///opt/spark/spark-2.0.2-bin-hadoop2.7.tgz)
+conf.set("spark.executor.uri", "hdfs://10.122.193.209/spark/spark-2.0.2-bin-hadoop2.7.tgz")
 # set other options as desired
 conf.set("spark.executor.memory", "8g")
 conf.set("spark.core.connection.ack.wait.timeout", "1200")
......