Commit 64c143d3 authored by Romain

Merge branch 'master' into hadolint

parents 6f2e7cb5 76402a27
Thanks for contributing! Please see the
-[Contributor Guide](https://jupyter-docker-stacks.readthedocs.io) in the documentation for
+__Contributor Guide__ section in [the documentation](https://jupyter-docker-stacks.readthedocs.io) for
information about how to contribute
[package updates](http://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/packages.html),
[recipes](http://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/recipes.html),
......
@@ -48,6 +48,8 @@ arch_patch/%: ## apply hardware architecture specific patches to the Dockerfile
build/%: DARGS?=
build/%: ## build the latest image for a stack
	docker build $(DARGS) --rm --force-rm -t $(OWNER)/$(notdir $@):latest ./$(notdir $@)
+	@echo -n "Built image size: "
+	@docker images $(OWNER)/$(notdir $@):latest --format "{{.Size}}"
build-all: $(foreach I,$(ALL_IMAGES),arch_patch/$(I) build/$(I) ) ## build all stacks
build-test-all: $(foreach I,$(ALL_IMAGES),arch_patch/$(I) build/$(I) test/$(I) ) ## build and test all stacks
@@ -145,4 +147,4 @@ test/%: ## run tests against a stack (only common tests or common tests + specif
	@if [ ! -d "$(notdir $@)/test" ]; then TEST_IMAGE="$(OWNER)/$(notdir $@)" pytest -m "not info" test; \
	else TEST_IMAGE="$(OWNER)/$(notdir $@)" pytest -m "not info" test $(notdir $@)/test; fi
test-all: $(foreach I,$(ALL_IMAGES),test/$(I)) ## test all stacks
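The `build/%` pattern rule derives both the image tag and the build context from the target path via `$(notdir $@)`, which keeps only the last path component. A minimal Python sketch of that expansion (the `jupyter` owner and stack names are illustrative, matching the Makefile's `$(OWNER)` default):

```python
import os

def image_tag(target: str, owner: str = "jupyter") -> str:
    """Mimic the Makefile's $(OWNER)/$(notdir $@):latest expansion."""
    stack = os.path.basename(target)  # $(notdir $@) keeps the last path component
    return f"{owner}/{stack}:latest"

print(image_tag("build/base-notebook"))  # jupyter/base-notebook:latest
```

So `make build/all-spark-notebook` builds and tags `jupyter/all-spark-notebook:latest` from the `./all-spark-notebook` directory.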
[![docker pulls](https://img.shields.io/docker/pulls/jupyter/all-spark-notebook.svg)](https://hub.docker.com/r/jupyter/all-spark-notebook/) [![docker stars](https://img.shields.io/docker/stars/jupyter/all-spark-notebook.svg)](https://hub.docker.com/r/jupyter/all-spark-notebook/) [![image metadata](https://images.microbadger.com/badges/image/jupyter/all-spark-notebook.svg)](https://microbadger.com/images/jupyter/all-spark-notebook "jupyter/all-spark-notebook image metadata")
-# Jupyter Notebook Python, Scala, R, Spark, Mesos Stack
+# Jupyter Notebook Python, Scala, R, Spark Stack
Please visit the documentation site for help using and contributing to this image and others.
......
@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
* Build datetime: ${BUILD_TIMESTAMP}
* DockerHub build code: ${BUILD_CODE}
* Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
+* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
* Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
* Git commit message:
\`\`\`
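The manifest hook above relies on shell expansion inside a heredoc: each `${VAR}` and `$( … )` is substituted when the manifest is written. A rough Python sketch of that substitution step, with a hypothetical sample value standing in for the `docker images … --format "{{.Size}}"` output:

```python
from string import Template

# One manifest line as a template; SIZE is a stand-in for the value that
# `docker images ${IMAGE_NAME} --format "{{.Size}}"` would supply at build time.
line = Template("* Docker image size: $SIZE")
rendered = line.substitute(SIZE="1.2GB")
print(rendered)  # * Docker image size: 1.2GB
```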
......
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pyspark.sql import SparkSession\n",
"\n",
"# Spark session & context\n",
"spark = SparkSession.builder.master('local').getOrCreate()\n",
"sc = spark.sparkContext\n",
"\n",
"# Sum of the first 100 whole numbers\n",
"rdd = sc.parallelize(range(100 + 1))\n",
"rdd.sum()\n",
"# 5050"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
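The notebook cell above sums the first 100 whole numbers through a Spark RDD. As a sanity check of the expected result, the same computation in plain Python (no Spark required):

```python
# Sum of the first 100 whole numbers, without Spark
total = sum(range(100 + 1))
print(total)  # 5050
```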
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(SparkR)\n",
"\n",
"# Spark session & context\n",
"sc <- sparkR.session(\"local\")\n",
"\n",
"# Sum of the first 100 whole numbers\n",
"sdf <- createDataFrame(list(1:100))\n",
"dapplyCollect(sdf,\n",
" function(x) \n",
" { x <- sum(x)}\n",
" )\n",
"# 5050"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "R",
"language": "R",
"name": "ir"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(sparklyr)\n",
"\n",
"# get the default config\n",
"conf <- spark_config()\n",
"# Set the catalog implementation in-memory\n",
"conf$spark.sql.catalogImplementation <- \"in-memory\"\n",
"\n",
"# Spark session & context\n",
"sc <- spark_connect(master = \"local\", config = conf)\n",
"\n",
"# Sum of the first 100 whole numbers\n",
"sdf_len(sc, 100, repartition = 1) %>% \n",
" spark_apply(function(e) sum(e))\n",
"# 5050"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "R",
"language": "R",
"name": "ir"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
{
"cells": [
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"%%init_spark\n",
"# Spark session & context\n",
"launcher.master = \"local\"\n",
"launcher.conf.spark.executor.cores = 1"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[8] at parallelize at <console>:28\n",
"res4: Double = 5050.0\n"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"// Sum of the first 100 whole numbers\n",
"val rdd = sc.parallelize(0 to 100)\n",
"rdd.sum()\n",
"// 5050"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "spylon-kernel",
"language": "scala",
"name": "spylon-kernel"
},
"language_info": {
"codemirror_mode": "text/x-scala",
"file_extension": ".scala",
"help_links": [
{
"text": "MetaKernel Magics",
"url": "https://metakernel.readthedocs.io/en/latest/source/README.html"
}
],
"mimetype": "text/x-scala",
"name": "scala",
"pygments_lexer": "scala",
"version": "0.4.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Waiting for a Spark session to start..."
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"spark://master:7077\n"
]
}
],
"source": [
"// should print the value of --master in the kernel spec\n",
"println(sc.master)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Waiting for a Spark session to start..."
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"rdd = ParallelCollectionRDD[0] at parallelize at <console>:28\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"5050.0"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"// Sum of the first 100 whole numbers\n",
"val rdd = sc.parallelize(0 to 100)\n",
"rdd.sum()\n",
"// 5050"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Apache Toree - Scala",
"language": "scala",
"name": "apache_toree_scala"
},
"language_info": {
"codemirror_mode": "text/x-scala",
"file_extension": ".scala",
"mimetype": "text/x-scala",
"name": "scala",
"pygments_lexer": "scala",
"version": "2.11.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
import logging
import os

import pytest

LOGGER = logging.getLogger(__name__)


@pytest.mark.parametrize(
"test_file",
# TODO: add local_sparklyr
["local_pyspark", "local_spylon", "local_toree", "local_sparkR"],
)
def test_nbconvert(container, test_file):
"""Check if Spark notebooks can be executed"""
host_data_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "data")
cont_data_dir = "/home/jovyan/data"
output_dir = "/tmp"
timeout_ms = 600
LOGGER.info(f"Test that {test_file} notebook can be executed ...")
command = f"jupyter nbconvert --to markdown --ExecutePreprocessor.timeout={timeout_ms} --output-dir {output_dir} --execute {cont_data_dir}/{test_file}.ipynb"
c = container.run(
volumes={host_data_dir: {"bind": cont_data_dir, "mode": "ro"}},
tty=True,
command=["start.sh", "bash", "-c", command],
)
rv = c.wait(timeout=timeout_ms / 10 + 10)
assert rv == 0 or rv["StatusCode"] == 0, f"Command {command} failed"
logs = c.logs(stdout=True).decode("utf-8")
LOGGER.debug(logs)
expected_file = f"{output_dir}/{test_file}.md"
assert expected_file in logs, f"Expected file {expected_file} not generated"
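`test_nbconvert` assembles the `jupyter nbconvert` invocation by f-string interpolation before passing it to the container. A small sketch of that command assembly, factored into a helper for clarity (the helper name is illustrative; the paths and timeout match the test's defaults):

```python
def nbconvert_command(test_file: str, output_dir: str = "/tmp",
                      data_dir: str = "/home/jovyan/data",
                      timeout: int = 600) -> str:
    """Build the nbconvert command the test runs inside the container."""
    return (
        f"jupyter nbconvert --to markdown "
        f"--ExecutePreprocessor.timeout={timeout} "
        f"--output-dir {output_dir} --execute {data_dir}/{test_file}.ipynb"
    )

print(nbconvert_command("local_pyspark"))
```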
@@ -117,7 +117,7 @@ RUN conda install --quiet --yes 'tini=0.18.0' && \
RUN conda install --quiet --yes \
    'notebook=6.0.3' \
    'jupyterhub=1.1.0' \
-    'jupyterlab=2.1.1' && \
+    'jupyterlab=2.1.3' && \
    conda clean --all -f -y && \
    npm cache clean --force && \
    jupyter notebook --generate-config && \
......
@@ -5,6 +5,9 @@
FROM ppc64le/ubuntu:18.04
LABEL maintainer="Ilsiyar Gaynutdinov <ilsiyar_gaynutdinov@ru.ibm.com>"
+ARG NB_USER="jovyan"
+ARG NB_UID="1000"
+ARG NB_GID="100"
USER root
@@ -13,88 +16,121 @@ USER root
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update \
 && apt-get install -yq --no-install-recommends \
-    build-essential \
+    wget \
    bzip2 \
    ca-certificates \
-    cmake \
-    git \
-    locales \
    sudo \
-    wget \
+    locales \
+    fonts-liberation \
+    run-one \
 && apt-get clean && rm -rf /var/lib/apt/lists/*
-RUN echo "LANGUAGE=en_US.UTF-8" >> /etc/default/locale
-RUN echo "LC_ALL=en_US.UTF-8" >> /etc/default/locale
-RUN echo "LC_TYPE=en_US.UTF-8" >> /etc/default/locale
-RUN locale-gen en_US en_US.UTF-8
+RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
+    locale-gen
-# build and install Tini for ppc64le
-RUN wget https://github.com/krallin/tini/archive/v0.18.0.tar.gz && \
-    tar zxvf v0.18.0.tar.gz && \
-    rm -rf v0.18.0.tar.gz
-WORKDIR tini-0.18.0/
-RUN cmake . && make install
-RUN mv ./tini /usr/local/bin/tini && \
-    chmod +x /usr/local/bin/tini
-WORKDIR ..
# Configure environment
-ENV CONDA_DIR /opt/conda
-ENV PATH $CONDA_DIR/bin:$PATH
-ENV SHELL /bin/bash
-ENV NB_USER jovyan
-ENV NB_UID 1000
-ENV HOME /home/$NB_USER
-ENV LC_ALL en_US.UTF-8
-ENV LANG en_US.UTF-8
-ENV LANGUAGE en_US.UTF-8
+ENV CONDA_DIR=/opt/conda \
+    SHELL=/bin/bash \
+    NB_USER=$NB_USER \
+    NB_UID=$NB_UID \
+    NB_GID=$NB_GID \
+    LC_ALL=en_US.UTF-8 \
+    LANG=en_US.UTF-8 \
+    LANGUAGE=en_US.UTF-8
+ENV PATH=$CONDA_DIR/bin:$PATH \
+    HOME=/home/$NB_USER
+# Copy a script that we will use to correct permissions after running certain commands
+COPY fix-permissions /usr/local/bin/fix-permissions
+RUN chmod a+rx /usr/local/bin/fix-permissions
+# Enable prompt color in the skeleton .bashrc before creating the default NB_USER
+RUN sed -i 's/^#force_color_prompt=yes/force_color_prompt=yes/' /etc/skel/.bashrc
-# Create jovyan user with UID=1000 and in the 'users' group
-RUN useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \
+# Create NB_USER with name jovyan, UID=1000, in the 'users' group,
+# and make sure these dirs are writable by the `users` group.
+RUN echo "auth requisite pam_deny.so" >> /etc/pam.d/su && \
+    sed -i.bak -e 's/^%admin/#%admin/' /etc/sudoers && \
+    sed -i.bak -e 's/^%sudo/#%sudo/' /etc/sudoers && \
+    useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \
    mkdir -p $CONDA_DIR && \
-    chown $NB_USER $CONDA_DIR
+    chown $NB_USER:$NB_GID $CONDA_DIR && \
+    chmod g+w /etc/passwd && \
+    fix-permissions $HOME && \
+    fix-permissions $CONDA_DIR
USER $NB_UID
+WORKDIR $HOME
+ARG PYTHON_VERSION=default
-# Setup jovyan home directory
+# Setup work directory for backward-compatibility
RUN mkdir /home/$NB_USER/work && \
-    mkdir /home/$NB_USER/.jupyter && \
-    echo "cacert=/etc/ssl/certs/ca-certificates.crt" > /home/$NB_USER/.curlrc
+    fix-permissions /home/$NB_USER
+# Install conda as jovyan and check the md5 sum provided on the download site
+ENV MINICONDA_VERSION=4.8.2 \
+    MINICONDA_MD5=e50662a93f3f5e56ef2d3fdfaf2f8e91 \
+    CONDA_VERSION=4.8.2
# Install conda as jovyan
RUN cd /tmp && \
-    mkdir -p $CONDA_DIR && \
-    wget https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-ppc64le.sh && \
-    /bin/bash Miniconda3-4.2.12-Linux-ppc64le.sh -f -b -p $CONDA_DIR && \
-    rm -rf Miniconda3-4.2.12-Linux-ppc64le.sh && \
-    $CONDA_DIR/bin/conda install --quiet --yes conda=4.2.12 && \
-    $CONDA_DIR/bin/conda config --system --add channels conda-forge && \
-    $CONDA_DIR/bin/conda config --system --set auto_update_conda false && \
-    conda clean --all -f -y
+    wget --quiet https://repo.continuum.io/miniconda/Miniconda3-py37_${MINICONDA_VERSION}-Linux-ppc64le.sh && \
+    echo "${MINICONDA_MD5} *Miniconda3-py37_${MINICONDA_VERSION}-Linux-ppc64le.sh" | md5sum -c - && \
+    /bin/bash Miniconda3-py37_${MINICONDA_VERSION}-Linux-ppc64le.sh -f -b -p $CONDA_DIR && \
+    rm -rf Miniconda3-py37_${MINICONDA_VERSION}-Linux-ppc64le.sh && \
+    echo "conda ${CONDA_VERSION}" >> $CONDA_DIR/conda-meta/pinned && \
+    conda config --system --prepend channels conda-forge && \
+    conda config --system --set auto_update_conda false && \
+    conda config --system --set show_channel_urls true && \
+    conda config --system --set channel_priority strict && \
+    if [ ! $PYTHON_VERSION = 'default' ]; then conda install --yes python=$PYTHON_VERSION; fi && \
+    conda list python | grep '^python ' | tr -s ' ' | cut -d '.' -f 1,2 | sed 's/$/.*/' >> $CONDA_DIR/conda-meta/pinned && \
+    conda install --quiet --yes conda && \
+    conda install --quiet --yes pip && \
+    conda update --all --quiet --yes && \
+    conda clean --all -f -y && \
+    rm -rf /home/$NB_USER/.cache/yarn && \
+    fix-permissions $CONDA_DIR && \
+    fix-permissions /home/$NB_USER
-# Install Jupyter notebook and Hub
-RUN yes | pip install --upgrade pip
-RUN yes | pip install --quiet --no-cache-dir \
-    'notebook==5.2.*' \
-    'jupyterhub==0.7.*' \
-    'jupyterlab==0.18.*'
-USER root
+# Install Tini
+RUN conda install --quiet --yes 'tini=0.18.0' && \
+    conda list tini | grep tini | tr -s ' ' | cut -d ' ' -f 1,2 >> $CONDA_DIR/conda-meta/pinned && \
+    conda clean --all -f -y && \
+    fix-permissions $CONDA_DIR && \
+    fix-permissions /home/$NB_USER
+# Install Jupyter Notebook, Lab, and Hub
+# Generate a notebook server config
+# Cleanup temporary files
+# Correct permissions
+# Do all this in a single RUN command to avoid duplicating all of the
+# files across image layers when the permissions change
+RUN conda install --quiet --yes \
+    'notebook=6.0.3' \
+    'jupyterhub=1.1.0' \
+    'jupyterlab=2.1.1' && \
+    conda clean --all -f -y && \
+    npm cache clean --force && \
+    jupyter notebook --generate-config && \
+    rm -rf $CONDA_DIR/share/jupyter/lab/staging && \
+    rm -rf /home/$NB_USER/.cache/yarn && \
+    fix-permissions $CONDA_DIR && \
+    fix-permissions /home/$NB_USER
EXPOSE 8888
-WORKDIR /home/$NB_USER/work
-RUN echo "ALL ALL = (ALL) NOPASSWD: ALL" >> /etc/sudoers
# Configure container startup
ENTRYPOINT ["tini", "-g", "--"]
CMD ["start-notebook.sh"]
-# Add local files as late as possible to avoid cache busting
-COPY start.sh /usr/local/bin/
-COPY start-notebook.sh /usr/local/bin/
-COPY start-singleuser.sh /usr/local/bin/
-COPY jupyter_notebook_config.py /home/$NB_USER/.jupyter/
-RUN chown -R $NB_USER:users /home/$NB_USER/.jupyter
+# Copy local files as late as possible to avoid cache busting
+COPY start.sh start-notebook.sh start-singleuser.sh /usr/local/bin/
+COPY jupyter_notebook_config.py /etc/jupyter/
+# Fix permissions on /etc/jupyter as root
+USER root
+RUN fix-permissions /etc/jupyter/
# Switch back to jovyan to avoid accidental container runs as root
USER $NB_UID
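The new Dockerfile pins the installed Python to its major.minor series by piping `conda list python` through `grep`, `tr`, `cut`, and `sed` into `conda-meta/pinned`. A rough Python sketch of what that pipeline computes, assuming a typical `conda list` line as input:

```python
def pin_spec(conda_list_line: str) -> str:
    """Turn a line like 'python 3.7.6 h0371630_0' into the pin 'python 3.7.*',
    mimicking the grep/tr/cut/sed pipeline in the Dockerfile."""
    name, version = conda_list_line.split()[:2]
    major_minor = ".".join(version.split(".")[:2])  # keep only major.minor
    return f"{name} {major_minor}.*"

print(pin_spec("python 3.7.6 h0371630_0"))  # python 3.7.*
```

Writing this spec into `conda-meta/pinned` keeps later `conda update --all` runs from silently jumping to a different Python minor version.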
@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
* Build datetime: ${BUILD_TIMESTAMP}
* DockerHub build code: ${BUILD_CODE}
* Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
+* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
* Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
* Git commit message:
\`\`\`
......
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
import time
+import logging
import pytest
+LOGGER = logging.getLogger(__name__)
def test_cli_args(container, http_client):
    """Container should respect notebook server command line args
@@ -61,6 +64,37 @@ def test_gid_change(container):
    assert 'groups=110(jovyan),100(users)' in logs
+def test_nb_user_change(container):
+    """Container should change the user name (`NB_USER`) of the default user."""
+    nb_user = "nayvoj"
+    running_container = container.run(
+        tty=True,
+        user="root",
+        environment=[f"NB_USER={nb_user}",
+                     "CHOWN_HOME=yes"],
+        working_dir=f"/home/{nb_user}",
+        command=['start.sh', 'bash', '-c', 'sleep infinity']
+    )
+    LOGGER.info(f"Checking if the user is changed to {nb_user} by the start script ...")
+    output = running_container.logs(stdout=True).decode("utf-8")
+    assert f"Set username to: {nb_user}" in output, f"User is not changed to {nb_user}"
+    LOGGER.info(f"Checking {nb_user} id ...")
+    command = "id"
+    expected_output = f"uid=1000({nb_user}) gid=100(users) groups=100(users)"
+    cmd = running_container.exec_run(command, user=nb_user)
+    output = cmd.output.decode("utf-8").strip("\n")
+    assert output == expected_output, f"Bad user {output}, expected {expected_output}"
+    LOGGER.info(f"Checking if {nb_user} owns his home folder ...")
+    command = f'stat -c "%U %G" /home/{nb_user}/'
+    expected_output = f"{nb_user} users"
+    cmd = running_container.exec_run(command)
+    output = cmd.output.decode("utf-8").strip("\n")
+    assert output == expected_output, f"Bad owner for the {nb_user} home folder {output}, expected {expected_output}"
def test_chown_extra(container):
    """Container should change the UID/GID of CHOWN_EXTRA."""
    c = container.run(
......
@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
* Build datetime: ${BUILD_TIMESTAMP}
* DockerHub build code: ${BUILD_CODE}
* Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
+* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
* Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
* Git commit message:
\`\`\`
......
@@ -25,9 +25,9 @@ If there's agreement that the feature belongs in one or more of the core stacks:
1. Implement the feature in a local clone of the `jupyter/docker-stacks` project.
2. Please build the image locally before submitting a pull request. Building the image locally shortens the debugging cycle by taking some load off [Travis CI](http://travis-ci.org/), which graciously provides free build services for open source projects like this one. If you use `make`, call:
-```
+```bash
make build/somestack-notebook
```
3. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with your changes.
4. Watch for Travis to report a build success or failure for your PR on GitHub.
5. Discuss changes with the maintainers and address any build issues.
@@ -7,9 +7,9 @@ Please follow the process below to update a package version:
1. Locate the Dockerfile containing the library you wish to update (e.g., [base-notebook/Dockerfile](https://github.com/jupyter/docker-stacks/blob/master/base-notebook/Dockerfile), [scipy-notebook/Dockerfile](https://github.com/jupyter/docker-stacks/blob/master/scipy-notebook/Dockerfile))
2. Adjust the version number for the package. We prefer to pin the major and minor version number of packages so as to minimize rebuild side-effects when users submit pull requests (PRs). For example, you'll find the Jupyter Notebook package, `notebook`, installed using conda with `notebook=5.4.*`.
3. Please build the image locally before submitting a pull request. Building the image locally shortens the debugging cycle by taking some load off [Travis CI](http://travis-ci.org/), which graciously provides free build services for open source projects like this one. If you use `make`, call:
-```
+```bash
make build/somestack-notebook
```
4. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with your changes.
5. Watch for Travis to report a build success or failure for your PR on GitHub.
6. Discuss changes with the maintainers and address any build issues. Version conflicts are the most common problem. You may need to upgrade additional packages to fix build failures.
@@ -13,13 +13,13 @@ This approach mirrors how we build and share the core stack images. Feel free to
First, install [cookiecutter](https://github.com/audreyr/cookiecutter) using pip or conda:
-```
+```bash
pip install cookiecutter # or conda install cookiecutter
```
Run the cookiecutter command pointing to the [jupyter/cookiecutter-docker-stacks](https://github.com/jupyter/cookiecutter-docker-stacks) project on GitHub.
-```
+```bash
cookiecutter https://github.com/jupyter/cookiecutter-docker-stacks.git
```
@@ -13,10 +13,10 @@ Please follow the process below to add new tests:
1. If the test should run against every image built, add your test code to one of the modules in [test/](https://github.com/jupyter/docker-stacks/tree/master/test) or create a new module.
2. If your test should run against a single image, add your test code to one of the modules in `some-notebook/test/` or create a new module.
3. Build one or more images you intend to test and run the tests locally. If you use `make`, call:
-```
+```bash
make build/somestack-notebook
make test/somestack-notebook
```
4. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with your changes.
5. Watch for Travis to report a build success or failure for your PR on GitHub.
6. Discuss changes with the maintainers and address any issues running the tests on Travis.
@@ -63,5 +63,5 @@ Table of Contents
   :caption: Getting Help
   Jupyter Discourse Forum <https://discourse.jupyter.org>
-   Jupyter Docker Stacks Issue Tracker <https://github.com/jupyter/docker-stacks/issues>
+   Stacks Issue Tracker <https://github.com/jupyter/docker-stacks/issues>
   Jupyter Website <https://jupyter.org>
...@@ -9,7 +9,7 @@ msgid "" ...@@ -9,7 +9,7 @@ msgid ""
msgstr "" msgstr ""
"Project-Id-Version: docker-stacks latest\n" "Project-Id-Version: docker-stacks latest\n"
"Report-Msgid-Bugs-To: \n" "Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-04-19 15:01+0000\n" "POT-Creation-Date: 2020-05-29 13:13+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n" "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n" "Language-Team: LANGUAGE <LL@li.org>\n"
...@@ -18,12 +18,12 @@ msgstr "" ...@@ -18,12 +18,12 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n" "Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.8.0\n" "Generated-By: Babel 2.8.0\n"
#: ../../contributing/features.md:1 75d2338a7df843b2938eb01bbe66c7ee
msgid "# New Features"
msgstr ""
# 64c3ecc68ada47afada78f945253c9e9
#: ../../contributing/features.md:3 1ed4332ad73149a5a583228905b313a9
msgid ""
"Thank you for contributing to the Jupyter Docker Stacks! We review pull "
"requests of new features (e.g., new packages, new scripts, new flags) to "
@@ -31,24 +31,24 @@ msgid ""
" maintaining the images over time."
msgstr ""
#: ../../contributing/features.md:5 192da0c147dc4a4eb60281f063016537
msgid "## Suggesting a New Feature"
msgstr ""
# c995f8cabb1d4b4fb53a9c56ae8e017b
#: ../../contributing/features.md:7 61db9ea05749413d8af6788a741c7241
msgid ""
"Please follow the process below to suggest a new feature for inclusion in"
" one of the core stacks:"
msgstr ""
#: ../../contributing/features.md:9 d20f12cc70ec4dc1b31ad53e708900e9
msgid ""
"[Open a GitHub issue](https://github.com/jupyter/docker-stacks/issues) "
"describing the feature you'd like to contribute."
msgstr ""
#: ../../contributing/features.md:10 f33fdddae0c549f6ab6c00de109b5d27
msgid ""
"Discuss with the maintainers whether the addition makes sense in [one of "
"the core stacks](../using/selecting.md#Core-Stacks), as a [recipe in the "
@@ -56,32 +56,32 @@ msgid ""
"something else entirely."
msgstr ""
#: ../../contributing/features.md:12 9cda3fc10c9b4e5a8f4845655eac1d70
msgid "## Selection Criteria"
msgstr ""
# ca139cf0df684011bdf6f6f68e151796
#: ../../contributing/features.md:14 634fe86173d042f1a40ca03ae49bc2ef
msgid ""
"Roughly speaking, we evaluate new features based on the following "
"criteria:"
msgstr ""
#: ../../contributing/features.md:16 158f3559593b45de97000a0e27ce4a6b
msgid ""
"**Usefulness to Jupyter users**: Is the feature generally applicable "
"across domains? Does it work with Jupyter Notebook, JupyterLab, "
"JupyterHub, etc.?"
msgstr ""
#: ../../contributing/features.md:17 42b42e6549084d3788899f1968902371
msgid ""
"**Fit with the image purpose**: Does the feature match the theme of the "
"stack in which it will be added? Would it fit better in a new, community "
"stack?"
msgstr ""
#: ../../contributing/features.md:18 1b148a07824d49fa8cadda9e7fe64e6e
msgid ""
"**Complexity of build / runtime configuration**: How many lines of code "
"does the feature require in one of the Dockerfiles or startup scripts? "
@@ -89,14 +89,14 @@ msgid ""
"use the images?"
msgstr ""
#: ../../contributing/features.md:19 5f46cea5d7b443fcb9aa88c723edad4e
msgid ""
"**Impact on image metrics**: How many bytes does the feature and its "
"dependencies add to the image(s)? How many minutes do they add to the "
"build time?"
msgstr ""
#: ../../contributing/features.md:20 69a59ce66e2b42b3b909fdbac24bd7e4
msgid ""
"**Ability to support the addition**: Can existing maintainers answer user"
" questions and address future build issues? Are the contributors "
@@ -104,50 +104,71 @@ msgid ""
"ensure the feature continues to work over time?"
msgstr ""
#: ../../contributing/features.md:22 f4ad3fa23649453585a06bfbbb5e94a6
msgid "## Submitting a Pull Request"
msgstr ""
# f7ca9b40be90476eb97c8fcd67205e9d
#: ../../contributing/features.md:24 75b3feee9e9b4f2f8f1add9ae209b9af
msgid ""
"If there's agreement that the feature belongs in one or more of the core "
"stacks:"
msgstr ""
#: ../../contributing/features.md:26 f901d30149094b0191e34edcb82ac751
msgid ""
"Implement the feature in a local clone of the `jupyter/docker-stacks` "
"project."
msgstr ""
#: ../../contributing/features.md:29 ca0d179b7bfc449daee401cfefd5dddb
msgid ""
"Please build the image locally before submitting a pull request. Building"
" the image locally shortens the debugging cycle by taking some load off "
"[Travis CI](http://travis-ci.org/), which graciously provides free build "
"services for open source projects like this one. If you use `make`, "
"call:"
msgstr ""
#: ../../contributing/features.md:28 ../../contributing/packages.md:10
#: 33609453c5b042d39ef23a23a6602391 ed786e76a3d44f51a1f4741dc5639bf5
msgid "```bash make build/somestack-notebook ```"
msgstr ""
#: ../../contributing/features.md:31 ../../contributing/packages.md:13
#: ../../contributing/tests.md:20 694756b9f11a4020ae66440a3a3073cd
#: 6f7ef0e838254b78a8788c43c92ab7ac 988385ba6b0c44bf805721528f5915cb
msgid ""
"[Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A"
"-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with"
" your changes."
msgstr ""
#: ../../contributing/features.md:32 ../../contributing/packages.md:14
#: ../../contributing/tests.md:21 0018f237b0544b4a9986b1c8bc4f6747
#: 94c093d9273548a4bf63411dd313a18a c54af194694d4da293397dce6e946131
msgid ""
"Watch for Travis to report a build success or failure for your PR on "
"GitHub."
msgstr ""
#: ../../contributing/features.md:33 9ac694f350a1492688f35b1cc94b90f3
msgid "Discuss changes with the maintainers and address any build issues."
msgstr ""
#: ../../contributing/issues.md:1 e0aca6ee4ec045efb75ff849a33869ba
msgid "# Project Issues"
msgstr ""
# 9c2a6e9f67354e86aca23758676fca43
#: ../../contributing/issues.md:3 21e3ece2076a4440b4f5bf405bb77d9b
msgid ""
"We appreciate your taking the time to report an issue you encountered "
"using the Jupyter Docker Stacks. Please review the following guidelines "
"when reporting your problem."
msgstr ""
#: ../../contributing/issues.md:7 1b4130d2b65a434185c7c79f17aaa118
msgid ""
"If you believe you’ve found a security vulnerability in any of the "
"Jupyter projects included in Jupyter Docker Stacks images, please report "
@@ -157,7 +178,7 @@ msgid ""
"notebook.readthedocs.io/en/stable/_downloads/ipython_security.asc)."
msgstr ""
#: ../../contributing/issues.md:13 951fc60fdacf4b3aadd96b5a3b200316
msgid ""
"If you think your problem is unique to the Jupyter Docker Stacks images, "
"please search the [jupyter/docker-stacks issue "
@@ -168,14 +189,14 @@ msgid ""
msgstr ""
# 69a18cc239b34b94800599bf185f58d6
#: ../../contributing/issues.md:19 fc8bdf2071fe4f8bbdb7d4eb437a2579
msgid ""
"If the issue you're seeing is with one of the open source libraries "
"included in the Docker images and is reproducible outside the images, "
"please file a bug with the appropriate open source project."
msgstr ""
#: ../../contributing/issues.md:22 fa26d1ffb9d444b68279650988a09158
msgid ""
"If you have a general question about how to use the Jupyter Docker Stacks"
" in your environment, in conjunction with other tools, with "
@@ -183,12 +204,12 @@ msgid ""
"Discourse site](https://discourse.jupyter.org)."
msgstr ""
#: ../../contributing/packages.md:1 337149445e4b49138fc7fdeaf004a4aa
msgid "# Package Updates"
msgstr ""
# 5f269a667f9a4c3ca342cfb49ecaefb2
#: ../../contributing/packages.md:3 897b0d9a196441d29acbebcf99b2cdfd
msgid ""
"We actively seek pull requests which update packages already included in "
"the project Dockerfiles. This is a great way for first-time contributors "
@@ -196,11 +217,11 @@ msgid ""
msgstr ""
# 30d4a79bce8d439d97e6e3555a088548
#: ../../contributing/packages.md:5 a13067f88b8b4b62aee962892df27a6d
msgid "Please follow the process below to update a package version:"
msgstr ""
#: ../../contributing/packages.md:7 2c6c020feac74899bf18caa4227814cb
msgid ""
"Locate the Dockerfile containing the library you wish to update (e.g., "
"[base-notebook/Dockerfile](https://github.com/jupyter/docker-"
@@ -209,7 +230,7 @@ msgid ""
"/scipy-notebook/Dockerfile))"
msgstr ""
#: ../../contributing/packages.md:8 0c1092286f6b45fa9f60b3da1de118bd
msgid ""
"Adjust the version number for the package. We prefer to pin the major and"
" minor version number of packages so as to minimize rebuild side-effects "
@@ -218,26 +239,26 @@ msgid ""
"`notebook=5.4.*`."
msgstr ""
#: ../../contributing/packages.md:11 a482b9e64916488291bf251a339a5f82
msgid ""
"Please build the image locally before submitting a pull request. Building"
" the image locally shortens the debugging cycle by taking some load off "
"[Travis CI](http://travis-ci.org/), which graciously provides free build "
"services for open source projects like this one. If you use `make`, call:"
msgstr ""
#: ../../contributing/packages.md:15 a2f3a70c805249b4ac0ad813abe8547a
msgid ""
"Discuss changes with the maintainers and address any build issues. "
"Version conflicts are the most common problem. You may need to upgrade "
"additional packages to fix build failures."
msgstr ""
#: ../../contributing/packages.md:17 07ac92ed999a4c53b2b2022f9f02649e
msgid "## Notes"
msgstr ""
#: ../../contributing/packages.md:19 bd1253c218b04ab2abf1b6541805e8cf
msgid ""
"In order to help identifying packages that can be updated you can use the"
" following helper tool. It will list all the packages installed in the "
@@ -245,11 +266,11 @@ msgid ""
"only on requested packages."
msgstr ""
#: ../../contributing/packages.md:22 a8f2ac2736754100961c05932519c5a2
msgid "```bash $ make check-outdated/base-notebook"
msgstr ""
#: ../../contributing/packages.md:25 b9f718b69c5f4bfa93c3c0fce4298b4b
msgid ""
"# INFO test_outdated:test_outdated.py:80 3/8 (38%) packages could be "
"updated # INFO test_outdated:test_outdated.py:82 # Package "
@@ -258,11 +279,11 @@ msgid ""
"```"
msgstr ""
#: ../../contributing/recipes.md:1 6cd4f9700d334830aaa742ae246f0938
msgid "# New Recipes"
msgstr ""
#: ../../contributing/recipes.md:3 059e951fe37f40f3b68d837e13184e51
msgid ""
"We welcome contributions of [recipes](../using/recipes.md), short "
"examples of using, configuring, or extending the Docker Stacks, for "
@@ -270,25 +291,25 @@ msgid ""
"new recipe:"
msgstr ""
#: ../../contributing/recipes.md:5 e591368246344c768e85a854923256d1
msgid "Open the `docs/using/recipes.md` source file."
msgstr ""
#: ../../contributing/recipes.md:6 e09e814cc87d4d0dac83f457128b086e
msgid ""
"Add a second-level Markdown heading naming your recipe at the bottom of "
"the file (e.g., `## Add the RISE extension`)"
msgstr ""
# 8838b0ff2be24c23afaca9a6f43a9b66
#: ../../contributing/recipes.md:7 3d05c2a118e0498b94b5813bb9b0f53a
msgid ""
"Write the body of your recipe under the heading, including whatever "
"command line, Dockerfile, links, etc. you need."
msgstr ""
#: ../../contributing/recipes.md:8 ../../contributing/stacks.md:111
#: 15bac7d4563947a4810fda0f57825b65 1606bfa09c9743558da2a5619b8efeb1
msgid ""
"[Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A"
"-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with"
@@ -296,11 +317,11 @@ msgid ""
"formatting or content issues."
msgstr ""
#: ../../contributing/stacks.md:1 134550980fff4f2b987d3bf5fccc43eb
msgid "# Community Stacks"
msgstr ""
#: ../../contributing/stacks.md:3 7f3ec7bc3c13480fb5c422cc8b7494af
msgid ""
"We love to see the community create and share new Jupyter Docker images. "
"We've put together a [cookiecutter project](https://github.com/jupyter"
@@ -309,137 +330,137 @@ msgid ""
"Docker. Following these steps will:"
msgstr ""
#: ../../contributing/stacks.md:5 2976695bd2344b019122846f435183a3
msgid ""
"Setup a project on GitHub containing a Dockerfile based on either the "
"`jupyter/base-notebook` or `jupyter/minimal-notebook` image."
msgstr ""
# 8fa22b86dc9f4750b0b903371f16c1e6
#: ../../contributing/stacks.md:6 a5417897bb4845c985d7ecf9e781b6d6
msgid ""
"Configure Travis CI to build and test your image when users submit pull "
"requests to your repository."
msgstr ""
# cb04d6b8877b47e78277b7025f642ae3
#: ../../contributing/stacks.md:7 359c229924ef4175a3e564c18ae36c79
msgid "Configure Docker Cloud to build and host your images for others to use."
msgstr ""
#: ../../contributing/stacks.md:8 6399d9f79260442b8a2c25287e655ada
msgid ""
"Update the [list of community stacks](../using/selecting.html#community-"
"stacks) in this documentation to include your image."
msgstr ""
# 8e0fd1dc73cc40ceab19307d0cd809c1
#: ../../contributing/stacks.md:10 d8fdffc26b8a47b1aadfa19c801df6f3
msgid ""
"This approach mirrors how we build and share the core stack images. Feel "
"free to follow it or pave your own path using alternative services and "
"build tools."
msgstr ""
#: ../../contributing/stacks.md:12 5d3cdb2a3d5a4febac02c0a65c1d0921
msgid "## Creating a Project"
msgstr ""
#: ../../contributing/stacks.md:14 5d42e40646cb444598e54e9b5c539f70
msgid ""
"First, install [cookiecutter](https://github.com/audreyr/cookiecutter) "
"using pip or conda:"
msgstr ""
#: ../../contributing/stacks.md:16 4ffb50e2ae124f7a8be150dc4b3267e0
msgid "```bash pip install cookiecutter # or conda install cookiecutter ```"
msgstr ""
#: ../../contributing/stacks.md:20 c14dcf935d1f4cb5a058d98fd16e99a4
msgid ""
"Run the cookiecutter command pointing to the [jupyter/cookiecutter-"
"docker-stacks](https://github.com/jupyter/cookiecutter-docker-stacks) "
"project on GitHub."
msgstr ""
#: ../../contributing/stacks.md:22 75c50c170d7944229e570859a393e39a
msgid ""
"```bash cookiecutter https://github.com/jupyter/cookiecutter-docker-"
"stacks.git ```"
msgstr ""
# 676ff068156d4ca7b1043b4a4fe2d1f1
#: ../../contributing/stacks.md:26 c91ecedbdef0403fb06bc37884fd9618
msgid ""
"Enter a name for your new stack image. This will serve as both the git "
"repository name and the part of the Docker image name after the slash."
msgstr ""
#: ../../contributing/stacks.md:29 f4454df1631d4635a006f29b288c46b6
msgid "``` stack_name [my-jupyter-stack]: ```"
msgstr ""
# 96deffa98bab47da82e5598e549c8a39
#: ../../contributing/stacks.md:33 1d43fe411ea649c999cc82a9ae24bf5d
msgid ""
"Enter the user or organization name under which this stack will reside on"
" Docker Cloud / Hub. You must have access to manage this Docker Cloud org"
" in order to push images here and setup automated builds."
msgstr ""
#: ../../contributing/stacks.md:37 8b411724c2054ca38808290b8cac6459
msgid "``` stack_org [my-project]: ```"
msgstr ""
# b796c2d7c08b4a1db5cdfd3de7d84c16
#: ../../contributing/stacks.md:41 fb6511956cfd462fa9bdfade3c478835
msgid ""
"Select an image from the jupyter/docker-stacks project that will serve as"
" the base for your new image."
msgstr ""
#: ../../contributing/stacks.md:44 f558dd4326144036be2bb4a7a8b6a628
msgid "``` stack_base_image [jupyter/base-notebook]: ```"
msgstr ""
# 7ef9d73286d04b12a1350e8d9565df65
#: ../../contributing/stacks.md:48 fcfed0b5f01e4833b0693dda40f446a1
msgid "Enter a longer description of the stack for your README."
msgstr ""
#: ../../contributing/stacks.md:50 43505e5b045d4765a03acd882b4ce0b7
msgid ""
"``` stack_description [my-jupyter-stack is a community maintained Jupyter"
" Docker Stack image]: ```"
msgstr ""
# 479d3a5c6ef9481a9dc4033224c540fa
#: ../../contributing/stacks.md:54 f296538abe2d4c28867dbee697e27cae
msgid "Initialize your project as a Git repository and push it to GitHub."
msgstr ""
#: ../../contributing/stacks.md:56 8bec73b0ac4f403e9f6801f5309dec96
msgid "``` cd <stack_name you chose>"
msgstr ""
#: ../../contributing/stacks.md:59 59db2fa814e2412fa23f6f4a79c900b5
msgid ""
"git init git add . git commit -m 'Seed repo' git remote add origin <url "
"from github> git push -u origin master ```"
msgstr ""
#: ../../contributing/stacks.md:66 7ad2b1ce5d3e4452b60a046a07ee070d
msgid "## Configuring Travis"
msgstr ""
# 38e3784d96f64d7481f0e1fd17aff9cb
#: ../../contributing/stacks.md:68 f2178e8b53cf410e9a21288e50ad1b8e
msgid ""
"Next, link your GitHub project to Travis CI to build your Docker image "
"whenever you or someone else submits a pull request."
msgstr ""
#: ../../contributing/stacks.md:70 c59ea12ab2d449278a93078c98accc10
msgid ""
"1. Visit [https://docs.travis-ci.com/user/getting-started/#To-get-"
"started-with-Travis-CI](https://docs.travis-ci.com/user/getting-started"
@@ -449,123 +470,123 @@ msgid ""
"left sidebar."
msgstr ""
#: ../../contributing/stacks.md:73 202ef2d1fddc417a8cc9d42e2e3d8c08
msgid ""
"![Travis sidebar with plus button screenshot](../_static/travis-plus-"
"repo.png)"
msgstr ""
# ac370ece6fb24becb8034cb994ad8f4b
#: ../../contributing/stacks.md:74 5483e5000673485692e947375fb80dea
msgid ""
"Locate your project repository either in your primary user account or in "
"one of the organizations to which you belong."
msgstr ""
# 6b6a7bab547d4e25bd930009a6a9ea44
#: ../../contributing/stacks.md:75 00001f96d2ba4e1c90a10cd0e24818e7
msgid "Click the toggle to enable builds for the project repository."
msgstr ""
#: ../../contributing/stacks.md:76 5bda85982bee4d98b26da3c9bc12bf47
msgid "Click the **Settings** button for that repository."
msgstr ""
#: ../../contributing/stacks.md:77 784ae3adc22b4699901637b95d6f4e22
msgid ""
"![Travis enable build toggle screenshot](../_static/travis-enable-"
"build.png)"
msgstr ""
#: ../../contributing/stacks.md:78 1d8d30bc3c9a41d7b429bd195ddf797a #: ../../contributing/stacks.md:78 3fac5e0b02204bc9922af7ee36db8f7a
msgid "" msgid ""
"Enable **Build only if .travis.yml is present** and **Build pushed pull " "Enable **Build only if .travis.yml is present** and **Build pushed pull "
"requests**." "requests**."
msgstr "" msgstr ""
#: ../../contributing/stacks.md:79 249abaaafbe34f72800fe34fd7ab46d3 #: ../../contributing/stacks.md:79 711a2aad9f1e4ae8a4f99725422a92ee
msgid "![Travis build settings screenshot](../_static/travis-build-settings.png)" msgid "![Travis build settings screenshot](../_static/travis-build-settings.png)"
msgstr "" msgstr ""
#: ../../contributing/stacks.md:80 34febb47393d4d2394d4b4afed87c990 #: ../../contributing/stacks.md:80 e17b07cdadef4c18ad968dd70d5f8947
msgid "Disable **Build pushed branches**." msgid "Disable **Build pushed branches**."
msgstr "" msgstr ""
#: ../../contributing/stacks.md:82 58dba607f5894a9ebccac5ccdeca33a1 #: ../../contributing/stacks.md:82 b36a850c104b4d73876959545089b033
msgid "## Configuring Docker Cloud" msgid "## Configuring Docker Cloud"
msgstr "" msgstr ""
# f0c01a2906494d039d73324e90cbae44 # f0c01a2906494d039d73324e90cbae44
#: ../../contributing/stacks.md:84 23c9b472185c4f3584f1859996d09df4 #: ../../contributing/stacks.md:84 90df9b8423c3452482065437e63912b0
msgid "" msgid ""
"Now, configure Docker Cloud to build your stack image and push it to " "Now, configure Docker Cloud to build your stack image and push it to "
"Docker Hub repository whenever you merge a GitHub pull request to the " "Docker Hub repository whenever you merge a GitHub pull request to the "
"master branch of your project." "master branch of your project."
msgstr "" msgstr ""
#: ../../contributing/stacks.md:86 64bacdb3cae04953ba70a1a57d38beac #: ../../contributing/stacks.md:86 3a54a669d6284ddfa7b335ddc7212e7e
msgid "Visit [https://cloud.docker.com/](https://cloud.docker.com/) and login." msgid "Visit [https://cloud.docker.com/](https://cloud.docker.com/) and login."
msgstr "" msgstr ""
#: ../../contributing/stacks.md:87 4399a56fd7ef424ea9a6bcd165c7f33f #: ../../contributing/stacks.md:87 2d8b23653837435e862d8da50fc1172c
msgid "" msgid ""
"Select the account or organization matching the one you entered when " "Select the account or organization matching the one you entered when "
"prompted with `stack_org` by the cookiecutter." "prompted with `stack_org` by the cookiecutter."
msgstr "" msgstr ""
#: ../../contributing/stacks.md:88 7cef897e823b4d28988122b71c777ac3 #: ../../contributing/stacks.md:88 c0bfd69d6d2646dc8c1a7ad069ebb4d4
msgid "![Docker account selection screenshot](../_static/docker-org-select.png)" msgid "![Docker account selection screenshot](../_static/docker-org-select.png)"
msgstr "" msgstr ""
#: ../../contributing/stacks.md:89 dab045ba7ad045eab98c1857b2ba2673 #: ../../contributing/stacks.md:89 4370325a9ec14601973ecfb3fc266197
msgid "Scroll to the bottom of the page and click **Create repository**." msgid "Scroll to the bottom of the page and click **Create repository**."
msgstr "" msgstr ""
#: ../../contributing/stacks.md:90 c8b34c5f5e4c429c84f353e2c9730d3c #: ../../contributing/stacks.md:90 d84f97e45d4e49eb9be9673fd94bcfe3
msgid "" msgid ""
"Enter the name of the image matching the one you entered when prompted " "Enter the name of the image matching the one you entered when prompted "
"with `stack_name` by the cookiecutter." "with `stack_name` by the cookiecutter."
msgstr "" msgstr ""
#: ../../contributing/stacks.md:91 5e8c66e7889c46b68a345bb6bf13e55b #: ../../contributing/stacks.md:91 69510f1bb335412eb7185ced1decb98c
msgid "" msgid ""
"![Docker image name and description screenshot](../_static/docker-repo-" "![Docker image name and description screenshot](../_static/docker-repo-"
"name.png)" "name.png)"
msgstr "" msgstr ""
# 79092e5007ba4bdead594a71e30cd58a # 79092e5007ba4bdead594a71e30cd58a
#: ../../contributing/stacks.md:92 6d474faa9e5e469f9bc5e4fe5bef68c0 #: ../../contributing/stacks.md:92 bb7b1853818548e58aee02f6d9b52801
msgid "Enter a description for your image." msgid "Enter a description for your image."
msgstr "" msgstr ""
#: ../../contributing/stacks.md:93 b8f74f84c0194768b182b25e3105966b #: ../../contributing/stacks.md:93 22777374f90246e693f8305a49e8be32
msgid "" msgid ""
"Click **GitHub** under the **Build Settings** and follow the prompts to " "Click **GitHub** under the **Build Settings** and follow the prompts to "
"connect your account if it is not already connected." "connect your account if it is not already connected."
msgstr "" msgstr ""
# e085cfd6d7664d04bcd14ce89f24b75a # e085cfd6d7664d04bcd14ce89f24b75a
#: ../../contributing/stacks.md:94 2c5b32972f5d442eacf402af3b911a0f #: ../../contributing/stacks.md:94 9dd909bd515e44afaa51b86e208af06c
msgid "" msgid ""
"Select the GitHub organization and repository containing your image " "Select the GitHub organization and repository containing your image "
"definition from the dropdowns." "definition from the dropdowns."
msgstr "" msgstr ""
#: ../../contributing/stacks.md:95 1dc7509a66a04095a838d8b34cc3a173 #: ../../contributing/stacks.md:95 173716294e2a4aeb8a638a957ad44c21
msgid "" msgid ""
"![Docker from GitHub automated build screenshot](../_static/docker-" "![Docker from GitHub automated build screenshot](../_static/docker-"
"github-settings.png)" "github-settings.png)"
msgstr "" msgstr ""
#: ../../contributing/stacks.md:96 2d7d529f9ec241aa910bc71650bc5cb4 #: ../../contributing/stacks.md:96 6e335c8fbfff4ff3a21be4c397ff102e
msgid "Click the **Create and Build** button." msgid "Click the **Create and Build** button."
msgstr "" msgstr ""
#: ../../contributing/stacks.md:98 722aea257bbd45bd9a90d022797b2d04
msgid "## Defining Your Image"
msgstr ""

#: ../../contributing/stacks.md:100 3227f738bbee4bc7ab0dedf653f628ba
msgid ""
"Make edits the Dockerfile in your project to add third-party libraries "
"and configure Jupyter applications. Refer to the Dockerfiles for the core"
@@ -574,7 +595,7 @@ msgid ""
"feel for what's possible and best practices."
msgstr ""

#: ../../contributing/stacks.md:102 97514db155ce48038621542f715518c1
msgid ""
"[Submit pull requests](https://github.com/PointCloudLibrary/pcl/wiki/A"
"-step-by-step-guide-on-preparing-and-submitting-a-pull-request) to your "
@@ -583,52 +604,52 @@ msgid ""
"master branch that you can `docker pull`."
msgstr ""

#: ../../contributing/stacks.md:104 8dea5ac397874cfd987e004bc3bf24a9
msgid "## Sharing Your Image"
msgstr ""

# d8e9f1a37f4c4a72bb630e7a3b265b92
#: ../../contributing/stacks.md:106 d25799835eee4def88ffe2a0f447d4c9
msgid ""
"Finally, if you'd like to add a link to your project to this "
"documentation site, please do the following:"
msgstr ""

#: ../../contributing/stacks.md:108 2a745d58d4084227910e5e4d45ee139d
msgid ""
"Clone ths [jupyter/docker-stacks](https://github.com/jupyter/docker-"
"stacks) GitHub repository."
msgstr ""

#: ../../contributing/stacks.md:109 e0be278e7b874bd580e305e2f51e808a
msgid ""
"Open the `docs/using/selecting.md` source file and locate the **Community"
" Stacks** section."
msgstr ""

# 9d37dfec6fba48e6966c254b476e1e81
#: ../../contributing/stacks.md:110 0fbbcb03a1604ae580867cde55c40d63
msgid ""
"Add a bullet with a link to your project and a short description of what "
"your Docker image contains."
msgstr ""

#: ../../contributing/tests.md:1 f91724f822e24fcba919e083b084ca45
msgid "# Image Tests"
msgstr ""

# 6dbd44985f3c4ba1a3823c90c5944ad0
#: ../../contributing/tests.md:3 395ec2759b474c9f85f9ad3257698b96
msgid ""
"We greatly appreciate pull requests that extend the automated tests that "
"vet the basic functionality of the Docker images."
msgstr ""

#: ../../contributing/tests.md:5 b323f7a815d2454e83ed26d308216912
msgid "## How the Tests Work"
msgstr ""

#: ../../contributing/tests.md:7 fcf6f9c0b04044bdbe2c7571af8d2b23
msgid ""
"Travis executes `make build-test-all` against pull requests submitted to "
"the `jupyter/docker-stacks` repository. This `make` command builds every "
@@ -643,45 +664,49 @@ msgid ""
"stacks/blob/master/conftest.py) file at the root of the projects."
msgstr ""

#: ../../contributing/tests.md:9 c71863eb5bdb4854a39b34628d72dfd0
msgid "## Contributing New Tests"
msgstr ""

# d317e6be0fbf487e8528ff1fe0bbdb78
#: ../../contributing/tests.md:11 470e47ae65fa4016aa0c1ae14e712ed6
msgid "Please follow the process below to add new tests:"
msgstr ""

#: ../../contributing/tests.md:13 f4c00ca94aa94692b36ccb8f06206c47
msgid ""
"If the test should run against every image built, add your test code to "
"one of the modules in [test/](https://github.com/jupyter/docker-"
"stacks/tree/master/test) or create a new module."
msgstr ""

#: ../../contributing/tests.md:14 63ad10675b464777b040f97a4ed0a899
msgid ""
"If your test should run against a single image, add your test code to one"
" of the modules in `some-notebook/test/` or create a new module."
msgstr ""
#: ../../contributing/tests.md:18 b45d2bf2cf074d7bb93688eabfe0df6d
msgid ""
"Build one or more images you intend to test and run the tests locally. If"
" you use `make`, call:"
msgstr ""

#: ../../contributing/tests.md:16 b6e3eea4174c46d49e686db36eba888f
msgid "```bash make build/somestack-notebook make test/somestack-notebook ```"
msgstr ""

#: ../../contributing/tests.md:22 f467f7df33474582a828db15ee7dc02f
msgid ""
"Discuss changes with the maintainers and address any issues running the "
"tests on Travis."
msgstr ""
#: ../../contributing/translations.md:1 65ee5900baf24a08bd51bdc3e92ce7ba
msgid "# Doc Translations"
msgstr ""

#: ../../contributing/translations.md:3 2b1fa2892ced4b9f89b04d4a6b318935
msgid ""
"We are delighted when members of the Jupyter community want to help "
"translate these documentation pages to other languages. If you're "
@@ -690,14 +715,14 @@ msgid ""
"updating translations of the Jupyter Docker Stacks documentation."
msgstr ""

#: ../../contributing/translations.md:5 b7cca77b29a8449596dbf63616c7e904
msgid ""
"Follow the steps documented on the [Getting Started as a "
"Translator](https://docs.transifex.com/getting-started-1/translators) "
"page."
msgstr ""

#: ../../contributing/translations.md:6 b47cb99fb92a46feade7db04170c433b
msgid ""
"Look for *jupyter-docker-stacks* when prompted to choose a translation "
"team. Alternatively, visit https://www.transifex.com/project-jupyter"
@@ -705,7 +730,7 @@ msgid ""
"the project."
msgstr ""

#: ../../contributing/translations.md:7 606cb04206b6458498fd573ced96e22c
msgid ""
"See [Translating with the Web "
"Editor](https://docs.transifex.com/translation/translating-with-the-web-"
@@ -1216,3 +1241,74 @@ msgstr ""
#~ msgid "See Translating with the Web Editor in the Transifex documentation."
#~ msgstr ""
#~ msgid ""
#~ "2. Please build the image locally "
#~ "before submitting a pull request. "
#~ "Building the image locally shortens the"
#~ " debugging cycle by taking some load"
#~ " off [Travis CI](http://travis-ci.org/), "
#~ "which graciously provides free build "
#~ "services for open source projects like"
#~ " this one. If you use `make`, "
#~ "call: ``` make build/somestack-notebook "
#~ "``` 3. [Submit a pull "
#~ "request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-"
#~ "by-step-guide-on-preparing-and-"
#~ "submitting-a-pull-request) (PR) with your "
#~ "changes. 4. Watch for Travis to "
#~ "report a build success or failure "
#~ "for your PR on GitHub. 5. Discuss"
#~ " changes with the maintainers and "
#~ "address any build issues."
#~ msgstr ""
#~ msgid ""
#~ "3. Please build the image locally "
#~ "before submitting a pull request. "
#~ "Building the image locally shortens the"
#~ " debugging cycle by taking some load"
#~ " off [Travis CI](http://travis-ci.org/), "
#~ "which graciously provides free build "
#~ "services for open source projects like"
#~ " this one. If you use `make`, "
#~ "call: ``` make build/somestack-notebook "
#~ "``` 4. [Submit a pull "
#~ "request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-"
#~ "by-step-guide-on-preparing-and-"
#~ "submitting-a-pull-request) (PR) with your "
#~ "changes. 5. Watch for Travis to "
#~ "report a build success or failure "
#~ "for your PR on GitHub. 6. Discuss"
#~ " changes with the maintainers and "
#~ "address any build issues. Version "
#~ "conflicts are the most common problem."
#~ " You may need to upgrade additional"
#~ " packages to fix build failures."
#~ msgstr ""
#~ msgid "``` pip install cookiecutter # or conda install cookiecutter ```"
#~ msgstr ""
#~ msgid ""
#~ "``` cookiecutter https://github.com/jupyter"
#~ "/cookiecutter-docker-stacks.git ```"
#~ msgstr ""
#~ msgid ""
#~ "3. Build one or more images you"
#~ " intend to test and run the "
#~ "tests locally. If you use `make`, "
#~ "call: ``` make build/somestack-notebook "
#~ "make test/somestack-notebook ``` 4. "
#~ "[Submit a pull "
#~ "request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-"
#~ "by-step-guide-on-preparing-and-"
#~ "submitting-a-pull-request) (PR) with your "
#~ "changes. 5. Watch for Travis to "
#~ "report a build success or failure "
#~ "for your PR on GitHub. 6. Discuss"
#~ " changes with the maintainers and "
#~ "address any issues running the tests "
#~ "on Travis."
#~ msgstr ""
@@ -9,7 +9,7 @@ msgid ""
msgstr ""
"Project-Id-Version: docker-stacks latest\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-05-28 00:44+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -19,50 +19,44 @@ msgstr ""
"Generated-By: Babel 2.8.0\n"

# 22f1bd46933144e092bf92e3af4c6f4f
#: ../../index.rst:32
msgid "User Guide"
msgstr ""

# f35d75046f8c42ae8cab58d826154823
#: ../../index.rst:42
msgid "Contributor Guide"
msgstr ""

# a737afe726cd49c4986d75b7d74eeed3
#: ../../index.rst:54
msgid "Maintainer Guide"
msgstr ""

#: ../../index.rst:60
msgid "Jupyter Discourse Forum"
msgstr ""

#: ../../index.rst:60
msgid "Stacks Issue Tracker"
msgstr ""

#: ../../index.rst:60
msgid "Jupyter Website"
msgstr ""

# 9cd216fa91ef40bbb957373faaf93732
#: ../../index.rst:60 774ed8768c6c4144ab19c7d7518d1932
msgid "Getting Help"
msgstr ""

# a0aa0bcd999c4c5e96cc57fd77780f96
#: ../../index.rst:2 dbc22a0d800749c6a2d4628595fe57b3
msgid "Jupyter Docker Stacks"
msgstr ""

# 5d06f458dc524214b2c97e865dd2dc81
#: ../../index.rst:4 8183867bf813431bb337b0594884f0fe
msgid ""
"Jupyter Docker Stacks are a set of ready-to-run Docker images containing "
"Jupyter applications and interactive computing tools. You can use a stack"
@@ -70,32 +64,27 @@ msgid ""
msgstr ""

# c69f151c806e4cdf9bebda05b06c760e
#: ../../index.rst:6 271a99cccdd3476b9b9696e295647c92
msgid "Start a personal Jupyter Notebook server in a local Docker container"
msgstr ""

# b26271409ab743b2a349b3a8ca95233e
#: ../../index.rst:7 f01a318271d64f958c682ae241157bb2
msgid "Run JupyterLab servers for a team using JupyterHub"
msgstr ""

# 4d60f4325fff4ffcad12703a4b9d6781
#: ../../index.rst:8 8e3b6e8fe5e64b8a9523c0dd5b0369c9
msgid "Write your own project Dockerfile"
msgstr ""

# 78b0d31eb6e9462888eef92e6a84cdb7
#: ../../index.rst:11 60ec3253d09e40be8e6852a495248467
msgid "Quick Start"
msgstr ""

# d4c0e237dbe74e0d9afbf2b2f0e219c8
#: ../../index.rst:13 38d5e9d5d0504acaa04b388f2ba031fc
msgid ""
"You can try a `recent build of the jupyter/base-notebook image on "
"mybinder.org <https://mybinder.org/v2/gh/jupyter/docker-"
@@ -107,16 +96,14 @@ msgid ""
msgstr ""

# 051ed23ef62e41058a7c889604f96035
#: ../../index.rst:15 1214b6056fe449b2a8ce59a5cda97355
msgid ""
"The other pages in this documentation describe additional uses and "
"features in detail."
msgstr ""

# e91f3b62a1b54166b966be6d7a4f061e
#: ../../index.rst:17 7b198609a6214812b7922cb12e057279
msgid ""
"**Example 1:** This command pulls the ``jupyter/scipy-notebook`` image "
"tagged ``17aba6048f44`` from Docker Hub if it is not already present on "
@@ -130,8 +117,7 @@ msgid ""
msgstr ""

# e04140e6cd8442f7a6f347d88224f591
#: ../../index.rst:21 1dead775c2d544abb3362633fdb93523
msgid ""
"**Example 2:** This command performs the same operations as **Example "
"1**, but it exposes the server on host port 10000 instead of port 8888. "
@@ -141,8 +127,7 @@ msgid ""
msgstr ""

# 1c3229680cf44a5bb2d8450602bfcf7d
#: ../../index.rst:25 8e75264b16a14d9bb4a1b4a9dee7b0b5
msgid ""
"**Example 3:** This command pulls the ``jupyter/datascience-notebook`` "
"image tagged ``9b06df75e445`` from Docker Hub if it is not already "
@@ -158,8 +143,10 @@ msgid ""
msgstr ""

# 3ac1a41d185844b1b43315a4cc74efc8
#: ../../index.rst:30 e275f6561a2b408fa1202ebb59dfcd14
msgid "Table of Contents"
msgstr ""
#~ msgid "Jupyter Docker Stacks Issue Tracker"
#~ msgstr ""
@@ -8,13 +8,13 @@ This page describes the options supported by the startup script as well as how t

You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) to the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, you can run the following:

```bash
docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e'
```
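The long `sha1:…` string in that command is a salted passphrase hash in the `algorithm:salt:digest` form produced by `IPython.lib.passwd()`. As a rough sketch of that format only (my own illustration using standard Unix tools, not code from this repository; a fixed salt is used here, whereas IPython generates a random 12-hex-digit salt each time):

```bash
# Build a notebook-style password hash "sha1:<salt>:<digest>".
PASSPHRASE='my-secret'
SALT='74ba40f8a388'  # fixed for illustration; normally random
# IPython hashes the passphrase concatenated with the salt.
DIGEST=$(printf '%s%s' "$PASSPHRASE" "$SALT" | sha1sum | awk '{print $1}')
echo "sha1:${SALT}:${DIGEST}"
```

In practice you would generate the value with `python -c 'from IPython.lib import passwd; print(passwd())'` and paste the result into `--NotebookApp.password`.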
Similarly, to set the base URL of the notebook server, you can run the following:

```bash
docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.base_url=/some/path
```
...@@ -23,7 +23,7 @@ docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp ...@@ -23,7 +23,7 @@ docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp
You may instruct the `start-notebook.sh` script to customize the container environment before launching You may instruct the `start-notebook.sh` script to customize the container environment before launching
the notebook server. You do so by passing arguments to the `docker run` command. the notebook server. You do so by passing arguments to the `docker run` command.
* `-e NB_USER=jovyan` - Instructs the startup script to change the default container username from `jovyan` to the provided value. Causes the script to rename the `jovyan` user home folder. For this option to take effect, you must run the container with `--user root` and set the working directory `-w /home/$NB_USER`. This feature is useful when mounting host volumes with specific home folder. * `-e NB_USER=jovyan` - Instructs the startup script to change the default container username from `jovyan` to the provided value. Causes the script to rename the `jovyan` user home folder. For this option to take effect, you must run the container with `--user root`, set the working directory `-w /home/$NB_USER` and set the environment variable `-e CHOWN_HOME=yes` (see below for detail). This feature is useful when mounting host volumes with specific home folder.
* `-e NB_UID=1000` - Instructs the startup script to switch the numeric user ID of `$NB_USER` to the given value. This feature is useful when mounting host volumes with specific owner permissions. For this option to take effect, you must run the container with `--user root`. (The startup script will `su $NB_USER` after adjusting the user ID.) You might consider using the modern Docker options `--user` and `--group-add` instead. See the last bullet below for details.
* `-e NB_GID=100` - Instructs the startup script to change the primary group of `$NB_USER` to `$NB_GID` (the new group is added with the name `$NB_GROUP` if it is defined, otherwise the group is named `$NB_USER`). This feature is useful when mounting host volumes with specific group permissions. For this option to take effect, you must run the container with `--user root`. (The startup script will `su $NB_USER` after adjusting the group ID.) You might consider using the modern Docker options `--user` and `--group-add` instead. See the last bullet below for details. The user is added to the supplemental group `users` (gid 100) in order to allow write access to the home directory and `/opt/conda`. If you override the user/group logic, ensure the user remains in group `users` if you want them to be able to modify files in the image.
* `-e NB_GROUP=<name>` - The name used for `$NB_GID`, which defaults to `$NB_USER`. This is only used if `$NB_GID` is specified, and is completely optional: it has only a cosmetic effect.
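The user/group options above can be combined in a single `docker run`. A minimal sketch, assuming a host directory owned by UID 1000 and GID 100 — the path and IDs below are placeholders to adjust for your host:

```bash
# Run as root so the startup script can adjust the user/group IDs,
# then drop to $NB_USER via su. Host path and IDs are assumptions.
docker run -d -p 8888:8888 \
    --user root \
    -e NB_UID=1000 -e NB_GID=100 \
    -v /some/host/folder:/home/jovyan/work \
    jupyter/base-notebook
```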
...@@ -54,7 +54,7 @@ script for execution details.

You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt` and use them, you might run the following:

```bash
docker run -d -p 8888:8888 \
    -v /some/host/folder:/etc/ssl/notebook \
    jupyter/base-notebook start-notebook.sh \
...@@ -64,7 +64,7 @@ docker run -d -p 8888:8888 \

Alternatively, you may mount a single PEM file containing both the key and certificate. For example:

```bash
docker run -d -p 8888:8888 \
    -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \
    jupyter/base-notebook start-notebook.sh \
...@@ -85,13 +85,13 @@ For additional information about using SSL, see the following:
The `start-notebook.sh` script actually inherits most of its option handling capability from a more generic `start.sh` script. The `start.sh` script supports all of the features described above, but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following:

```bash
docker run -it --rm jupyter/base-notebook start.sh ipython
```

Or, to run JupyterLab instead of the classic notebook, run the following:

```bash
docker run -it --rm -p 8888:8888 jupyter/base-notebook start.sh jupyter lab
```

...@@ -107,7 +107,7 @@ The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/en

The `jovyan` user has full read/write access to the `/opt/conda` directory. You can use either `conda` or `pip` to install new packages without any additional permissions.

```bash
# install a package into the default (python 3.x) environment
pip install some-package
conda install some-package
...
...@@ -17,7 +17,7 @@ orchestrator config.

For example:

```bash
docker run -it -e GRANT_SUDO=yes --user root jupyter/minimal-notebook
```
...@@ -75,7 +75,7 @@ Python 2.x was removed from all images on August 10th, 2017, starting in tag `cc

add a Python 2.x environment by defining your own Dockerfile inheriting from one of the images like so:

```dockerfile
# Choose your desired base image
FROM jupyter/scipy-notebook:latest
...@@ -103,7 +103,7 @@ Ref:

The default version of Python that ships with conda/ubuntu may not be the version you want. To add a conda environment with a different version and make it accessible to Jupyter, the instructions are very similar to Python 2.x but are slightly simpler (no need to switch to `root`):

```dockerfile
# Choose your desired base image
FROM jupyter/minimal-notebook:latest
...@@ -168,12 +168,12 @@ ENTRYPOINT ["jupyter", "lab", "--ip=0.0.0.0", "--allow-root"]
```

And build the image as:

```bash
docker build -t jupyter/scipy-dasklabextension:latest .
```

Once built, run using the command:

```bash
docker run -it --rm -p 8888:8888 -p 8787:8787 jupyter/scipy-dasklabextension:latest
```
...@@ -194,7 +194,7 @@ Ref:

The [RISE](https://github.com/damianavila/RISE) extension lets you create live slideshows of your notebooks, with no conversion, using the Reveal.js javascript library:

```dockerfile
# Add Live slideshows with RISE
RUN conda install -c damianavila82 rise
```
...@@ -207,7 +207,7 @@ Credit: [Paolo D.](https://github.com/pdonorio) based on

You need to install conda's gcc for Python xgboost to work properly. Otherwise, you'll get an exception about libgomp.so.1 missing GOMP_4.0.

```bash
%%bash
conda install -y gcc
pip install xgboost
...@@ -320,8 +320,8 @@ Credit: [Justin Tyberg](https://github.com/jtyberg), [quanghoc](https://github.c

To use a specific version of JupyterHub, the version of `jupyterhub` in your image should match the version in the Hub itself.

```dockerfile
FROM jupyter/base-notebook:5ded1de07260
RUN pip install jupyterhub==0.8.0b1
```
...@@ -383,7 +383,7 @@ Ref:

### Using Local Spark JARs

```python
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars /home/jovyan/spark-streaming-kafka-assembly_2.10-1.6.1.jar pyspark-shell'
import pyspark
...@@ -412,7 +412,7 @@ Ref:

### Use jupyter/all-spark-notebooks with an existing Spark/YARN cluster

```dockerfile
FROM jupyter/all-spark-notebook
# Set env vars for pydoop
...@@ -488,13 +488,13 @@ convenient to launch the server without a password or token. In this case, you s

For JupyterLab:

```bash
docker run jupyter/base-notebook:6d2a05346196 start.sh jupyter lab --LabApp.token=''
```

For Jupyter classic:

```bash
docker run jupyter/base-notebook:6d2a05346196 start.sh jupyter notebook --NotebookApp.token=''
```
...@@ -502,7 +502,7 @@ docker run jupyter/base-notebook:6d2a05346196 start.sh jupyter notebook --Notebo

NB: this works for classic notebooks only

```dockerfile
# Update with your base image of choice
FROM jupyter/minimal-notebook:latest
...@@ -521,7 +521,7 @@ Ref:

Using `auto-sklearn` requires `swig`, which the other notebook images lack, so it can't be experimented with in them. Also, there is no Conda package for `auto-sklearn`.

```dockerfile
ARG BASE_CONTAINER=jupyter/scipy-notebook:latest
FROM $BASE_CONTAINER
...
...@@ -116,11 +116,10 @@ packages from [conda-forge](https://conda-forge.github.io/feedstocks)

| [Dockerfile commit history](https://github.com/jupyter/docker-stacks/commits/master/pyspark-notebook/Dockerfile)
| [Docker Hub image tags](https://hub.docker.com/r/jupyter/pyspark-notebook/tags/)

`jupyter/pyspark-notebook` includes Python support for Apache Spark.

* Everything in `jupyter/scipy-notebook` and its ancestor images
* [Apache Spark](https://spark.apache.org/) with Hadoop binaries

### jupyter/all-spark-notebook

...@@ -128,7 +127,7 @@ packages from [conda-forge](https://conda-forge.github.io/feedstocks)

| [Dockerfile commit history](https://github.com/jupyter/docker-stacks/commits/master/all-spark-notebook/Dockerfile)
| [Docker Hub image tags](https://hub.docker.com/r/jupyter/all-spark-notebook/tags/)

`jupyter/all-spark-notebook` includes Python, R, and Scala support for Apache Spark.

* Everything in `jupyter/pyspark-notebook` and its ancestor images
* [IRKernel](https://irkernel.github.io/) to support R code in Jupyter notebooks
...
...@@ -5,7 +5,8 @@ This page provides details about features specific to one or more images.

## Apache Spark

**Specific Docker Image Options**

* `-p 4040:4040` - The `jupyter/pyspark-notebook` and `jupyter/all-spark-notebook` images open the [SparkUI (Spark Monitoring and Instrumentation UI)](http://spark.apache.org/docs/latest/monitoring.html) at default port `4040`; this option maps the `4040` port inside the docker container to the `4040` port on the host machine. Note that every new Spark context that is created is put onto an incrementing port (i.e. 4040, 4041, 4042, etc.), and it might be necessary to open multiple ports. For example: `docker run -d -p 8888:8888 -p 4040:4040 -p 4041:4041 jupyter/pyspark-notebook`.

**Usage Examples**
...@@ -13,137 +14,193 @@ The `jupyter/pyspark-notebook` and `jupyter/all-spark-notebook` images support t

### Using Spark Local Mode

Spark **local mode** is useful for experimentation on small data when you do not have a Spark cluster available.

#### In Python

In a Python notebook:
```python
from pyspark.sql import SparkSession

# Spark session & context
spark = SparkSession.builder.master('local').getOrCreate()
sc = spark.sparkContext

# Sum of the first 100 whole numbers
rdd = sc.parallelize(range(100 + 1))
rdd.sum()
# 5050
```
#### In R

In an R notebook with [SparkR][sparkr].

```R
library(SparkR)

# Spark session & context
sc <- sparkR.session("local")

# Sum of the first 100 whole numbers
sdf <- createDataFrame(list(1:100))
dapplyCollect(sdf,
              function(x)
              { x <- sum(x)}
              )
# 5050
```

In an R notebook with [sparklyr][sparklyr].

```R
library(sparklyr)

# Spark configuration
conf <- spark_config()
# Set the catalog implementation in-memory
conf$spark.sql.catalogImplementation <- "in-memory"
# Spark session & context
sc <- spark_connect(master = "local", config = conf)

# Sum of the first 100 whole numbers
sdf_len(sc, 100, repartition = 1) %>%
    spark_apply(function(e) sum(e))
# 5050
```
#### In Scala

##### In a Spylon Kernel

The Spylon kernel instantiates a `SparkContext` for you in the variable `sc` after you configure Spark options in a `%%init_spark` magic cell.

```python
%%init_spark
# Configure Spark to use a local master
launcher.master = "local"
```

```scala
// Sum of the first 100 whole numbers
val rdd = sc.parallelize(0 to 100)
rdd.sum()
// 5050
```

##### In an Apache Toree Kernel

Apache Toree instantiates a local `SparkContext` for you in the variable `sc` when the kernel starts.

```scala
// Sum of the first 100 whole numbers
val rdd = sc.parallelize(0 to 100)
rdd.sum()
// 5050
```
### Connecting to a Spark Cluster in Standalone Mode

Connecting to a Spark cluster in **[Standalone Mode](https://spark.apache.org/docs/latest/spark-standalone.html)** requires the following set of steps:

0. Verify that the Docker image (check the Dockerfile) and the Spark cluster being deployed run the same version of Spark.
1. [Deploy Spark in Standalone Mode](http://spark.apache.org/docs/latest/spark-standalone.html).
2. Run the Docker container with `--net=host` in a location that is network addressable by all of your Spark workers. (This is a [Spark networking requirement](http://spark.apache.org/docs/latest/cluster-overview.html#components).)
    * NOTE: When using `--net=host`, you must also use the flags `--pid=host -e TINI_SUBREAPER=true`. See https://github.com/jupyter/docker-stacks/issues/64 for details.
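Step 2 and the note above can be sketched as a single launch command — the image name is an example, and the host must be reachable by the Spark workers:

```bash
# Host networking so workers can reach the driver's ports;
# --pid=host and TINI_SUBREAPER are required alongside --net=host.
docker run -d --net=host --pid=host -e TINI_SUBREAPER=true \
    jupyter/all-spark-notebook
```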
**Note**: In the following examples we use the Spark master URL `spark://master:7077`, which should be replaced by the URL of your Spark master.

#### In Python
The **same Python version** needs to be used in the notebook (where the driver is located) and on the Spark workers. The Python version used on the driver and worker side can be adjusted by setting the environment variables `PYSPARK_PYTHON` and/or `PYSPARK_DRIVER_PYTHON`; see [Spark Configuration][spark-conf] for more information.
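For example, both interpreters can be pinned at container startup — the paths below are assumptions to adapt to your cluster:

```bash
# Hypothetical interpreter paths; they must resolve to the same
# Python version on the workers and in the notebook container.
docker run -d -p 8888:8888 \
    -e PYSPARK_PYTHON=/usr/bin/python3 \
    -e PYSPARK_DRIVER_PYTHON=/opt/conda/bin/python \
    jupyter/pyspark-notebook
```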
```python
from pyspark.sql import SparkSession

# Spark session & context
spark = SparkSession.builder.master('spark://master:7077').getOrCreate()
sc = spark.sparkContext

# Sum of the first 100 whole numbers
rdd = sc.parallelize(range(100 + 1))
rdd.sum()
# 5050
```
#### In R

In an R notebook with [SparkR][sparkr].

```R
library(SparkR)

# Spark session & context
sc <- sparkR.session("spark://master:7077")

# Sum of the first 100 whole numbers
sdf <- createDataFrame(list(1:100))
dapplyCollect(sdf,
              function(x)
              { x <- sum(x)}
              )
# 5050
```

In an R notebook with [sparklyr][sparklyr].

```R
library(sparklyr)

# Spark configuration
conf <- spark_config()
# Set the catalog implementation in-memory
conf$spark.sql.catalogImplementation <- "in-memory"
# Spark session & context
sc <- spark_connect(master = "spark://master:7077", config = conf)

# Sum of the first 100 whole numbers
sdf_len(sc, 100, repartition = 1) %>%
    spark_apply(function(e) sum(e))
# 5050
```
#### In Scala

##### In a Spylon Kernel

The Spylon kernel instantiates a `SparkContext` for you in the variable `sc` after you configure Spark options in a `%%init_spark` magic cell.

```python
%%init_spark
# Configure Spark to use the standalone master
launcher.master = "spark://master:7077"
```

```scala
// Sum of the first 100 whole numbers
val rdd = sc.parallelize(0 to 100)
rdd.sum()
// 5050
```
##### In an Apache Toree Scala Notebook

The Apache Toree kernel automatically creates a `SparkContext` when it starts, based on configuration information from its command line arguments and environment variables. You can pass information about your cluster via the `SPARK_OPTS` environment variable when you spawn a container.

For instance, to pass information about a standalone Spark master, you could start the container like so:

```bash
docker run -d -p 8888:8888 -e SPARK_OPTS='--master=spark://master:7077' \
    jupyter/all-spark-notebook
```

Note that this is the same information expressed in a notebook in the Python case above. Once the kernel spec has your cluster information, you can test your cluster in an Apache Toree notebook like so:
...@@ -152,24 +209,16 @@ Note that this is the same information expressed in a notebook in the Python cas

// should print the value of --master in the kernel spec
println(sc.master)

// Sum of the first 100 whole numbers
val rdd = sc.parallelize(0 to 100)
rdd.sum()
// 5050
```
## Tensorflow

The `jupyter/tensorflow-notebook` image supports the use of [Tensorflow](https://www.tensorflow.org/) in single machine or distributed mode.
### Single Machine Mode

...@@ -199,3 +248,7 @@ init = tf.global_variables_initializer()

sess.run(init)
sess.run(hello)
```

[sparkr]: https://spark.apache.org/docs/latest/sparkr.html
[sparklyr]: https://spark.rstudio.com/
[spark-conf]: https://spark.apache.org/docs/latest/configuration.html
...@@ -12,7 +12,7 @@ See the [installation instructions](https://docs.docker.com/engine/installation/

Build and run a `jupyter/minimal-notebook` container on a VirtualBox VM on a local desktop.

```bash
# create a Docker Machine-controlled VirtualBox VM
bin/vbox.sh mymachine
...@@ -28,7 +28,7 @@ notebook/up.sh

To stop and remove the container:

```bash
notebook/down.sh
```
...@@ -39,14 +39,14 @@ notebook/down.sh

You can customize the docker-stack notebook image to deploy by modifying the `notebook/Dockerfile`. For example, you can build and deploy a `jupyter/all-spark-notebook` by modifying the Dockerfile like so:

```dockerfile
FROM jupyter/all-spark-notebook:55d5ca6be183
...
```

Once you modify the Dockerfile, don't forget to rebuild the image.

```bash
# activate the docker machine
eval "$(docker-machine env mymachine)"
...@@ -57,14 +57,14 @@ notebook/build.sh

Yes. Set environment variables to specify unique names and ports when running the `up.sh` command.

```bash
NAME=my-notebook PORT=9000 notebook/up.sh
NAME=your-notebook PORT=9001 notebook/up.sh
```

To stop and remove the containers:

```bash
NAME=my-notebook notebook/down.sh
NAME=your-notebook notebook/down.sh
```
...@@ -78,7 +78,7 @@ The `up.sh` creates a Docker volume named after the notebook container with a `-

Yes. Set the `WORK_VOLUME` environment variable to the same value for each notebook.

```bash
NAME=my-notebook PORT=9000 WORK_VOLUME=our-work notebook/up.sh
NAME=your-notebook PORT=9001 WORK_VOLUME=our-work notebook/up.sh
```
...@@ -87,7 +87,7 @@ NAME=your-notebook PORT=9001 WORK_VOLUME=our-work notebook/up.sh

To run the notebook server with a self-signed certificate, pass the `--secure` option to the `up.sh` script. You must also provide a password, which will be used to secure the notebook server. You can specify the password by setting the `PASSWORD` environment variable, or by passing it to the `up.sh` script.

```bash
PASSWORD=a_secret notebook/up.sh --secure

# or
...@@ -103,7 +103,7 @@ This example includes the `bin/letsencrypt.sh` script, which runs the `letsencry

The following command will create a certificate chain and store it in a Docker volume named `mydomain-secrets`.

```bash
FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \
    SECRETS_VOLUME=mydomain-secrets \
    bin/letsencrypt.sh
...@@ -111,7 +111,7 @@ FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \

Now run `up.sh` with the `--letsencrypt` option. You must also provide the name of the secrets volume and a password.

```bash
PASSWORD=a_secret SECRETS_VOLUME=mydomain-secrets notebook/up.sh --letsencrypt

# or
...@@ -120,7 +120,7 @@ notebook/up.sh --letsencrypt --password a_secret --secrets mydomain-secrets

Be aware that Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers. To hit their staging servers, set the environment variable `CERT_SERVER=--staging`.

```bash
FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \
    CERT_SERVER=--staging \
    bin/letsencrypt.sh
...@@ -134,13 +134,13 @@ Yes, you should be able to deploy to any Docker Machine-controlled host. To mak ...@@ -134,13 +134,13 @@ Yes, you should be able to deploy to any Docker Machine-controlled host. To mak
To create a Docker machine using a VirtualBox VM on your local desktop:
```bash
bin/vbox.sh mymachine
```
To create a Docker machine using a virtual device on IBM SoftLayer:
```bash
export SOFTLAYER_USER=my_softlayer_username
export SOFTLAYER_API_KEY=my_softlayer_api_key
export SOFTLAYER_DOMAIN=my.domain
......
...@@ -11,7 +11,7 @@ This folder contains a Makefile and a set of supporting files demonstrating how
To show what's possible, here's how to run the `jupyter/minimal-notebook` on a brand new local VirtualBox VM.
```bash
# create a new VM
make virtualbox-vm NAME=dev
# make the new VM the active docker machine
...@@ -30,7 +30,7 @@ The last command will log the IP address and port to visit in your browser.
Yes. Specify a unique name and port on the `make notebook` command.
```bash
make notebook NAME=my-notebook PORT=9000
make notebook NAME=your-notebook PORT=9001
```
...@@ -39,7 +39,7 @@ make notebook NAME=your-notebook PORT=9001
Yes.
```bash
make notebook NAME=my-notebook PORT=9000 WORK_VOLUME=our-work
make notebook NAME=your-notebook PORT=9001 WORK_VOLUME=our-work
```
...@@ -52,7 +52,7 @@ Instead of `make notebook`, run `make self-signed-notebook PASSWORD=your_desired
Yes. Please.
```bash
make letsencrypt FQDN=host.mydomain.com EMAIL=myemail@somewhere.com
make letsencrypt-notebook
```
...@@ -61,7 +61,7 @@ The first command creates a Docker volume named after the notebook container wit
Be aware: Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers. To hit their staging servers, set the environment variable `CERT_SERVER=--staging`.
```bash
make letsencrypt FQDN=host.mydomain.com EMAIL=myemail@somewhere.com CERT_SERVER=--staging
```
...@@ -69,7 +69,7 @@ Also, keep in mind Let's Encrypt certificates are short lived: 90 days at the mo
### My pip/conda/apt-get installs disappear every time I restart the container. Can I make them permanent?
```bash
# add your pip, conda, apt-get, etc. permanent features to the Dockerfile where
# indicated by the comments in the Dockerfile
vi Dockerfile
...@@ -79,7 +79,7 @@ make notebook
### How do I upgrade my Docker container?
```bash
make image DOCKER_ARGS=--pull
make notebook
```
...@@ -90,7 +90,7 @@ The first line pulls the latest version of the Docker image used in the local Do
Yes. There's a `softlayer.makefile` included in this repo as an example. You would use it like so:
```bash
make softlayer-vm NAME=myhost \
    SOFTLAYER_DOMAIN=your_desired_domain \
    SOFTLAYER_USER=your_user_id \
......
...@@ -16,7 +16,7 @@ Loading the Templates
To load the templates, log in to OpenShift from the command line and run:
```bash
oc create -f https://raw.githubusercontent.com/jupyter-on-openshift/docker-stacks/master/examples/openshift/templates.json
```
...@@ -33,7 +33,7 @@ Deploying a Notebook
To deploy a notebook from the command line using the template, run:
```bash
oc new-app --template jupyter-notebook
```
...@@ -71,7 +71,7 @@ A password you can use when accessing the notebook will be auto generated and is
To see the hostname for accessing the notebook run:
```bash
oc get routes
```
...@@ -95,7 +95,7 @@ Passing Template Parameters
To override the name for the notebook, the image used, and the password, you can pass template parameters using the ``--param`` option.
```bash
oc new-app --template jupyter-notebook \
  --param APPLICATION_NAME=mynotebook \
  --param NOTEBOOK_IMAGE=jupyter/scipy-notebook:latest \
...@@ -120,7 +120,7 @@ Deleting the Notebook Instance
To delete the notebook instance, run ``oc delete`` using a label selector for the application name.
```bash
oc delete all,configmap --selector app=mynotebook
```
...@@ -129,7 +129,7 @@ Enabling Jupyter Lab Interface
To enable the Jupyter Lab interface for a deployed notebook, set the ``JUPYTER_ENABLE_LAB`` environment variable.
```bash
oc set env dc/mynotebook JUPYTER_ENABLE_LAB=true
```
...@@ -140,7 +140,7 @@ Adding Persistent Storage
You can upload notebooks and other files using the web interface of the notebook. Any uploaded files or changes you make to them will be lost when the notebook instance is restarted. If you want to save your work, you need to add persistent storage to the notebook. To add persistent storage run:
```bash
oc set volume dc/mynotebook --add \
  --type=pvc --claim-size=1Gi --claim-mode=ReadWriteOnce \
  --claim-name mynotebook-data --name data \
...@@ -149,7 +149,7 @@ oc set volume dc/mynotebook --add \
If you are using a persistent volume, you will need to delete it in a separate step after you have deleted the notebook instance.
```bash
oc delete pvc/mynotebook-data
```
...@@ -158,7 +158,7 @@ Customizing the Configuration
If you want to set any custom configuration for the notebook, you can edit the config map created by the template.
```bash
oc edit configmap/mynotebook-cfg
```
...@@ -176,19 +176,19 @@ Because the configuration is Python code, ensure any indenting is correct. Any e
If the error is in the config map, edit it again to fix it and trigger a new deployment if necessary by running:
```bash
oc rollout latest dc/mynotebook
```
If you make an error in the configuration file stored in the persistent volume, you will need to scale down the notebook so it isn't running.
```bash
oc scale dc/mynotebook --replicas 0
```
Then run:
```bash
oc debug dc/mynotebook
```
...@@ -196,7 +196,7 @@ to run the notebook in debug mode. This will provide you with an interactive ter
Start up the notebook again.
```bash
oc scale dc/mynotebook --replicas 1
```
...@@ -207,7 +207,7 @@ The password for the notebook is supplied as a template parameter, or if not sup
If you want to change the password, you can do so by editing the environment variable on the deployment configuration.
```bash
oc set env dc/mynotebook JUPYTER_NOTEBOOK_PASSWORD=mypassword
```
...@@ -232,13 +232,13 @@ If the image is in your OpenShift project, because you imported the image into O
This can be illustrated by first importing an image into the OpenShift project.
```bash
oc import-image jupyter/datascience-notebook:latest --confirm
```
Then deploy it using the name of the image stream created.
```bash
oc new-app --template jupyter-notebook \
  --param APPLICATION_NAME=mynotebook \
  --param NOTEBOOK_IMAGE=datascience-notebook \
......
...@@ -22,7 +22,7 @@ Getting Started with S2I
As an example of how S2I can be used to create a custom image with a bundled set of notebooks, run:
```bash
s2i build \
  --scripts-url https://raw.githubusercontent.com/jupyter/docker-stacks/master/examples/source-to-image \
  --context-dir docs/source/examples/Notebook \
...@@ -76,7 +76,7 @@ The supplied ``assemble`` script performs a few key steps.
The first steps copy files from the directory where the ``s2i`` command initially places them into the location they need to be in when the image is run.
```bash
cp -Rf /tmp/src/. /home/$NB_USER
rm -rf /tmp/src
...@@ -84,7 +84,7 @@ rm -rf /tmp/src
The next steps are:
```bash
if [ -f /home/$NB_USER/environment.yml ]; then
    conda env update --name root --file /home/$NB_USER/environment.yml
    conda clean --all -f -y
...@@ -101,7 +101,7 @@ This means that so long as a set of notebook files provides one of these files l
A final step is:
```bash
fix-permissions $CONDA_DIR
fix-permissions /home/$NB_USER
```
...@@ -112,7 +112,7 @@ As long as you preserve the first and last set of steps, you can do whatever you
The ``run`` script in this directory is very simple and just runs the notebook application.
```bash
exec start-notebook.sh "$@"
```
...@@ -121,13 +121,13 @@ Integration with OpenShift
The OpenShift platform provides integrated support for S2I type builds. Templates are provided for using the S2I build mechanism with the scripts in this directory. To load the templates run:
```bash
oc create -f https://raw.githubusercontent.com/jupyter/docker-stacks/master/examples/source-to-image/templates.json
```
This will create the templates:
```bash
jupyter-notebook-builder
jupyter-notebook-quickstart
```
...@@ -136,7 +136,7 @@ The templates can be used from the OpenShift web console or command line. This `
To use the OpenShift command line to build the set of notebooks used above into an image and deploy it, run:
```bash
oc new-app --template jupyter-notebook-quickstart \
  --param APPLICATION_NAME=notebook-examples \
  --param GIT_REPOSITORY_URL=https://github.com/jupyter/notebook \
......
...@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
* Build datetime: ${BUILD_TIMESTAMP}
* DockerHub build code: ${BUILD_CODE}
* Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
* Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
* Git commit message:
\`\`\`
......
...@@ -15,7 +15,7 @@ RUN apt-get -y update && \
    apt-get install --no-install-recommends -y openjdk-8-jre-headless ca-certificates-java && \
    rm -rf /var/lib/apt/lists/*
# Using the preferred mirror to download Spark
RUN cd /tmp && \
    wget -q $(wget -qO- https://www.apache.org/dyn/closer.lua/spark/spark-${APACHE_SPARK_VERSION}/spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz\?as_json | \
    python -c "import sys, json; content=json.load(sys.stdin); print(content['preferred']+content['path_info'])") && \
...@@ -24,23 +24,9 @@ RUN cd /tmp && \
    rm spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz
RUN cd /usr/local && ln -s spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION} spark
# Configure Spark
# Install from the Xenial Mesosphere repository since there does not (yet)
# exist a Bionic repository and the dependencies seem to be compatible for now.
COPY mesos.key /tmp/
RUN apt-get -y update && \
apt-get install --no-install-recommends -y gnupg && \
apt-key add /tmp/mesos.key && \
echo "deb http://repos.mesosphere.io/ubuntu xenial main" > /etc/apt/sources.list.d/mesosphere.list && \
apt-get -y update && \
apt-get --no-install-recommends -y install mesos=1.2\* && \
apt-get purge --auto-remove -y gnupg && \
rm -rf /var/lib/apt/lists/*
# Spark and Mesos config
ENV SPARK_HOME=/usr/local/spark
ENV PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.7-src.zip \
MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so \
    SPARK_OPTS="--driver-java-options=-Xms1024M --driver-java-options=-Xmx4096M --driver-java-options=-Dlog4j.logLevel=info" \
    PATH=$PATH:$SPARK_HOME/bin
......
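The mirror-selection trick in the `RUN` instruction above pipes Apache's `closer.lua` `?as_json` response into a Python one-liner. As a minimal sketch of that parsing step (the payload below is illustrative, not a real response; only the `preferred` and `path_info` fields the Dockerfile relies on are shown):

```python
import json

# Illustrative closer.lua ?as_json payload; the field values here are
# made up for the example, only the two keys used by the Dockerfile matter
sample = json.loads(
    '{"preferred": "https://downloads.apache.org/", '
    '"path_info": "spark/spark-2.4.3/spark-2.4.3-bin-hadoop2.7.tgz"}'
)

# Same logic as the python -c one-liner: concatenating the preferred
# mirror with the path yields the download URL handed to wget
url = sample["preferred"] + sample["path_info"]
print(url)
```

This keeps the Dockerfile pinned to whichever mirror Apache currently recommends instead of hard-coding one.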
[![docker pulls](https://img.shields.io/docker/pulls/jupyter/pyspark-notebook.svg)](https://hub.docker.com/r/jupyter/pyspark-notebook/) [![docker stars](https://img.shields.io/docker/stars/jupyter/pyspark-notebook.svg)](https://hub.docker.com/r/jupyter/pyspark-notebook/) [![image metadata](https://images.microbadger.com/badges/image/jupyter/pyspark-notebook.svg)](https://microbadger.com/images/jupyter/pyspark-notebook "jupyter/pyspark-notebook image metadata")
# Jupyter Notebook Python, Spark Stack
Please visit the documentation site for help using and contributing to this image and others.
......
...@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
* Build datetime: ${BUILD_TIMESTAMP}
* DockerHub build code: ${BUILD_CODE}
* Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
* Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
* Git commit message:
\`\`\`
......
...@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
* Build datetime: ${BUILD_TIMESTAMP}
* DockerHub build code: ${BUILD_CODE}
* Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
* Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
* Git commit message:
\`\`\`
......
...@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
* Build datetime: ${BUILD_TIMESTAMP}
* DockerHub build code: ${BUILD_CODE}
* Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
* Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
* Git commit message:
\`\`\`
......
...@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
* Build datetime: ${BUILD_TIMESTAMP}
* DockerHub build code: ${BUILD_CODE}
* Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
* Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
* Git commit message:
\`\`\`
......