Commit 64c143d3 authored by Romain

Merge branch 'master' into hadolint

parents 6f2e7cb5 76402a27
Thanks for contributing! Please see the
__Contributor Guide__ section in [the documentation](https://jupyter-docker-stacks.readthedocs.io) for
information about how to contribute
[package updates](http://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/packages.html),
[recipes](http://jupyter-docker-stacks.readthedocs.io/en/latest/contributing/recipes.html),
@@ -48,6 +48,8 @@ arch_patch/%: ## apply hardware architecture specific patches to the Dockerfile

build/%: DARGS?=
build/%: ## build the latest image for a stack
	docker build $(DARGS) --rm --force-rm -t $(OWNER)/$(notdir $@):latest ./$(notdir $@)
	@echo -n "Built image size: "
	@docker images $(OWNER)/$(notdir $@):latest --format "{{.Size}}"

build-all: $(foreach I,$(ALL_IMAGES),arch_patch/$(I) build/$(I) ) ## build all stacks
build-test-all: $(foreach I,$(ALL_IMAGES),arch_patch/$(I) build/$(I) test/$(I) ) ## build and test all stacks

@@ -145,4 +147,4 @@ test/%: ## run tests against a stack (only common tests or common tests + specif

	@if [ ! -d "$(notdir $@)/test" ]; then TEST_IMAGE="$(OWNER)/$(notdir $@)" pytest -m "not info" test; \
	else TEST_IMAGE="$(OWNER)/$(notdir $@)" pytest -m "not info" test $(notdir $@)/test; fi

test-all: $(foreach I,$(ALL_IMAGES),test/$(I)) ## test all stacks
\ No newline at end of file
[![docker pulls](https://img.shields.io/docker/pulls/jupyter/all-spark-notebook.svg)](https://hub.docker.com/r/jupyter/all-spark-notebook/) [![docker stars](https://img.shields.io/docker/stars/jupyter/all-spark-notebook.svg)](https://hub.docker.com/r/jupyter/all-spark-notebook/) [![image metadata](https://images.microbadger.com/badges/image/jupyter/all-spark-notebook.svg)](https://microbadger.com/images/jupyter/all-spark-notebook "jupyter/all-spark-notebook image metadata")

# Jupyter Notebook Python, Scala, R, Spark Stack

Please visit the documentation site for help using and contributing to this image and others.
@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
* Build datetime: ${BUILD_TIMESTAMP}
* DockerHub build code: ${BUILD_CODE}
* Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
* Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
* Git commit message:
\`\`\`
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pyspark.sql import SparkSession\n",
"\n",
"# Spark session & context\n",
"spark = SparkSession.builder.master('local').getOrCreate()\n",
"sc = spark.sparkContext\n",
"\n",
"# Sum of the first 100 whole numbers\n",
"rdd = sc.parallelize(range(100 + 1))\n",
"rdd.sum()\n",
"# 5050"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
\ No newline at end of file
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(SparkR)\n",
"\n",
"# Spark session & context\n",
"sc <- sparkR.session(\"local\")\n",
"\n",
"# Sum of the first 100 whole numbers\n",
"sdf <- createDataFrame(list(1:100))\n",
"dapplyCollect(sdf,\n",
" function(x) \n",
" { x <- sum(x)}\n",
" )\n",
"# 5050"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "R",
"language": "R",
"name": "ir"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
\ No newline at end of file
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(sparklyr)\n",
"\n",
"# get the default config\n",
"conf <- spark_config()\n",
"# Set the catalog implementation in-memory\n",
"conf$spark.sql.catalogImplementation <- \"in-memory\"\n",
"\n",
"# Spark session & context\n",
"sc <- spark_connect(master = \"local\", config = conf)\n",
"\n",
"# Sum of the first 100 whole numbers\n",
"sdf_len(sc, 100, repartition = 1) %>% \n",
" spark_apply(function(e) sum(e))\n",
"# 5050"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "R",
"language": "R",
"name": "ir"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
\ No newline at end of file
{
"cells": [
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"%%init_spark\n",
"# Spark session & context\n",
"launcher.master = \"local\"\n",
"launcher.conf.spark.executor.cores = 1"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[8] at parallelize at <console>:28\n",
"res4: Double = 5050.0\n"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"// Sum of the first 100 whole numbers\n",
"val rdd = sc.parallelize(0 to 100)\n",
"rdd.sum()\n",
"// 5050"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "spylon-kernel",
"language": "scala",
"name": "spylon-kernel"
},
"language_info": {
"codemirror_mode": "text/x-scala",
"file_extension": ".scala",
"help_links": [
{
"text": "MetaKernel Magics",
"url": "https://metakernel.readthedocs.io/en/latest/source/README.html"
}
],
"mimetype": "text/x-scala",
"name": "scala",
"pygments_lexer": "scala",
"version": "0.4.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
\ No newline at end of file
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Waiting for a Spark session to start..."
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"spark://master:7077\n"
]
}
],
"source": [
"// should print the value of --master in the kernel spec\n",
"println(sc.master)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Waiting for a Spark session to start..."
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"rdd = ParallelCollectionRDD[0] at parallelize at <console>:28\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"5050.0"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"// Sum of the first 100 whole numbers\n",
"val rdd = sc.parallelize(0 to 100)\n",
"rdd.sum()\n",
"// 5050"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Apache Toree - Scala",
"language": "scala",
"name": "apache_toree_scala"
},
"language_info": {
"codemirror_mode": "text/x-scala",
"file_extension": ".scala",
"mimetype": "text/x-scala",
"name": "scala",
"pygments_lexer": "scala",
"version": "2.11.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
\ No newline at end of file
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
import logging
import os

import pytest

LOGGER = logging.getLogger(__name__)


@pytest.mark.parametrize(
    "test_file",
    # TODO: add local_sparklyr
    ["local_pyspark", "local_spylon", "local_toree", "local_sparkR"],
)
def test_nbconvert(container, test_file):
    """Check if Spark notebooks can be executed"""
    host_data_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), "data")
    cont_data_dir = "/home/jovyan/data"
    output_dir = "/tmp"
    timeout_ms = 600
    LOGGER.info(f"Test that {test_file} notebook can be executed ...")
    command = f"jupyter nbconvert --to markdown --ExecutePreprocessor.timeout={timeout_ms} --output-dir {output_dir} --execute {cont_data_dir}/{test_file}.ipynb"
    c = container.run(
        volumes={host_data_dir: {"bind": cont_data_dir, "mode": "ro"}},
        tty=True,
        command=["start.sh", "bash", "-c", command],
    )
    rv = c.wait(timeout=timeout_ms / 10 + 10)
    assert rv == 0 or rv["StatusCode"] == 0, f"Command {command} failed"
    logs = c.logs(stdout=True).decode("utf-8")
    LOGGER.debug(logs)
    expected_file = f"{output_dir}/{test_file}.md"
    assert expected_file in logs, f"Expected file {expected_file} not generated"
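The container invocation above boils down to a single nbconvert command string. A minimal sketch of how that string is composed, outside any container (names mirror the test; the concrete values here are illustrative, not taken from a running image):

```python
# Sketch of the nbconvert command the test assembles. Values are
# illustrative; the real test derives test_file from the parametrize
# list and runs the command inside the container via start.sh.
output_dir = "/tmp"
timeout = 600  # seconds allowed by the ExecutePreprocessor
cont_data_dir = "/home/jovyan/data"
test_file = "local_pyspark"

command = (
    f"jupyter nbconvert --to markdown "
    f"--ExecutePreprocessor.timeout={timeout} "
    f"--output-dir {output_dir} "
    f"--execute {cont_data_dir}/{test_file}.ipynb"
)

# On success, nbconvert writes <notebook name>.md into output_dir,
# which is what the test greps for in the container logs.
expected_file = f"{output_dir}/{test_file}.md"
```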
@@ -117,7 +117,7 @@ RUN conda install --quiet --yes 'tini=0.18.0' && \

RUN conda install --quiet --yes \
    'notebook=6.0.3' \
    'jupyterhub=1.1.0' \
    'jupyterlab=2.1.3' && \
    conda clean --all -f -y && \
    npm cache clean --force && \
    jupyter notebook --generate-config && \
@@ -5,6 +5,9 @@
FROM ppc64le/ubuntu:18.04

LABEL maintainer="Ilsiyar Gaynutdinov <ilsiyar_gaynutdinov@ru.ibm.com>"

ARG NB_USER="jovyan"
ARG NB_UID="1000"
ARG NB_GID="100"

USER root

@@ -13,88 +16,121 @@ USER root
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update \
 && apt-get install -yq --no-install-recommends \
    wget \
    bzip2 \
    ca-certificates \
    sudo \
    locales \
    fonts-liberation \
    run-one \
 && apt-get clean && rm -rf /var/lib/apt/lists/*

RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
    locale-gen

# Configure environment
ENV CONDA_DIR=/opt/conda \
    SHELL=/bin/bash \
    NB_USER=$NB_USER \
    NB_UID=$NB_UID \
    NB_GID=$NB_GID \
    LC_ALL=en_US.UTF-8 \
    LANG=en_US.UTF-8 \
    LANGUAGE=en_US.UTF-8
ENV PATH=$CONDA_DIR/bin:$PATH \
    HOME=/home/$NB_USER

# Copy a script that we will use to correct permissions after running certain commands
COPY fix-permissions /usr/local/bin/fix-permissions
RUN chmod a+rx /usr/local/bin/fix-permissions

# Enable prompt color in the skeleton .bashrc before creating the default NB_USER
RUN sed -i 's/^#force_color_prompt=yes/force_color_prompt=yes/' /etc/skel/.bashrc

# Create NB_USER with name jovyan, UID=1000, in the 'users' group
# and make sure these dirs are writable by the `users` group.
RUN echo "auth requisite pam_deny.so" >> /etc/pam.d/su && \
    sed -i.bak -e 's/^%admin/#%admin/' /etc/sudoers && \
    sed -i.bak -e 's/^%sudo/#%sudo/' /etc/sudoers && \
    useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \
    mkdir -p $CONDA_DIR && \
    chown $NB_USER:$NB_GID $CONDA_DIR && \
    chmod g+w /etc/passwd && \
    fix-permissions $HOME && \
    fix-permissions $CONDA_DIR

USER $NB_UID
WORKDIR $HOME
ARG PYTHON_VERSION=default

# Setup work directory for backward-compatibility
RUN mkdir /home/$NB_USER/work && \
    fix-permissions /home/$NB_USER

# Install conda as jovyan and check the md5 sum provided on the download site
ENV MINICONDA_VERSION=4.8.2 \
    MINICONDA_MD5=e50662a93f3f5e56ef2d3fdfaf2f8e91 \
    CONDA_VERSION=4.8.2
RUN cd /tmp && \
    wget --quiet https://repo.continuum.io/miniconda/Miniconda3-py37_${MINICONDA_VERSION}-Linux-ppc64le.sh && \
    echo "${MINICONDA_MD5} *Miniconda3-py37_${MINICONDA_VERSION}-Linux-ppc64le.sh" | md5sum -c - && \
    /bin/bash Miniconda3-py37_${MINICONDA_VERSION}-Linux-ppc64le.sh -f -b -p $CONDA_DIR && \
    rm -rf Miniconda3-py37_${MINICONDA_VERSION}-Linux-ppc64le.sh && \
    echo "conda ${CONDA_VERSION}" >> $CONDA_DIR/conda-meta/pinned && \
    conda config --system --prepend channels conda-forge && \
    conda config --system --set auto_update_conda false && \
    conda config --system --set show_channel_urls true && \
    conda config --system --set channel_priority strict && \
    if [ ! $PYTHON_VERSION = 'default' ]; then conda install --yes python=$PYTHON_VERSION; fi && \
    conda list python | grep '^python ' | tr -s ' ' | cut -d '.' -f 1,2 | sed 's/$/.*/' >> $CONDA_DIR/conda-meta/pinned && \
    conda install --quiet --yes conda && \
    conda install --quiet --yes pip && \
    conda update --all --quiet --yes && \
    conda clean --all -f -y && \
    rm -rf /home/$NB_USER/.cache/yarn && \
    fix-permissions $CONDA_DIR && \
    fix-permissions /home/$NB_USER

# Install Tini
RUN conda install --quiet --yes 'tini=0.18.0' && \
    conda list tini | grep tini | tr -s ' ' | cut -d ' ' -f 1,2 >> $CONDA_DIR/conda-meta/pinned && \
    conda clean --all -f -y && \
    fix-permissions $CONDA_DIR && \
    fix-permissions /home/$NB_USER

# Install Jupyter Notebook, Lab, and Hub
# Generate a notebook server config
# Cleanup temporary files
# Correct permissions
# Do all this in a single RUN command to avoid duplicating all of the
# files across image layers when the permissions change
RUN conda install --quiet --yes \
    'notebook=6.0.3' \
    'jupyterhub=1.1.0' \
    'jupyterlab=2.1.1' && \
    conda clean --all -f -y && \
    npm cache clean --force && \
    jupyter notebook --generate-config && \
    rm -rf $CONDA_DIR/share/jupyter/lab/staging && \
    rm -rf /home/$NB_USER/.cache/yarn && \
    fix-permissions $CONDA_DIR && \
    fix-permissions /home/$NB_USER

EXPOSE 8888

# Configure container startup
ENTRYPOINT ["tini", "-g", "--"]
CMD ["start-notebook.sh"]

# Copy local files as late as possible to avoid cache busting
COPY start.sh start-notebook.sh start-singleuser.sh /usr/local/bin/
COPY jupyter_notebook_config.py /etc/jupyter/

# Fix permissions on /etc/jupyter as root
USER root
RUN fix-permissions /etc/jupyter/

# Switch back to jovyan to avoid accidental container runs as root
USER $NB_UID
@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
* Build datetime: ${BUILD_TIMESTAMP}
* DockerHub build code: ${BUILD_CODE}
* Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
* Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
* Git commit message:
\`\`\`
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
import time
import logging

import pytest

LOGGER = logging.getLogger(__name__)


def test_cli_args(container, http_client):
    """Container should respect notebook server command line args

@@ -61,6 +64,37 @@ def test_gid_change(container):
    assert 'groups=110(jovyan),100(users)' in logs
def test_nb_user_change(container):
    """Container should change the user name (`NB_USER`) of the default user."""
    nb_user = "nayvoj"
    running_container = container.run(
        tty=True,
        user="root",
        environment=[f"NB_USER={nb_user}",
                     "CHOWN_HOME=yes"],
        working_dir=f"/home/{nb_user}",
        command=['start.sh', 'bash', '-c', 'sleep infinity']
    )

    LOGGER.info(f"Checking if the user is changed to {nb_user} by the start script ...")
    output = running_container.logs(stdout=True).decode("utf-8")
    assert f"Set username to: {nb_user}" in output, f"User is not changed to {nb_user}"

    LOGGER.info(f"Checking {nb_user} id ...")
    command = "id"
    expected_output = f"uid=1000({nb_user}) gid=100(users) groups=100(users)"
    cmd = running_container.exec_run(command, user=nb_user)
    output = cmd.output.decode("utf-8").strip("\n")
    assert output == expected_output, f"Bad user {output}, expected {expected_output}"

    LOGGER.info(f"Checking if {nb_user} owns his home folder ...")
    command = f'stat -c "%U %G" /home/{nb_user}/'
    expected_output = f"{nb_user} users"
    cmd = running_container.exec_run(command)
    output = cmd.output.decode("utf-8").strip("\n")
    assert output == expected_output, f"Bad owner for the {nb_user} home folder {output}, expected {expected_output}"
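The `id` check above compares the whole output string at once. Purely for illustration, a small parser makes the individual fields explicit; this helper is hypothetical and not part of the test suite:

```python
import re

def parse_id_output(id_output: str) -> dict:
    """Parse `uid=1000(jovyan) gid=100(users) ...` into a dict (sketch only)."""
    m = re.match(r"uid=(\d+)\((\w+)\) gid=(\d+)\((\w+)\)", id_output)
    if m is None:
        raise ValueError(f"unexpected id output: {id_output!r}")
    return {
        "uid": int(m.group(1)),
        "user": m.group(2),
        "gid": int(m.group(3)),
        "group": m.group(4),
    }

# Sample string matching the expected_output format used by the test above.
info = parse_id_output("uid=1000(nayvoj) gid=100(users) groups=100(users)")
```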

def test_chown_extra(container):
    """Container should change the UID/GID of CHOWN_EXTRA."""
    c = container.run(
@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
* Build datetime: ${BUILD_TIMESTAMP}
* DockerHub build code: ${BUILD_CODE}
* Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
* Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
* Git commit message:
\`\`\`
@@ -25,9 +25,9 @@ If there's agreement that the feature belongs in one or more of the core stacks:

1. Implement the feature in a local clone of the `jupyter/docker-stacks` project.
2. Please build the image locally before submitting a pull request. Building the image locally shortens the debugging cycle by taking some load off [Travis CI](http://travis-ci.org/), which graciously provides free build services for open source projects like this one. If you use `make`, call:
```bash
make build/somestack-notebook
```
3. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with your changes.
4. Watch for Travis to report a build success or failure for your PR on GitHub.
5. Discuss changes with the maintainers and address any build issues.
@@ -7,9 +7,9 @@ Please follow the process below to update a package version:

1. Locate the Dockerfile containing the library you wish to update (e.g., [base-notebook/Dockerfile](https://github.com/jupyter/docker-stacks/blob/master/base-notebook/Dockerfile), [scipy-notebook/Dockerfile](https://github.com/jupyter/docker-stacks/blob/master/scipy-notebook/Dockerfile))
2. Adjust the version number for the package. We prefer to pin the major and minor version number of packages so as to minimize rebuild side-effects when users submit pull requests (PRs). For example, you'll find the Jupyter Notebook package, `notebook`, installed using conda with `notebook=5.4.*`.
3. Please build the image locally before submitting a pull request. Building the image locally shortens the debugging cycle by taking some load off [Travis CI](http://travis-ci.org/), which graciously provides free build services for open source projects like this one. If you use `make`, call:
```bash
make build/somestack-notebook
```
4. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with your changes.
5. Watch for Travis to report a build success or failure for your PR on GitHub.
6. Discuss changes with the maintainers and address any build issues. Version conflicts are the most common problem. You may need to upgrade additional packages to fix build failures.
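The `notebook=5.4.*` pin in step 2 accepts any patch release of 5.4 while blocking other minor versions. A rough illustration of what such a wildcard admits, using Python's `fnmatch` purely for demonstration (conda's actual MatchSpec matching is richer than shell-style globbing):

```python
from fnmatch import fnmatch

# A major.minor pin in the style used throughout the Dockerfiles.
pin = "5.4.*"

# Patch releases match; a different minor version does not.
for version in ["5.4.0", "5.4.1", "5.5.0"]:
    print(version, fnmatch(version, pin))
```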
@@ -13,13 +13,13 @@ This approach mirrors how we build and share the core stack images. Feel free to

First, install [cookiecutter](https://github.com/audreyr/cookiecutter) using pip or conda:

```bash
pip install cookiecutter # or conda install cookiecutter
```

Run the cookiecutter command pointing to the [jupyter/cookiecutter-docker-stacks](https://github.com/jupyter/cookiecutter-docker-stacks) project on GitHub.

```bash
cookiecutter https://github.com/jupyter/cookiecutter-docker-stacks.git
```
@@ -13,10 +13,10 @@ Please follow the process below to add new tests:

1. If the test should run against every image built, add your test code to one of the modules in [test/](https://github.com/jupyter/docker-stacks/tree/master/test) or create a new module.
2. If your test should run against a single image, add your test code to one of the modules in `some-notebook/test/` or create a new module.
3. Build one or more images you intend to test and run the tests locally. If you use `make`, call:
```bash
make build/somestack-notebook
make test/somestack-notebook
```
4. [Submit a pull request](https://github.com/PointCloudLibrary/pcl/wiki/A-step-by-step-guide-on-preparing-and-submitting-a-pull-request) (PR) with your changes.
5. Watch for Travis to report a build success or failure for your PR on GitHub.
6. Discuss changes with the maintainers and address any issues running the tests on Travis.
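Under the hood, `make test/somestack-notebook` exports a `TEST_IMAGE` environment variable and invokes pytest against it. A hedged sketch of how a test helper might resolve the image under test (the default name below is an assumption for illustration; the real suite wires this up through its own fixtures):

```python
import os

def image_name() -> str:
    """Resolve the image under test from TEST_IMAGE, with an illustrative default."""
    return os.environ.get("TEST_IMAGE", "jupyter/base-notebook")

# Mimic what the Makefile's test/% target does before running pytest.
os.environ["TEST_IMAGE"] = "jupyter/somestack-notebook"
print(image_name())  # → jupyter/somestack-notebook
```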
\ No newline at end of file
@@ -63,5 +63,5 @@ Table of Contents
   :caption: Getting Help

   Jupyter Discourse Forum <https://discourse.jupyter.org>
   Stacks Issue Tracker <https://github.com/jupyter/docker-stacks/issues>
   Jupyter Website <https://jupyter.org>
\ No newline at end of file
...@@ -9,7 +9,7 @@ msgid "" ...@@ -9,7 +9,7 @@ msgid ""
msgstr "" msgstr ""
"Project-Id-Version: docker-stacks latest\n" "Project-Id-Version: docker-stacks latest\n"
"Report-Msgid-Bugs-To: \n" "Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2020-01-20 04:43+0000\n" "POT-Creation-Date: 2020-05-28 00:44+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n" "Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n" "Language-Team: LANGUAGE <LL@li.org>\n"
...@@ -19,50 +19,44 @@ msgstr "" ...@@ -19,50 +19,44 @@ msgstr ""
"Generated-By: Babel 2.8.0\n" "Generated-By: Babel 2.8.0\n"
# 22f1bd46933144e092bf92e3af4c6f4f
#: ../../index.rst:32
msgid "User Guide"
msgstr ""

# f35d75046f8c42ae8cab58d826154823
#: ../../index.rst:42
msgid "Contributor Guide"
msgstr ""

# a737afe726cd49c4986d75b7d74eeed3
#: ../../index.rst:54
msgid "Maintainer Guide"
msgstr ""

#: ../../index.rst:60
msgid "Jupyter Discourse Forum"
msgstr ""

#: ../../index.rst:60
msgid "Stacks Issue Tracker"
msgstr ""

#: ../../index.rst:60
msgid "Jupyter Website"
msgstr ""

# 9cd216fa91ef40bbb957373faaf93732
#: ../../index.rst:60 774ed8768c6c4144ab19c7d7518d1932
msgid "Getting Help"
msgstr ""

# a0aa0bcd999c4c5e96cc57fd77780f96
#: ../../index.rst:2 dbc22a0d800749c6a2d4628595fe57b3
msgid "Jupyter Docker Stacks"
msgstr ""
# 5d06f458dc524214b2c97e865dd2dc81
#: ../../index.rst:4 8183867bf813431bb337b0594884f0fe
msgid ""
"Jupyter Docker Stacks are a set of ready-to-run Docker images containing "
"Jupyter applications and interactive computing tools. You can use a stack"
@@ -70,32 +64,27 @@ msgid ""
msgstr ""

# c69f151c806e4cdf9bebda05b06c760e
#: ../../index.rst:6 271a99cccdd3476b9b9696e295647c92
msgid "Start a personal Jupyter Notebook server in a local Docker container"
msgstr ""

# b26271409ab743b2a349b3a8ca95233e
#: ../../index.rst:7 f01a318271d64f958c682ae241157bb2
msgid "Run JupyterLab servers for a team using JupyterHub"
msgstr ""

# 4d60f4325fff4ffcad12703a4b9d6781
#: ../../index.rst:8 8e3b6e8fe5e64b8a9523c0dd5b0369c9
msgid "Write your own project Dockerfile"
msgstr ""

# 78b0d31eb6e9462888eef92e6a84cdb7
#: ../../index.rst:11 60ec3253d09e40be8e6852a495248467
msgid "Quick Start"
msgstr ""

# d4c0e237dbe74e0d9afbf2b2f0e219c8
#: ../../index.rst:13 38d5e9d5d0504acaa04b388f2ba031fc
msgid ""
"You can try a `recent build of the jupyter/base-notebook image on "
"mybinder.org <https://mybinder.org/v2/gh/jupyter/docker-"
@@ -107,16 +96,14 @@ msgid ""
msgstr ""

# 051ed23ef62e41058a7c889604f96035
#: ../../index.rst:15 1214b6056fe449b2a8ce59a5cda97355
msgid ""
"The other pages in this documentation describe additional uses and "
"features in detail."
msgstr ""
# e91f3b62a1b54166b966be6d7a4f061e
#: ../../index.rst:17 7b198609a6214812b7922cb12e057279
msgid ""
"**Example 1:** This command pulls the ``jupyter/scipy-notebook`` image "
"tagged ``17aba6048f44`` from Docker Hub if it is not already present on "
@@ -130,8 +117,7 @@ msgid ""
msgstr ""

# e04140e6cd8442f7a6f347d88224f591
#: ../../index.rst:21 1dead775c2d544abb3362633fdb93523
msgid ""
"**Example 2:** This command performs the same operations as **Example "
"1**, but it exposes the server on host port 10000 instead of port 8888. "
@@ -141,8 +127,7 @@ msgid ""
msgstr ""

# 1c3229680cf44a5bb2d8450602bfcf7d
#: ../../index.rst:25 8e75264b16a14d9bb4a1b4a9dee7b0b5
msgid ""
"**Example 3:** This command pulls the ``jupyter/datascience-notebook`` "
"image tagged ``9b06df75e445`` from Docker Hub if it is not already "
@@ -158,8 +143,10 @@ msgid ""
msgstr ""

# 3ac1a41d185844b1b43315a4cc74efc8
#: ../../index.rst:30 e275f6561a2b408fa1202ebb59dfcd14
msgid "Table of Contents"
msgstr ""

#~ msgid "Jupyter Docker Stacks Issue Tracker"
#~ msgstr ""
@@ -8,13 +8,13 @@ This page describes the options supported by the startup script as well as how t

You can pass [Jupyter command line options](https://jupyter.readthedocs.io/en/latest/projects/jupyter-command.html) to the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, you can run the following:

```bash
docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e'
```

As another example, to set the base URL of the notebook server, run the following:

```bash
docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp.base_url=/some/path
```
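The password hash above follows the salted `algorithm:salt:digest` layout produced by `IPython.lib.passwd()`. As a rough stdlib-only sketch of how such a hash can be generated and verified — assuming that layout; the details of IPython's real implementation (salt length, encoding) may differ, so use `IPython.lib.passwd()` itself for actual deployments:

```python
import hashlib
import secrets

def make_passwd_hash(passphrase: str, algorithm: str = "sha1") -> str:
    """Build a salted hash in the 'algorithm:salt:digest' layout shown above."""
    salt = secrets.token_hex(6)  # 12 hex characters of random salt
    digest = hashlib.new(algorithm, (passphrase + salt).encode("utf-8")).hexdigest()
    return f"{algorithm}:{salt}:{digest}"

def check_passwd_hash(passphrase: str, hashed: str) -> bool:
    """Verify a passphrase against a hash in the same layout."""
    algorithm, salt, digest = hashed.split(":")
    candidate = hashlib.new(algorithm, (passphrase + salt).encode("utf-8")).hexdigest()
    # constant-time comparison to avoid timing leaks
    return secrets.compare_digest(candidate, digest)
```

The salt is what makes two hashes of the same passphrase differ, so the whole `sha1:…:…` string must be passed to `--NotebookApp.password`, not just the digest.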
@@ -23,7 +23,7 @@ docker run -d -p 8888:8888 jupyter/base-notebook start-notebook.sh --NotebookApp

You may instruct the `start-notebook.sh` script to customize the container environment before launching
the notebook server. You do so by passing arguments to the `docker run` command.

* `-e NB_USER=jovyan` - Instructs the startup script to change the default container username from `jovyan` to the provided value. Causes the script to rename the `jovyan` user home folder. For this option to take effect, you must run the container with `--user root`, set the working directory with `-w /home/$NB_USER`, and set the environment variable `-e CHOWN_HOME=yes` (see below for details). This feature is useful when mounting host volumes with a specific home folder.
* `-e NB_UID=1000` - Instructs the startup script to switch the numeric user ID of `$NB_USER` to the given value. This feature is useful when mounting host volumes with specific owner permissions. For this option to take effect, you must run the container with `--user root`. (The startup script will `su $NB_USER` after adjusting the user ID.) You might consider using the modern Docker options `--user` and `--group-add` instead. See the last bullet below for details.
* `-e NB_GID=100` - Instructs the startup script to change the primary group of `$NB_USER` to `$NB_GID` (the new group is added with the name `$NB_GROUP` if it is defined, otherwise the group is named `$NB_USER`). This feature is useful when mounting host volumes with specific group permissions. For this option to take effect, you must run the container with `--user root`. (The startup script will `su $NB_USER` after adjusting the group ID.) You might consider using the modern Docker options `--user` and `--group-add` instead. See the last bullet below for details. The user is added to the supplemental group `users` (gid 100) in order to allow write access to the home directory and `/opt/conda`. If you override the user/group logic, ensure the user stays in group `users` if you want them to be able to modify files in the image.
* `-e NB_GROUP=<name>` - The name used for `$NB_GID`, which defaults to `$NB_USER`. This is used only if `$NB_GID` is specified and is completely optional: it has only a cosmetic effect.
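To make the interplay of these flags concrete, here is a small illustrative helper — purely hypothetical, not part of the images — that assembles a `docker run` argument list following the rules above: `NB_UID`/`NB_GID` require `--user root`, and `NB_USER` additionally needs `-w /home/$NB_USER` and `CHOWN_HOME=yes`:

```python
from typing import List, Optional

def nb_run_args(image: str,
                nb_user: Optional[str] = None,
                nb_uid: Optional[int] = None,
                nb_gid: Optional[int] = None) -> List[str]:
    """Assemble a `docker run` argv for the NB_* startup options described above."""
    args = ["docker", "run", "-d", "-p", "8888:8888"]
    if nb_user or nb_uid is not None or nb_gid is not None:
        # All of these options only take effect when the container starts as root.
        args += ["--user", "root"]
    if nb_user:
        # NB_USER also needs the working directory and CHOWN_HOME to take effect.
        args += ["-e", f"NB_USER={nb_user}", "-w", f"/home/{nb_user}", "-e", "CHOWN_HOME=yes"]
    if nb_uid is not None:
        args += ["-e", f"NB_UID={nb_uid}"]
    if nb_gid is not None:
        args += ["-e", f"NB_GID={nb_gid}"]
    return args + [image]

print(" ".join(nb_run_args("jupyter/base-notebook", nb_uid=1000, nb_gid=100)))
# → docker run -d -p 8888:8888 --user root -e NB_UID=1000 -e NB_GID=100 jupyter/base-notebook
```

The point of the sketch is the coupling: forgetting `--user root` (or, for `NB_USER`, the `-w` and `CHOWN_HOME` companions) silently makes the corresponding `NB_*` variable a no-op.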
@@ -54,7 +54,7 @@ script for execution details.

You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt` and use them, you might run the following:

```bash
docker run -d -p 8888:8888 \
 -v /some/host/folder:/etc/ssl/notebook \
 jupyter/base-notebook start-notebook.sh \
@@ -64,7 +64,7 @@ docker run -d -p 8888:8888 \

Alternatively, you may mount a single PEM file containing both the key and certificate. For example:

```bash
docker run -d -p 8888:8888 \
 -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \
 jupyter/base-notebook start-notebook.sh \
@@ -85,13 +85,13 @@ For additional information about using SSL, see the following:

The `start-notebook.sh` script actually inherits most of its option handling capability from a more generic `start.sh` script. The `start.sh` script supports all of the features described above, but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following:

```bash
docker run -it --rm jupyter/base-notebook start.sh ipython
```

Or, to run JupyterLab instead of the classic notebook, run the following:

```bash
docker run -it --rm -p 8888:8888 jupyter/base-notebook start.sh jupyter lab
```
@@ -107,7 +107,7 @@ The default Python 3.x [Conda environment](http://conda.pydata.org/docs/using/en

The `jovyan` user has full read/write access to the `/opt/conda` directory. You can use either `conda` or `pip` to install new packages without any additional permissions.

```bash
# install a package into the default (python 3.x) environment
pip install some-package
conda install some-package
@@ -17,7 +17,7 @@ orchestrator config.

For example:

```bash
docker run -it -e GRANT_SUDO=yes --user root jupyter/minimal-notebook
```
@@ -75,7 +75,7 @@ Python 2.x was removed from all images on August 10th, 2017, starting in tag `cc

add a Python 2.x environment by defining your own Dockerfile inheriting from one of the images like
so:

```dockerfile
# Choose your desired base image
FROM jupyter/scipy-notebook:latest
@@ -103,7 +103,7 @@ Ref:

The default version of Python that ships with conda/ubuntu may not be the version you want.
To add a conda environment with a different version and make it accessible to Jupyter, the instructions are very similar to Python 2.x but are slightly simpler (no need to switch to `root`):

```dockerfile
# Choose your desired base image
FROM jupyter/minimal-notebook:latest
@@ -168,12 +168,12 @@ ENTRYPOINT ["jupyter", "lab", "--ip=0.0.0.0", "--allow-root"]
```

And build the image as:

```bash
docker build -t jupyter/scipy-dasklabextension:latest .
```

Once built, run using the command:

```bash
docker run -it --rm -p 8888:8888 -p 8787:8787 jupyter/scipy-dasklabextension:latest
```
@@ -194,7 +194,7 @@ Ref:

The [RISE](https://github.com/damianavila/RISE) extension lets you turn your notebooks into live
slideshows, with no conversion, using Reveal.js:

```bash
# Add Live slideshows with RISE
RUN conda install -c damianavila82 rise
```
@@ -207,7 +207,7 @@ Credit: [Paolo D.](https://github.com/pdonorio) based on

You need to install conda's gcc for Python xgboost to work properly. Otherwise, you'll get an
exception about `libgomp.so.1` missing `GOMP_4.0`.

```bash
%%bash
conda install -y gcc
pip install xgboost
@@ -320,8 +320,8 @@ Credit: [Justin Tyberg](https://github.com/jtyberg), [quanghoc](https://github.c

To use a specific version of JupyterHub, the version of `jupyterhub` in your image should match the
version in the Hub itself.

```dockerfile
FROM jupyter/base-notebook:5ded1de07260
RUN pip install jupyterhub==0.8.0b1
```
@@ -383,7 +383,7 @@ Ref:

### Using Local Spark JARs

```python
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars /home/jovyan/spark-streaming-kafka-assembly_2.10-1.6.1.jar pyspark-shell'
import pyspark
@@ -412,7 +412,7 @@ Ref:

### Use jupyter/all-spark-notebook with an existing Spark/YARN cluster

```dockerfile
FROM jupyter/all-spark-notebook

# Set env vars for pydoop
@@ -488,13 +488,13 @@ convenient to launch the server without a password or token. In this case, you s

For JupyterLab:

```bash
docker run jupyter/base-notebook:6d2a05346196 start.sh jupyter lab --LabApp.token=''
```

For Jupyter classic:

```bash
docker run jupyter/base-notebook:6d2a05346196 start.sh jupyter notebook --NotebookApp.token=''
```
@@ -502,7 +502,7 @@ docker run jupyter/base-notebook:6d2a05346196 start.sh jupyter notebook --Notebo

NB: this works for classic notebooks only

```dockerfile
# Update with your base image of choice
FROM jupyter/minimal-notebook:latest
@@ -521,7 +521,7 @@ Ref:

Using `auto-sklearn` requires `swig`, which the other notebook images lack, so it can't be experimented with there. Also, there is no Conda package for `auto-sklearn`.

```dockerfile
ARG BASE_CONTAINER=jupyter/scipy-notebook
FROM jupyter/scipy-notebook:latest
@@ -116,11 +116,10 @@ packages from [conda-forge](https://conda-forge.github.io/feedstocks)

| [Dockerfile commit history](https://github.com/jupyter/docker-stacks/commits/master/pyspark-notebook/Dockerfile)
| [Docker Hub image tags](https://hub.docker.com/r/jupyter/pyspark-notebook/tags/)

`jupyter/pyspark-notebook` includes Python support for Apache Spark.

* Everything in `jupyter/scipy-notebook` and its ancestor images
* [Apache Spark](https://spark.apache.org/) with Hadoop binaries

### jupyter/all-spark-notebook

@@ -128,7 +127,7 @@ packages from [conda-forge](https://conda-forge.github.io/feedstocks)

| [Dockerfile commit history](https://github.com/jupyter/docker-stacks/commits/master/all-spark-notebook/Dockerfile)
| [Docker Hub image tags](https://hub.docker.com/r/jupyter/all-spark-notebook/tags/)

`jupyter/all-spark-notebook` includes Python, R, and Scala support for Apache Spark.

* Everything in `jupyter/pyspark-notebook` and its ancestor images
* [IRKernel](https://irkernel.github.io/) to support R code in Jupyter notebooks
@@ -12,7 +12,7 @@ See the [installation instructions](https://docs.docker.com/engine/installation/

Build and run a `jupyter/minimal-notebook` container on a VirtualBox VM on your local desktop.

```bash
# create a Docker Machine-controlled VirtualBox VM
bin/vbox.sh mymachine
@@ -28,7 +28,7 @@ notebook/up.sh

To stop and remove the container:

```bash
notebook/down.sh
```
@@ -39,14 +39,14 @@ notebook/down.sh

You can customize the docker-stack notebook image to deploy by modifying the `notebook/Dockerfile`. For example, you can build and deploy a `jupyter/all-spark-notebook` by modifying the Dockerfile like so:

```dockerfile
FROM jupyter/all-spark-notebook:55d5ca6be183
...
```

Once you modify the Dockerfile, don't forget to rebuild the image.

```bash
# activate the docker machine
eval "$(docker-machine env mymachine)"
@@ -57,14 +57,14 @@ notebook/build.sh

Yes. Set environment variables to specify unique names and ports when running the `up.sh` command.

```bash
NAME=my-notebook PORT=9000 notebook/up.sh
NAME=your-notebook PORT=9001 notebook/up.sh
```

To stop and remove the containers:

```bash
NAME=my-notebook notebook/down.sh
NAME=your-notebook notebook/down.sh
```
@@ -78,7 +78,7 @@ The `up.sh` creates a Docker volume named after the notebook container with a `-

Yes. Set the `WORK_VOLUME` environment variable to the same value for each notebook.

```bash
NAME=my-notebook PORT=9000 WORK_VOLUME=our-work notebook/up.sh
NAME=your-notebook PORT=9001 WORK_VOLUME=our-work notebook/up.sh
```
@@ -87,7 +87,7 @@ NAME=your-notebook PORT=9001 WORK_VOLUME=our-work notebook/up.sh

To run the notebook server with a self-signed certificate, pass the `--secure` option to the `up.sh` script. You must also provide a password, which will be used to secure the notebook server. You can specify the password by setting the `PASSWORD` environment variable, or by passing it to the `up.sh` script.

```bash
PASSWORD=a_secret notebook/up.sh --secure

# or
@@ -103,7 +103,7 @@ This example includes the `bin/letsencrypt.sh` script, which runs the `letsencry

The following command will create a certificate chain and store it in a Docker volume named `mydomain-secrets`.

```bash
FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \
 SECRETS_VOLUME=mydomain-secrets \
 bin/letsencrypt.sh
@@ -111,7 +111,7 @@ FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \

Now run `up.sh` with the `--letsencrypt` option. You must also provide the name of the secrets volume and a password.

```bash
PASSWORD=a_secret SECRETS_VOLUME=mydomain-secrets notebook/up.sh --letsencrypt

# or
@@ -120,7 +120,7 @@ notebook/up.sh --letsencrypt --password a_secret --secrets mydomain-secrets

Be aware that Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers. To hit their staging servers, set the environment variable `CERT_SERVER=--staging`.

```bash
FQDN=host.mydomain.com EMAIL=myemail@somewhere.com \
 CERT_SERVER=--staging \
 bin/letsencrypt.sh
@@ -134,13 +134,13 @@ Yes, you should be able to deploy to any Docker Machine-controlled host. To mak

To create a Docker machine using a VirtualBox VM on local desktop:

```bash
bin/vbox.sh mymachine
```
To create a Docker machine using a virtual device on IBM SoftLayer:

```bash
export SOFTLAYER_USER=my_softlayer_username
export SOFTLAYER_API_KEY=my_softlayer_api_key
export SOFTLAYER_DOMAIN=my.domain
......
@@ -11,7 +11,7 @@ This folder contains a Makefile and a set of supporting files demonstrating how
 To show what's possible, here's how to run the `jupyter/minimal-notebook` on a brand new local virtualbox.
-```
+```bash
 # create a new VM
 make virtualbox-vm NAME=dev
 # make the new VM the active docker machine
@@ -30,7 +30,7 @@ The last command will log the IP address and port to visit in your browser.
 Yes. Specify a unique name and port on the `make notebook` command.
-```
+```bash
 make notebook NAME=my-notebook PORT=9000
 make notebook NAME=your-notebook PORT=9001
 ```
@@ -39,7 +39,7 @@ make notebook NAME=your-notebook PORT=9001
 Yes.
-```
+```bash
 make notebook NAME=my-notebook PORT=9000 WORK_VOLUME=our-work
 make notebook NAME=your-notebook PORT=9001 WORK_VOLUME=our-work
 ```
@@ -52,7 +52,7 @@ Instead of `make notebook`, run `make self-signed-notebook PASSWORD=your_desired
 Yes. Please.
-```
+```bash
 make letsencrypt FQDN=host.mydomain.com EMAIL=myemail@somewhere.com
 make letsencrypt-notebook
 ```
@@ -61,7 +61,7 @@ The first command creates a Docker volume named after the notebook container wit
 Be aware: Let's Encrypt has a pretty [low rate limit per domain](https://community.letsencrypt.org/t/public-beta-rate-limits/4772/3) at the moment. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers. To hit their staging servers, set the environment variable `CERT_SERVER=--staging`.
-```
+```bash
 make letsencrypt FQDN=host.mydomain.com EMAIL=myemail@somewhere.com CERT_SERVER=--staging
 ```
@@ -69,7 +69,7 @@ Also, keep in mind Let's Encrypt certificates are short lived: 90 days at the mo
 ### My pip/conda/apt-get installs disappear every time I restart the container. Can I make them permanent?
-```
+```bash
 # add your pip, conda, apt-get, etc. permanent features to the Dockerfile where
 # indicated by the comments in the Dockerfile
 vi Dockerfile
@@ -79,7 +79,7 @@ make notebook
 ### How do I upgrade my Docker container?
-```
+```bash
 make image DOCKER_ARGS=--pull
 make notebook
 ```
@@ -90,7 +90,7 @@ The first line pulls the latest version of the Docker image used in the local Do
 Yes. There's a `softlayer.makefile` included in this repo as an example. You would use it like so:
-```
+```bash
 make softlayer-vm NAME=myhost \
   SOFTLAYER_DOMAIN=your_desired_domain \
   SOFTLAYER_USER=your_user_id \
...
@@ -16,7 +16,7 @@ Loading the Templates
 To load the templates, log in to OpenShift from the command line and run:
-```
+```bash
 oc create -f https://raw.githubusercontent.com/jupyter-on-openshift/docker-stacks/master/examples/openshift/templates.json
 ```
@@ -33,7 +33,7 @@ Deploying a Notebook
 To deploy a notebook from the command line using the template, run:
-```
+```bash
 oc new-app --template jupyter-notebook
 ```
@@ -71,7 +71,7 @@ A password you can use when accessing the notebook will be auto generated and is
 To see the hostname for accessing the notebook run:
-```
+```bash
 oc get routes
 ```
@@ -95,7 +95,7 @@ Passing Template Parameters
 To override the name for the notebook, the image used, and the password, you can pass template parameters using the ``--param`` option.
-```
+```bash
 oc new-app --template jupyter-notebook \
   --param APPLICATION_NAME=mynotebook \
   --param NOTEBOOK_IMAGE=jupyter/scipy-notebook:latest \
@@ -120,7 +120,7 @@ Deleting the Notebook Instance
 To delete the notebook instance, run ``oc delete`` using a label selector for the application name.
-```
+```bash
 oc delete all,configmap --selector app=mynotebook
 ```
@@ -129,7 +129,7 @@ Enabling Jupyter Lab Interface
 To enable the Jupyter Lab interface for a deployed notebook, set the ``JUPYTER_ENABLE_LAB`` environment variable.
-```
+```bash
 oc set env dc/mynotebook JUPYTER_ENABLE_LAB=true
 ```
@@ -140,7 +140,7 @@ Adding Persistent Storage
 You can upload notebooks and other files using the web interface of the notebook. Any uploaded files or changes you make to them will be lost when the notebook instance is restarted. If you want to save your work, you need to add persistent storage to the notebook. To add persistent storage run:
-```
+```bash
 oc set volume dc/mynotebook --add \
   --type=pvc --claim-size=1Gi --claim-mode=ReadWriteOnce \
   --claim-name mynotebook-data --name data \
@@ -149,7 +149,7 @@ oc set volume dc/mynotebook --add \
 If you used a persistent volume, you will need to delete it in a separate step after deleting the notebook instance.
-```
+```bash
 oc delete pvc/mynotebook-data
 ```
@@ -158,7 +158,7 @@ Customizing the Configuration
 If you want to set any custom configuration for the notebook, you can edit the config map created by the template.
-```
+```bash
 oc edit configmap/mynotebook-cfg
 ```
@@ -176,19 +176,19 @@ Because the configuration is Python code, ensure any indenting is correct. Any e
 If the error is in the config map, edit it again to fix it and trigger a new deployment if necessary by running:
-```
+```bash
 oc rollout latest dc/mynotebook
 ```
 If you make an error in the configuration file stored in the persistent volume, you will need to scale down the notebook so it isn't running.
-```
+```bash
 oc scale dc/mynotebook --replicas 0
 ```
 Then run:
-```
+```bash
 oc debug dc/mynotebook
 ```
@@ -196,7 +196,7 @@ to run the notebook in debug mode. This will provide you with an interactive ter
 Start up the notebook again.
-```
+```bash
 oc scale dc/mynotebook --replicas 1
 ```
@@ -207,7 +207,7 @@ The password for the notebook is supplied as a template parameter, or if not sup
 If you want to change the password, you can do so by editing the environment variable on the deployment configuration.
-```
+```bash
 oc set env dc/mynotebook JUPYTER_NOTEBOOK_PASSWORD=mypassword
 ```
@@ -232,13 +232,13 @@ If the image is in your OpenShift project, because you imported the image into O
 This can be illustrated by first importing an image into the OpenShift project.
-```
+```bash
 oc import-image jupyter/datascience-notebook:latest --confirm
 ```
 Then deploy it using the name of the image stream created.
-```
+```bash
 oc new-app --template jupyter-notebook \
   --param APPLICATION_NAME=mynotebook \
   --param NOTEBOOK_IMAGE=datascience-notebook \
...
@@ -22,7 +22,7 @@ Getting Started with S2I
 As an example of how S2I can be used to create a custom image with a bundled set of notebooks, run:
-```
+```bash
 s2i build \
   --scripts-url https://raw.githubusercontent.com/jupyter/docker-stacks/master/examples/source-to-image \
   --context-dir docs/source/examples/Notebook \
@@ -76,7 +76,7 @@ The supplied ``assemble`` script performs a few key steps.
 The first steps copy files into the location they need to be when the image is run, from the directory where they are initially placed by the ``s2i`` command.
-```
+```bash
 cp -Rf /tmp/src/. /home/$NB_USER
 rm -rf /tmp/src
@@ -84,7 +84,7 @@ rm -rf /tmp/src
 The next steps are:
-```
+```bash
 if [ -f /home/$NB_USER/environment.yml ]; then
     conda env update --name root --file /home/$NB_USER/environment.yml
     conda clean --all -f -y
@@ -101,7 +101,7 @@ This means that so long as a set of notebook files provides one of these files l
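As a concrete illustration, here is how the guard in the ``assemble`` script behaves against a minimal `environment.yml`. The package names below are hypothetical, not taken from this repository, and the `conda` call itself is replaced by a message so the sketch runs anywhere:

```shell
# Write a minimal environment.yml such as a notebook repo might bundle.
# Package names here are purely illustrative.
cat > environment.yml << 'EOF'
name: root
dependencies:
  - numpy
  - pandas
EOF

# The assemble script's guard, minus the actual conda invocation:
if [ -f environment.yml ]; then
    MSG="environment.yml found: conda env update would run here"
else
    MSG="no environment.yml: conda env update skipped"
fi
echo "$MSG"
```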
 A final step is:
-```
+```bash
 fix-permissions $CONDA_DIR
 fix-permissions /home/$NB_USER
 ```
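``fix-permissions`` is a helper baked into the base notebook image. The sketch below only approximates its effect (making the tree group-writable, with setgid on directories so the image still works under arbitrary user IDs); it is not the real script, which may also adjust group ownership:

```shell
# Rough approximation of fix-permissions, not the script from the base image.
fix_permissions_sketch() {
    for d in "$@"; do
        # group gets read/write, plus execute where a dir or already executable
        find "$d" -exec chmod g+rwX {} \;
        # setgid on directories so files created later inherit the group
        find "$d" -type d -exec chmod g+s {} \;
    done
}

# Demonstrate on a scratch directory.
demo=$(mktemp -d)
touch "$demo/notebook.ipynb"
chmod 600 "$demo/notebook.ipynb"
fix_permissions_sketch "$demo"
stat -c '%a' "$demo/notebook.ipynb"   # 600 plus g+rw gives 660
```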
@@ -112,7 +112,7 @@ As long as you preserve the first and last set of steps, you can do whatever you
 The ``run`` script in this directory is very simple and just runs the notebook application.
-```
+```bash
 exec start-notebook.sh "$@"
 ```
@@ -121,13 +121,13 @@ Integration with OpenShift
 The OpenShift platform provides integrated support for S2I type builds. Templates are provided for using the S2I build mechanism with the scripts in this directory. To load the templates run:
-```
+```bash
 oc create -f https://raw.githubusercontent.com/jupyter/docker-stacks/master/examples/source-to-image/templates.json
 ```
 This will create the templates:
-```
+```bash
 jupyter-notebook-builder
 jupyter-notebook-quickstart
 ```
@@ -136,7 +136,7 @@ The templates can be used from the OpenShift web console or command line. This `
 To build the set of notebooks used above into an image and deploy it, run from the OpenShift command line:
-```
+```bash
 oc new-app --template jupyter-notebook-quickstart \
   --param APPLICATION_NAME=notebook-examples \
   --param GIT_REPOSITORY_URL=https://github.com/jupyter/notebook \
...
@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
 * Build datetime: ${BUILD_TIMESTAMP}
 * DockerHub build code: ${BUILD_CODE}
 * Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
+* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
 * Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
 * Git commit message:
 \`\`\`
...
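The hunk above adds an image-size line to the build manifest. Because the manifest is written with an unquoted heredoc, the `$(docker images ...)` substitution runs when the hook executes. A self-contained sketch of that mechanism, with a stubbed `docker` command and made-up variable values so it runs without a Docker daemon:

```shell
# Stub the docker CLI so the size lookup works anywhere (illustration only).
docker() { echo "1.01GB"; }

# Mock values normally provided by the Docker Hub build environment.
BUILD_TIMESTAMP="2018-11-01T00:00:00Z"
DOCKER_REPO="jupyter/base-notebook"
GIT_SHA_TAG="abcdef12"
IMAGE_NAME="jupyter/base-notebook:latest"
MANIFEST_FILE=$(mktemp)

# Unquoted EOF: ${...} and $(...) are expanded as the manifest is written.
cat << EOF > "$MANIFEST_FILE"
* Build datetime: ${BUILD_TIMESTAMP}
* Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
EOF

cat "$MANIFEST_FILE"
```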
@@ -15,7 +15,7 @@ RUN apt-get -y update && \
     apt-get install --no-install-recommends -y openjdk-8-jre-headless ca-certificates-java && \
     rm -rf /var/lib/apt/lists/*
-# Using the preferred mirror to download the file
+# Using the preferred mirror to download Spark
 RUN cd /tmp && \
     wget -q $(wget -qO- https://www.apache.org/dyn/closer.lua/spark/spark-${APACHE_SPARK_VERSION}/spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz\?as_json | \
     python -c "import sys, json; content=json.load(sys.stdin); print(content['preferred']+content['path_info'])") && \
@@ -24,23 +24,9 @@ RUN cd /tmp && \
 rm spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz
 RUN cd /usr/local && ln -s spark-${APACHE_SPARK_VERSION}-bin-hadoop${HADOOP_VERSION} spark
-# Mesos dependencies
-# Install from the Xenial Mesosphere repository since there does not (yet)
-# exist a Bionic repository and the dependencies seem to be compatible for now.
-COPY mesos.key /tmp/
-RUN apt-get -y update && \
-    apt-get install --no-install-recommends -y gnupg && \
-    apt-key add /tmp/mesos.key && \
-    echo "deb http://repos.mesosphere.io/ubuntu xenial main" > /etc/apt/sources.list.d/mesosphere.list && \
-    apt-get -y update && \
-    apt-get --no-install-recommends -y install mesos=1.2\* && \
-    apt-get purge --auto-remove -y gnupg && \
-    rm -rf /var/lib/apt/lists/*
-# Spark and Mesos config
+# Configure Spark
 ENV SPARK_HOME=/usr/local/spark
 ENV PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.7-src.zip \
-    MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so \
     SPARK_OPTS="--driver-java-options=-Xms1024M --driver-java-options=-Xmx4096M --driver-java-options=-Dlog4j.logLevel=info" \
     PATH=$PATH:$SPARK_HOME/bin
...
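The Spark download step earlier in this Dockerfile asks Apache's `closer.lua` endpoint for JSON and assembles the preferred mirror URL with a Python one-liner. The extraction can be exercised against a canned payload; the JSON below is a trimmed, hypothetical response, and `python3` is used here for portability where the Dockerfile invokes `python`:

```shell
# Trimmed, hypothetical closer.lua-style response.
SAMPLE_JSON='{"preferred": "https://mirror.example.org/", "path_info": "spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz"}'

# Same extraction as the Dockerfile's one-liner: preferred mirror + file path.
DOWNLOAD_URL=$(echo "$SAMPLE_JSON" | \
    python3 -c "import sys, json; content=json.load(sys.stdin); print(content['preferred']+content['path_info'])")

echo "$DOWNLOAD_URL"
# → https://mirror.example.org/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
```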
 [![docker pulls](https://img.shields.io/docker/pulls/jupyter/pyspark-notebook.svg)](https://hub.docker.com/r/jupyter/pyspark-notebook/) [![docker stars](https://img.shields.io/docker/stars/jupyter/pyspark-notebook.svg)](https://hub.docker.com/r/jupyter/pyspark-notebook/) [![image metadata](https://images.microbadger.com/badges/image/jupyter/pyspark-notebook.svg)](https://microbadger.com/images/jupyter/pyspark-notebook "jupyter/pyspark-notebook image metadata")
-# Jupyter Notebook Python, Spark, Mesos Stack
+# Jupyter Notebook Python, Spark Stack
 Please visit the documentation site for help using and contributing to this image and others.
...
@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
 * Build datetime: ${BUILD_TIMESTAMP}
 * DockerHub build code: ${BUILD_CODE}
 * Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
+* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
 * Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
 * Git commit message:
 \`\`\`
...
@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
 * Build datetime: ${BUILD_TIMESTAMP}
 * DockerHub build code: ${BUILD_CODE}
 * Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
+* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
 * Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
 * Git commit message:
 \`\`\`
...
@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
 * Build datetime: ${BUILD_TIMESTAMP}
 * DockerHub build code: ${BUILD_CODE}
 * Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
+* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
 * Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
 * Git commit message:
 \`\`\`
...
@@ -2,6 +2,7 @@ cat << EOF > "$MANIFEST_FILE"
 * Build datetime: ${BUILD_TIMESTAMP}
 * DockerHub build code: ${BUILD_CODE}
 * Docker image: ${DOCKER_REPO}:${GIT_SHA_TAG}
+* Docker image size: $(docker images ${IMAGE_NAME} --format "{{.Size}}")
 * Git commit SHA: [${SOURCE_COMMIT}](https://github.com/jupyter/docker-stacks/commit/${SOURCE_COMMIT})
 * Git commit message:
 \`\`\`
...