See the prerequisites and make sure that you are familiar with Sake.re. In particular, you will need to retrieve the description of the experiment that you want to reproduce.
Here we distinguish between reproducing an experiment, i.e. generating the results of a particular experiment (through multi-agent simulation), and reproducing an analysis, i.e. performing the analysis of these results.
By replaying, we mean rerunning the experiment in conditions as close as possible to the initial processing. If one wants to rerun an experiment with different conditions, it is good to replay it first and then to alter the conditions. For this, see Rerunning experiments.
The process is summarised by:
$ . params.sh
$ docker build -f ${OSVERS}.dkr -t lazylav:${OSVERS} .
$ docker build -f ${LABEL}.dkr -t lazylav:${LABEL} .
$ docker run --name ${LABEL} -v `pwd`/results:/workdir/results lazylav:${LABEL} process

The results will be found under the local results directory.
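The params.sh file sourced above is expected to define the OSVERS and LABEL variables used by the subsequent commands. A hypothetical example (the actual values depend on the experiment being reproduced):

```shell
# Hypothetical params.sh: actual values depend on the experiment.
# OSVERS names the operating-system docker file; LABEL names the experiment.
OSVERS=buster
LABEL=example-NOOR
```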
Below is a more detailed explanation. Each experiment comes with two docker files specifying the runtime environment of the experiments; they can be downloaded by clicking on them. The first (gray) one (here, ${OSVERS}.dkr) declares the operating system, Java and Lazy lavender versions. It may be shared across experiments: if two experiments have the same one, this step can be skipped. The image corresponding to this environment is built by running (you may have to rename the files, unfortunately):
$ docker build -f ${OSVERS}.dkr -t lazylav:${OSVERS} .

The second (blue) docker file (here) sets the actual Lazy lavender version for the experiment and records the commands implementing the experiment. Its image may be built by:
$ docker build -f ${LABEL}.dkr -t lazylav:${LABEL} .

Once the images have been built, it is possible to replay the experiment. This is achieved by running:
$ docker run --name ${LABEL} -v `pwd`/results:/workdir/results lazylav:${LABEL} process

The resulting output will be found in the results directory. Since some of these experiments may take some time, it may be useful to monitor them through:
$ docker ps -a
It is also possible to run the analysis on the fly, by running:
$ docker run -p 7777:7777 --name ${LABEL} -v `pwd`/results:/workdir/results lazylav:${LABEL} full
First, clone the experiment to reproduce:
$ git clone --recurse-submodules cakes@felapton.inrialpes.fr:${LABEL}.git ${LABEL}
$ cd ${LABEL}
$ git checkout DESIGNED
$ ant -f lazylav/build.xml compileall
$ bash script.sh
Results should be found in the results directory.
First, download the zip archive, then:
$ unzip ${LABEL}.zip
$ cd ${LABEL}
$ rm -rf results # if results are already there
Prepare the software:
$ git submodule update --init
$ ant -f lazylav/build.xml compileall
Finally run the experiments:
$ bash script.sh
Results should be found in the results directory.
By rerunning, we mean performing the same experiment under different conditions. In general, rerunning an experiment when the conditions are very different amounts to retrieving experiment descriptions and designing new experiments from them. Here we consider only very specific alterations, namely software upgrades.
Because most of the conditions are fixed within the Docker image (and we do not plan to alter Docker files), the only opportunity that we have is to run the Docker container with up-to-date software, i.e. Lazy lavender. This is particularly useful to check that a modification of the software did not entail changes in the results.
The Docker file for building the image for rerunning is the same as for the initial experiment. This ensures that we are indeed rerunning the same experiment. The new image is built with a specific argument (--build-arg version=latest):
$ docker build -f example-NOOR.dkr -t lazylav:rerun-example-NOOR --build-arg version=latest .

Beware: Docker files generated between 2017-12 and 2018-06 contain an instruction to edit the `runexp.sh` script to prevent it from pulling the latest version of Lazy lavender. This is now dealt with more elegantly, but it breaks the process. Hence, it is necessary to suppress the snippet `-e '/git pull/{s/^[^#]/# /}'` from the Docker file before building it.
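The snippet can be removed by hand with any text editor. For scripted use, the following bash sketch (the strip_snippet name is illustrative) deletes the literal snippet from a Docker file; a bash parameter substitution with a quoted pattern is used because the snippet is full of regular-expression metacharacters:

```shell
# strip_snippet FILE: remove the obsolete sed snippet from a Docker file.
# Requires bash: the quoted pattern in ${content//"$snippet"/} is matched
# literally, so the snippet's metacharacters need no escaping.
strip_snippet () {
  local dkr=$1
  local snippet="-e '/git pull/{s/^[^#]/# /}'"
  local content
  content=$(cat "$dkr")
  printf '%s\n' "${content//"$snippet"/}" > "$dkr"
}
# e.g.: strip_snippet example-NOOR.dkr
```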
Finally, the execution is launched exactly as before:
$ docker run --name rerun-example-NOOR -v `pwd`/results:/workdir/results lazylav:rerun-example-NOOR process
Performing the same operation with repositories amounts to upgrading the software to the latest version. This may be achieved with:
$ git submodule foreach 'git checkout master'
First ensure that the results are available in the results directory. These results may have been generated by you as above, or may be retrieved from git archives linked from the web site. It is possible to simply move the archive's results directory to the expected place.
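As a sketch (the example-NOOR directory name is hypothetical, and the mkdir merely stands in for the unzipped archive), moving the archive's results directory into place amounts to:

```shell
# Hypothetical layout: the archive was unzipped into example-NOOR/,
# while the docker run commands expect ./results in the current directory.
mkdir -p example-NOOR/results     # stands in for the unzipped archive
mv example-NOOR/results ./results # move the results to the expected place
```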
Simply build the image as above and connect to the Jupyter server:
$ . params.sh
$ docker build -f busternb.dkr -t lazylav:busternb .
$ docker build -f ${LABEL}.dkr -t lazylav:${LABEL} .
$ docker run -p 7777:7777 --name ${LABEL} -v `pwd`/results:/workdir/results lazylav:${LABEL} analyse
Then you may open the browser at http://localhost:7777 with the given token to have access to the results.
You must have Python, Jupyter Notebook and some extensions installed. If they are not already installed, this can be achieved with:
$ python3 -m pip install notebook
$ python3 -m pip install jupyter_contrib_nbextensions
$ jupyter nbextension enable python-markdown/main
$ jupyter nbextension enable hide_input_all/main
$ jupyter nbextension enable collapsible_headings/main
$ jupyter nbextension enable livemdpreview/livemdpreview
$ git clone --recurse-submodules cakes@felapton.inrialpes.fr:${LABEL}.git ${LABEL}
$ cd ${LABEL}
$ git checkout ANALYSED

If a requirements.txt file is available, then proceed:
$ python3 -m pip install -r requirements.txt
After ensuring that the results are in the results directory, launch jupyter:
$ jupyter trust notebook.ipynb
$ jupyter notebook
Then open the notebook.ipynb notebook from the browser (usually http://localhost:8888).
Alternatively, it is possible to check out the PERFORMED tag, but in that case the notebook.ipynb notebook may still have to be elaborated.
You must have installed the same Python and Jupyter software as above. Then unpack the zip archive and prepare the software:
$ unzip ${LABEL}.zip
$ cd ${LABEL}
$ git submodule update --init

If a requirements.txt file is available, then proceed:
$ python3 -m pip install -r requirements.txt
Finally run jupyter:
$ jupyter trust notebook.ipynb
$ jupyter notebook
If you have followed the instructions above, you should be able to publish the reproduced, or non-reproduced, results. You may also want to record the design of an alternative experiment.