Experiment 20180530-NOOR

Relaxation improves on expansion by reaching 100% precision
Repeated in [20190524-NOOR]

Experiment design

DockerOS DockerEXP

Date: 20180723

Hypotheses: Relaxation improves on expansion by reaching 100% precision

Variation of: 20170529-NOOR

4 agents; 10 runs; 10000 games

Adaptation operators: delete replace refine add addjoin refadd

Experimental setting: Same as [[20170529-NOOR]], putatively replaying the same runs as [[20180308-NOOR]], after fixing addjoin and expansion.


Controlled variables: ['revisionModality']

Dependent variables: ['srate', 'size', 'inc', 'prec', 'fmeas', 'rec', 'conv']


Date: 20180723

Performer: Jérôme Euzenat (INRIA)

Lazy lavender hash: d50e70f87bca76951ec2f149dc8ae1d42b9a1a28

Classpath: lib/lazylav/ll.jar:lib/slf4j/logback-classic-1.2.3.jar:lib/slf4j/logback-core-1.2.3.jar:.

OS: stretch

Parameter file: params.sh

Executed command (script.sh):


for op in ${OPS}
do
   bash scripts/runexp.sh -d 4-10000-${op}-clever-nr-im80 java -Dlog.level=INFO -cp ${JPATH} fr.inria.exmo.lazylavender.engine.Monitor ${OPT} ${LOADOPT} -DrevisionModality=${op} -DexpandAlignments=clever -DnonRedundancy -DimmediateRatio=80
done
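The loop depends on variables presumably set by the parameter file (OPS, JPATH, OPT, LOADOPT); their actual values are not recorded in this note. A hypothetical sketch of params.sh, consistent with the operator list and classpath given above, would be:

```shell
# Hypothetical reconstruction -- the actual params.sh is not reproduced in this note.
# OPS: the six adaptation operators tested (from the design section above).
OPS="delete replace refine add addjoin refadd"
# JPATH: the classpath given in the metadata above.
JPATH="lib/lazylav/ll.jar:lib/slf4j/logback-classic-1.2.3.jar:lib/slf4j/logback-core-1.2.3.jar:."
# OPT / LOADOPT: additional JVM options; left empty here since their values are unknown.
OPT=""
LOADOPT=""

# Dry run: print the run directory that script.sh would use for each operator.
for op in ${OPS}
do
  echo "4-10000-${op}-clever-nr-im80"
done
```

This is only a sketch for readability; the authoritative values are in params.sh in the repository.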



Class used: NOOEnvironment, AlignmentAdjustingAgent, AlignmentRevisionExperiment, ActionLogger, AverageLogger, Monitor

Execution environment: Debian Linux virtual machine configured with four processors and 20GB of RAM, running on a Dell PowerEdge T610 with 4 Intel Xeon Quad Core 1.9GHz E5-2420 processors under Linux ProxMox 2 (Debian); Java 1.8.0 HotSpot.

Input: the input required for reproducibility can be retrieved from https://files.inria.fr/sakere/input/expeRun.zip or DOI; then unzip it with `unzip expeRun.zip -d input`.

Raw results


All plots compare -clever-nr-im80 (this experiment, plain) with -clever-nr (20180529-NOOR, dotted) and im80 (20180311-NOOR, dashed).

| Operator | Success rate | Size | Incoherence | Semantic precision | Semantic F-measure | Semantic recall | Convergence |
|----------|-------------:|-----:|------------:|-------------------:|-------------------:|----------------:|------------:|
| delete   | 0.98 | 14 | 0.00 | 1.00 | 0.23 | 0.13 | 2116 |
| replace  | 0.97 | 25 | 0.00 | 1.00 | 0.39 | 0.24 | 3038 |
| refine   | 0.96 | 34 | 0.00 | 1.00 | 0.52 | 0.35 | 1946 |
| add      | 0.95 | 48 | 0.00 | 1.00 | 0.63 | 0.46 | 2792 |
| addjoin  | 0.97 | 47 | 0.00 | 1.00 | 0.62 | 0.45 | 3191 |
| refadd   | 0.96 | 70 | 0.00 | 1.00 | 0.85 | 0.73 | 8114 |
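As a consistency check (not part of the original protocol), the semantic F-measure column can be recomputed from the precision and recall columns as their harmonic mean, F = 2PR/(P+R). A minimal Python sketch using the rounded figures from the table:

```python
# Recompute semantic F-measure as the harmonic mean of precision and recall.
# Values are the (rounded) precision/recall figures from the results table.
results = {
    "delete":  (1.00, 0.13),
    "replace": (1.00, 0.24),
    "refine":  (1.00, 0.35),
    "add":     (1.00, 0.46),
    "addjoin": (1.00, 0.45),
    "refadd":  (1.00, 0.73),
}

def fmeasure(prec, rec):
    """Harmonic mean of precision and recall."""
    return 2 * prec * rec / (prec + rec)

for op, (p, r) in results.items():
    print(f"{op}: F = {fmeasure(p, r):.2f}")
```

The recomputed values match the table to two decimals, except refadd (0.84 instead of 0.85), a discrepancy attributable to the inputs already being rounded to two decimals.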


Analyst: Jérôme Euzenat (INRIA) (2018-08-16)

Key points:

  • As expected, all operators reach 100% precision
  • The -clever-nr-im80 modality has slightly lower semantic recall than -clever-nr, but achieves a better F-measure than both -clever-nr and -im80
  • The modality converges more slowly
  • Globally, we observe the same behaviour as in 20170529-NOOR and 20170215b-NOOR (with lower performance for refine)

Further experiments:


This file can be retrieved from URL https://sake.re/20180530-NOOR

It is possible to check out the repository by cloning https://felapton.inrialpes.fr/cakes/20180530-NOOR.git

This experiment has been transferred from its initial location at https://gforge.inria.fr (no longer available)

The original, unaltered associated zip file can be obtained from https://files.inria.fr/sakere/gforge/20180530-NOOR.zip

See original markdown (20180530-NOOR.md) or HTML (20180530-NOOR.html) files.