Simulation of agent knowledge evolution
This repository of reproducible experiments gathers descriptions, results, and analyses of multi-agent simulations of knowledge evolution, so far conducted with the Lazy lavender simulator. The web site is operated by the mOeX INRIA team, but open to anyone wanting to publish such experiments: contact us.

As of January 2021, most experiments are still hosted on the mediawiki (the links will take you there), but we are migrating them to the new platform. Instead of wiki pages, they will be generated from Jupyter notebooks. Expect this web site to evolve in the next few months.

This page lists experiments in cultural knowledge evolution performed with Lazy lavender, in the manner of a laboratory notebook. Experiment descriptions and analyses are maintained under git (hence somewhat accountable); results may be found here or in separate zip archives.

Experiments with learning and adaptation of decision ontologies (DOLA)

These are two-step experiments in which agents first individually learn to make decisions, and then adapt what they learnt during collective practice [Bourahla2021a].
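The two-step structure can be sketched as follows. This is an illustrative toy, not the actual Lazy lavender implementation: the feature-based decision rules and the adaptation rule (adopt the peer's decision on disagreement) are assumptions made for the sake of the example.

```python
# Hypothetical sketch of a two-step experiment: individual learning,
# then collective adaptation. Names and rules are illustrative only.
import random

random.seed(1)

def learn_individually(examples):
    """Step 1: an agent induces a decision rule from its own examples
    (majority decision per feature value, standing in for ontology learning)."""
    table = {}
    for feature, decision in examples:
        table.setdefault(feature, []).append(decision)
    return {f: max(set(ds), key=ds.count) for f, ds in table.items()}

def adapt_collectively(agents, interactions=100):
    """Step 2: agents repeatedly interact; on disagreement about a feature,
    one agent adopts the other's decision (a naive adaptation rule)."""
    features = ["round", "square", "red", "blue"]
    for _ in range(interactions):
        a, b = random.sample(agents, 2)
        f = random.choice(features)
        if a.get(f) != b.get(f) and f in b:
            a[f] = b[f]
    return agents
```

The point of the sketch is only the separation of phases: what is learnt alone in step 1 is what gets revised through interaction in step 2.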

Experiments with ontology refinement through communication (ORTC)

In this kind of experiment, agents do not use alignments. They maintain a memory of their interactions and notice when other agents have more refined classes than theirs. They then exchange pieces of class definitions to include in their own ontologies.
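The mechanism can be sketched as below. The representation of class definitions as sets of property constraints, and the subset test for "more refined", are assumptions for illustration, not the published protocol.

```python
# Illustrative sketch of ontology refinement through communication:
# agents remember interactions and adopt pieces of more refined
# class definitions from their peers. All names are hypothetical.

def is_more_refined(peer_def, own_def):
    """A definition is more refined if it strictly extends the constraints."""
    return own_def < peer_def  # strict subset of property constraints

class Agent:
    def __init__(self, ontology):
        self.ontology = ontology  # {class name: frozenset of constraints}
        self.memory = []          # record of past interactions

    def interact(self, peer, cls):
        peer_def = peer.ontology.get(cls)
        own_def = self.ontology.get(cls)
        self.memory.append((cls, peer_def))
        if peer_def and own_def is not None and is_more_refined(peer_def, own_def):
            # include the extra pieces of the peer's class definition
            self.ontology[cls] = own_def | peer_def
```

Note that no alignment between the two ontologies is used: the agents compare definitions of the same class directly and only import the missing pieces.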

Experiments with revision of networks of ontologies (NOOR)

In this kind of experiment, agents revise the alignments between their ontologies as a result of playing a simple interaction game [Euzenat2014b, c, 2017a, b].
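One round of such a game can be sketched as follows. The concrete game (both agents classify the same object and check the correspondence used) and the repair rule (delete an invalidated correspondence) are simplifications assumed here, not the exact revision operators of the cited papers.

```python
# Minimal sketch of alignment revision through an interaction game.
# The alignment maps classes of agent a to classes of agent b;
# a failed round invalidates the correspondence that was used.

def play_game(alignment, classify_a, classify_b, obj):
    """Both agents classify the same object; the round succeeds if the
    correspondence predicts agent b's class from agent a's class."""
    ca, cb = classify_a(obj), classify_b(obj)
    predicted = alignment.get(ca)
    success = predicted == cb
    if not success and ca in alignment:
        del alignment[ca]  # revise: drop the invalidated correspondence
    return success
```

Repeating such rounds over many objects lets the surviving alignment converge towards correspondences that are never contradicted in practice.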


Experiment status legend: Valid, Subsumed, Under scrutiny, Partially valid, Invalidated experiments.