Agents play the knowledge-based agreement game using their symbolic knowledge bases, which they adapt with a single adaptation operator.
Date: 2026-01-22
Designer: Richard Trézeux
Hypothesis: Agents using a single adaptation operator do not reach consensus on which decisions to take.
10 agents; 2 environments; 5 runs per environment.
The independent variables were varied as follows:
LEARNING_METHOD = [singleop1, singleop2, singleop3, singleop4, singleop5, singleop6]
SEED = [24, 2727] (randomly selected)
KB_INIT_METHOD = [random, consistent]
Constants of the experiment:
NBINTERACTIONS = 100000
NBPROPERTIES = 6
NBDECISIONS = 4
ADAPTING_AGENT_SELECTION = accuracy
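The experimental grid is the Cartesian product of the independent variables listed above. A minimal sketch of how the conditions can be enumerated (the variable names are taken from the lists above; how each condition is actually launched is not specified here):

```python
from itertools import product

LEARNING_METHODS = ["singleop1", "singleop2", "singleop3",
                    "singleop4", "singleop5", "singleop6"]
SEEDS = [24, 2727]
KB_INIT_METHODS = ["random", "consistent"]

# Each (learning_method, seed, kb_init_method) triple is one condition.
conditions = list(product(LEARNING_METHODS, SEEDS, KB_INIT_METHODS))
print(len(conditions))  # 6 methods x 2 seeds x 2 init methods = 24
```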
Full results are available on Zenodo.
NBPROPERTIES, NBCLASSES, NBAGENTS, and NBGENERATIONS strongly affect computation time; large values may lead to long simulations.
The directory structure used for analysis is:
results
│
└───method1
│   └───seed1
│   │       logs1_gen1.jsonl
│   │       logs1_gen2.jsonl
│   │       ...
│   │       metrics.json
│   └───seed2
│   │       logs1_gen1.jsonl
│   │       logs1_gen2.jsonl
│   │       ...
│   │       metrics.json
│   └───seed3...
│
└───method2
    └───seed1
    │       logs1_gen1.jsonl
    │       logs1_gen2.jsonl
    │       ...
    │       metrics.json
    └───seed2...
[Figures: results with consistent initialization and with random initialization]
We observe that agents never reach a 100% success rate under any of the single-adaptation-operator strategies.
The following trees represent the different $d^*$ functions used to generate environment objects.