Agents adapt ontologies to agree on decision making
Date: 20201001 (Yasser Bourahla)
5 runs; 40000 games
Hypothesis: ['there is no hypothesis here; the goal is to situate how much agents improve compared to A-MAIL']
Variation of: 20200623-DOLA
Environment: ../input/zoology
test set proportion: 0.1
Experimental setting: Agents learn decision trees (transformed into ontologies), get income from the environment, and adapt by splitting their leaf nodes (sketched below); the environment is generated from the dataset
independent variables: ['numberOfAgents']
dependent variables: ['accuracy', 'precision', 'recall', 'taccuracy', 'tprecision', 'trecall', 'fmeasure', 'tfmeasure']
LazyLavender hash: 6edb6ab1e77240dec68475d1263707f2086b22f3
Parameter file: params.sh
Executed command (script.sh):
BEWARE: REPRODUCING THE ANALYSIS TAKES A CONSIDERABLE AMOUNT OF TIME.
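The setting above can be read as the following loop. This is only a minimal sketch under assumed names and data structures (`Node`, `split_leaf`, `play` and the toy environment are hypothetical, not the LazyLavender implementation): agents answer examples drawn from the environment and, when a game yields no income, split the leaf that produced the wrong answer.

```python
# Minimal sketch of the adaptation loop: agents hold decision trees,
# answer examples drawn from the environment, and split the leaf that
# produced a wrong answer. All names are illustrative, not LazyLavender code.
import random
from collections import Counter

class Node:
    def __init__(self, examples):
        self.examples = examples          # [(features: dict, label)]
        self.feature = None               # feature tested by an inner node
        self.children = {}                # feature value -> child Node

    def leaf_for(self, features):
        """Walk down to the node covering `features`."""
        node = self
        while node.feature is not None:
            child = node.children.get(features[node.feature])
            if child is None:             # unseen feature value: stop here
                break
            node = child
        return node

    def predict(self, features):
        """Majority class among the examples covered by the reached node."""
        leaf = self.leaf_for(features)
        return Counter(lab for _, lab in leaf.examples).most_common(1)[0][0]

def split_leaf(leaf):
    """Adaptation step: refine a leaf on a feature separating its examples."""
    if leaf.feature is not None:          # already an inner node
        return
    candidates = [f for f in leaf.examples[0][0]
                  if len({feats[f] for feats, _ in leaf.examples}) > 1]
    if not candidates:                    # indistinguishable examples
        return
    leaf.feature = random.choice(candidates)
    groups = {}
    for feats, lab in leaf.examples:
        groups.setdefault(feats[leaf.feature], []).append((feats, lab))
    leaf.children = {v: Node(exs) for v, exs in groups.items()}

def play(tree, environment, games=40000):
    """The environment presents examples; a failed game triggers adaptation."""
    for _ in range(games):
        feats, lab = random.choice(environment)
        leaf = tree.leaf_for(feats)
        leaf.examples.append((feats, lab))
        if tree.predict(feats) != lab:    # no income from this game: adapt
            split_leaf(leaf)

# Toy environment in the spirit of the zoology dataset:
env = [({"hair": 1, "feathers": 0}, "mammal"),
       ({"hair": 0, "feathers": 1}, "bird"),
       ({"hair": 0, "feathers": 0}, "fish")]
tree = Node([random.choice(env)])
play(tree, env, games=100)
print(tree.predict({"hair": 1, "feathers": 0}))   # most likely "mammal"
```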
The independent variable (numberOfAgents) has been varied from 2 to 5 agents. The following measures were computed (see the sketch after this list):
- average precision/recall/fmeasure/accuracy by number of agents
- average precision/recall/fmeasure/accuracy for each targeted decision class
- average precision/recall/fmeasure/accuracy for each test fold
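A sketch of how such averages could be computed from per-game predictions. The `results.csv` file and the column names (`numberOfAgents`, `fold`, `y_true`, `y_pred`) are assumptions for illustration; the actual computation is done by the scripts referenced above.

```python
# Sketch of the aggregation step over hypothetical per-run prediction logs.
import pandas as pd
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def averages(df, by):
    """Average precision/recall/fmeasure/accuracy grouped by column `by`."""
    rows = []
    for key, g in df.groupby(by):
        # macro average over classes; per-class figures ("for each targeted
        # decision class") would use average=None instead
        p, r, f, _ = precision_recall_fscore_support(
            g.y_true, g.y_pred, average="macro", zero_division=0)
        rows.append({by: key, "precision": p, "recall": r, "fmeasure": f,
                     "accuracy": accuracy_score(g.y_true, g.y_pred)})
    return pd.DataFrame(rows)

results = pd.read_csv("results.csv")          # hypothetical file
print(averages(results, "numberOfAgents"))    # by number of agents
print(averages(results, "fold"))              # for each test fold
```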
Agents mostly started with already high precision/recall/accuracy: from a few examples of the zoology dataset, they are able to generalise and perform well on the rest of the dataset, so there was not much room for improvement.
Results are on par with those of A-MAIL*, a coordinated inductive learning approach, which obtained the following results.
(The comparison is solely indicative, since the two approaches do not consider the same information.)
on the training set:
Number of agents | Precision | F-measure | Recall | Accuracy |
---|---|---|---|---|
2 | 1.00 | 0.87 | 0.77 | 0.988 |
3 | 1.00 | 0.95 | 0.91 | 0.997 |
4 | 0.99 | 0.96 | 0.93 | 0.992 |
5 | 1.00 | 0.97 | 0.95 | 0.997 |
on the test set:
Number of agents | Precision | F-measure | Recall | Accuracy |
---|---|---|---|---|
2 | 0.97 | 0.85 | 0.75 | 0.950 |
3 | 0.98 | 0.89 | 0.81 | 0.968 |
4 | 0.97 | 0.90 | 0.84 | 0.966 |
5 | 0.98 | 0.93 | 0.88 | 0.980 |
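As a quick consistency check (the formula, not a result from the report): the F-measure columns agree with the harmonic mean F = 2PR/(P+R) of the precision and recall columns, for instance on the 2-agent test-set row.

```python
# Harmonic-mean check for the 2-agent test-set row of the table above.
p, r = 0.97, 0.75
print(round(2 * p * r / (p + r), 2))   # 0.85, matching the F-measure column
```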
* Santiago Ontañón and Enric Plaza. 2015. Coordinated Inductive Learning Using Argumentation-Based Communication. Autonomous Agents and Multi-Agent Systems 29, 2 (2015), 266–304.