20230110-MTOA
Date: 2023-01-10 (Andreas Kalaitzakis)
Agents will benefit from undertaking a limited set of tasks.
18 agents; 3 tasks; 6 features (2 independent features per task); 4 decision classes; 20 runs; 80000 games
Each agent initially trains on all tasks. The agent then carries out a limited set of tasks. When two agents disagree, the following takes place:
(a) The agent with the lower income will adapt its knowledge accordingly. If the memory capacity limit is reached, the agent will try to generalize when this is possible. For generalization to be an option, the decision for all undertaken tasks must be the same. Generalization takes place recursively, i.e., after the first generalization, if generalization is possible at the previously higher level, the now-leaf nodes are merged again (see the sketch after this list).
(b) The agent with the higher income will decide for both agents.
Agents undertake 1-3 tasks and have unlimited memory, i.e., enough for learning all tasks.
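The generalization rule in (a) can be illustrated with a short sketch. Assuming agents store their classifiers as trees whose leaves carry one decision per undertaken task (the Node structure and function names below are illustrative assumptions, not the actual implementation), recursive generalization merges sibling leaves that fully agree and then retries at the previously higher level:

# Illustrative sketch of recursive generalization (not the actual implementation).
class Node:
    """A classifier node; a node with no children is a leaf carrying decisions."""
    def __init__(self, decisions=None, children=None):
        self.decisions = decisions or {}   # task -> decision (meaningful for leaves)
        self.children = children or []

    def is_leaf(self):
        return not self.children

def can_generalize(node):
    # Generalization is an option only if every child is a leaf and all leaves
    # take the same decision for every undertaken task.
    if not node.children or not all(c.is_leaf() for c in node.children):
        return False
    signatures = {tuple(sorted(c.decisions.items())) for c in node.children}
    return len(signatures) == 1

def generalize(node):
    # Recurse first, so that freshly merged leaves can be merged again
    # at the previously higher level.
    for child in node.children:
        generalize(child)
    if can_generalize(node):
        node.decisions = dict(node.children[0].decisions)
        node.children = []   # the node itself becomes a leaf

# Example: two leaves agreeing on both undertaken tasks are merged into their parent.
root = Node(children=[Node({"t1": "A", "t2": "B"}), Node({"t1": "A", "t2": "B"})])
generalize(root)
assert root.is_leaf() and root.decisions == {"t1": "A", "t2": "B"}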
Variables
independent variables: ['maxAdaptingRank']
dependent variables: ['avg_min_accuracy', 'avg_accuracy', 'avg_max_accuracy', 'success_rate', 'correct_decision_rate', 'delegation_rate']
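For reference, the setup described above can be summarized as follows (an illustrative summary of the parameters listed in this section, not the content of params.sh):

# Illustrative summary of the experimental setup (not the content of params.sh).
EXPERIMENT = {
    "agents": 18,
    "tasks": 3,
    "features": 6,                  # 2 independent features per task
    "decision_classes": 4,
    "runs": 20,
    "games": 80000,
    "memory": "unlimited",          # enough for learning all tasks
    "undertaken_tasks": (1, 2, 3),  # agents undertake 1-3 tasks
}
independent_variables = ["maxAdaptingRank"]
dependent_variables = [
    "avg_min_accuracy", "avg_accuracy", "avg_max_accuracy",
    "success_rate", "correct_decision_rate", "delegation_rate",
]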
Date: 2023-01-01 (Andreas Kalaitzakis)
Computer: Dell Precision-5540 (CC: 12 * Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz with 16GB RAM OS: Linux 5.4.0-92-generic)
Duration: 120 minutes
Lazy lavender hash: ceb1c5d1ca8109373d293b687fc55953fce5241d
Parameter file: params.sh
Executed command (script.sh):
Table 1 consists of the final achieved average success rate values, i.e., the average success rate after the last iteration. Each column corresponds to a different number of adapting tasks, while each row corresponds to a different run, for the same size of scope.
Table 2 consists of the final average minimum accuracy values with respect to the worst task, i.e., the accuracy after the last iteration for the task on which the agent scores the lowest accuracy. Each column corresponds to a different number of undertaken tasks, while each row corresponds to a different run, for the same size of scope.
Table 3 consists of the final achieved average ontology accuracy with respect to all tasks, i.e., the accuracy after the last iteration averaged on all tasks and agents. Each column corresponds to a different number of adapting tasks, while each row corresponds to a different run, for the same size of scope.
Table 4 consists of the final average accuracy values with respect to the best task, i.e., the accuracy after the last iteration for the task on which the agent scores the highest accuracy. Each column corresponds to a different number of undertaken tasks, while each row corresponds to a different run, for the same size of scope.
Table 5 consists of the final average scope accuracy values. Each column corresponds to a different number of undertaken tasks, while each row corresponds to a different run, for the same number of undertaken tasks.
Table 6 consists of the final achieved correct decision rate values, i.e., the average correct decision rate after the last iteration. Each column corresponds to a different number of undertaken tasks, while each row corresponds to a different run, for the same size of scope.
Table 7 consists of the final achieved delegation rate values, i.e., the average delegation rate after the last iteration. Each column corresponds to a different number of undertaken tasks, while each row corresponds to a different run, for the same number of undertaken tasks.
Table 8 consists of the final achieved total compensation values, i.e., the average total compensation of a population of agents after the last iteration. Each column corresponds to a different number of undertaken tasks, while each row corresponds to a different run, for the same number of undertaken tasks.
Table 9 consists of the final P90/10 decile ratio of average agent compensation values, after the last iteration. Each column corresponds to a different number of undertaken tasks, while each row corresponds to a different run, for the same number of undertaken tasks.
Table 10 consists of the final P90/10 decile ratio of average scope accuracy values, after the last iteration. Each column corresponds to a different number of undertaken tasks, while each row corresponds to a different run, for the same number of undertaken tasks.
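The P90/10 decile ratio reported in Tables 9 and 10 is the 90th percentile of the per-agent values divided by their 10th percentile. A minimal way to compute it (illustrative; assumes one final value per agent):

import numpy as np

def p90_p10_ratio(values):
    """P90/10 decile ratio: 90th percentile divided by 10th percentile."""
    values = np.asarray(values, dtype=float)
    return np.percentile(values, 90) / np.percentile(values, 10)

# Example with hypothetical per-agent compensations (18 agents):
rng = np.random.default_rng(0)
compensations = rng.uniform(50, 150, size=18)
print(p90_p10_ratio(compensations))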
Figure 1 displays the evolution of the average success rate (y-axis) as the number of iterations increases (x-axis), depending on the maximum adapting rank |maxAdaptingRank| = {0,1,2,3}.
This figure shows that a population of interacting agents will learn to agree on all their decisions, regardless of the size of their scope. This is justified by the fact that the agents represented here do not face any limitations. Therefore, they will gradually adopt the same properties in their ontologies, even if their leaf classes do not coincide. These observations confirm the results that were presented previously by Bourahla. Moreover, they indicate that the size of the scope has an impact on the success rate obtained. The larger the scope size, the more interactions are needed to agree on everything, thus the lower the average success rate at convergence.
Figure 2 portrays the evolution of the average worst-task accuracy (y-axis), depending on the number of carried out tasks. Each point (x, y) corresponds to the average minimum accuracy over all tackled tasks at the n^th interaction of each run.
Figure 3 portrays the evolution of the average accuracy (y-axis), depending on the number of carried out tasks. Each point (x, y) corresponds to the average accuracy over all tasks existing in the environment at the n^th interaction of each run.
We assumed that agents interacting on a limited scope of tasks would be more accurate on some tasks than on others, at the expense of their average accuracy. However, we do not observe this here: performing additional tasks significantly improves the average accuracy of agents.
Figure 4 portrays the evolution of the average maximum accuracy (y-axis), depending on the number of carried out tasks. Each point (x, y) corresponds to the average maximum accuracy over all existing tasks at the n^th interaction of each run.
Two observations can be drawn. The first is that the average accuracy is always lower than the average accuracy on the best task of the same agents shown here. These agents are therefore able to specialize by restricting the scope of their tasks. The second is that agents tackling fewer tasks have a lower average accuracy on their best task than agents tackling all tasks. More precisely, the smaller the scope of the agents, the lower this accuracy is. Thus, we can conclude that specializing agents with unlimited memory does not provide any advantage in terms of accuracy. In the examined configuration, each task depends on different properties. Therefore, our observation is not related to the transferability of knowledge from one task to another, since learning the decision with respect to one task is not related to learning the decision for another task. On the contrary, it is explained by the fact that agents tackling all tasks build more complete ontologies, and therefore associate the learned decisions with a more detailed classification.
Figure 5 portrays the evolution of the average scope accuracy (y-axis), depending on the number of carried out tasks. Each point (x, y) corresponds to the average accuracy over all tackled tasks at the n^th interaction of each run.
Figure 6 displays the evolution of the average correct decision rate (y-axis) as the number of iterations increases (x-axis), depending on the maximum adapting rank |maxAdaptingRank| = {0,1,2}.
Figure 7 displays the evolution of the average delegation rate (y-axis) as the number of iterations increases (x-axis), depending on the maximum adapting rank |maxAdaptingRank| = {0,1,2}.
We perform a one-way ANOVA, testing whether the independent variable 'maxAdaptingRank' has a statistically significant effect on the different dependent variables.
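A minimal sketch of this test with SciPy, assuming the final per-run values are stored in a table with one row per run and columns for 'maxAdaptingRank' and each dependent variable (the file name and layout are assumptions, not the actual analysis script):

# Illustrative one-way ANOVA on a dependent variable grouped by maxAdaptingRank.
import pandas as pd
from scipy import stats

# Hypothetical layout: one row per run, columns 'maxAdaptingRank' and 'avg_accuracy'.
df = pd.read_csv("results.csv")

groups = [g["avg_accuracy"].values for _, g in df.groupby("maxAdaptingRank")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")  # p < 0.05 -> significant effect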