20231120-MTOA
Date: 2023-11-20 (Andreas Kalaitzakis)
Favoring the reproduction of agents with the lowest success rate will improve the maximum accuracy and re-balance the task representation.
36 agents; 3 tasks; 6 features (2 independent features per task); 4 decision classes; 10 runs; 80000 games; 20000 games per generation
Each agent initially trains on all tasks and then carries out one. When the agents disagree, the following takes place:
(a) The agent with the lower score adapts its knowledge accordingly. If the memory capacity limit is reached, the agent tries to forget knowledge (parent nodes whose descendants carry the same decisions are removed).
(b) The agent with the higher score decides for both agents. If this agent's decision is correct, it receives the points corresponding to both agents.
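The two rules above can be sketched as follows. This is an illustrative reconstruction, not the experiment's actual code: the `Agent` class, the `decision_of` callback and the two-point reward are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    score: float
    knowledge: set = field(default_factory=set)

def resolve_disagreement(a, b, correct_decision, decision_of):
    """Apply rules (a) and (b): the higher-scoring agent decides for both,
    and the lower-scoring agent adapts. `decision_of` maps an agent to its
    decision for the current game."""
    winner, loser = (a, b) if a.score >= b.score else (b, a)
    # (a) the lower-scoring agent adapts its knowledge to the winner's decision
    loser.knowledge.add(decision_of(winner))
    # (b) if the winner's decision is correct, it collects both agents' points
    if decision_of(winner) == correct_decision:
        winner.score += 2  # one point per agent
    return winner
```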
Each agent undertakes 1 task while having limited memory, enough for learning 3 tasks accurately (12 classes).
When this limit is reached, the agent tries to forget knowledge in favor of the undertaken task.
This is possible when two leaf nodes with the same parent carry the same decision for the task the agent undertakes.
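The forgetting rule amounts to collapsing a parent whose sibling leaves agree. A minimal sketch, assuming a binary decision tree over the agent's classes (the `Node` structure is an assumption for illustration):

```python
class Node:
    """Binary decision-tree node; `decision` is set on leaves only."""
    def __init__(self, decision=None, left=None, right=None):
        self.decision = decision
        self.left = left
        self.right = right

    def is_leaf(self):
        return self.left is None and self.right is None

def forget(node):
    """Recursively collapse parents whose two leaf children carry the
    same decision for the undertaken task, freeing memory."""
    if node is None or node.is_leaf():
        return node
    node.left = forget(node.left)
    node.right = forget(node.right)
    if (node.left.is_leaf() and node.right.is_leaf()
            and node.left.decision == node.right.decision):
        return Node(decision=node.left.decision)  # merged leaf
    return node
```

Collapsing only same-decision siblings keeps the tree's behavior on the undertaken task unchanged while reducing the number of stored classes.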
Independent variables: ['birthCriteria','birthSortingOrder','knowledgeLimit','parentSelectionPolicy']
dependent variables: ['avg_min_accuracy', 'avg_accuracy', 'avg_max_accuracy', 'success_rate', 'correct_decision_rate', 'delegation_rate', 'accuracy_relative_standard_deviation', 'total_points', 'points_distribution', 'games_relative_standard_deviation']
Date: 2023-01-01 (Andreas Kalaitzakis)
Computer: Dell Precision-5540 (CC: 12 * Intel(R) Core(TM) i7-9850H CPU @ 2.60GHz with 16GB RAM OS: Linux 5.4.0-92-generic)
Duration : 1080 minutes
Lazy lavender hash: ceb1c5d1ca8109373d293b687fc55953fce5241d
Parameter file: params.sh
Executed command (script.sh):
Table 1 consists of the final achieved average success rate values, i.e., the average success rate after the last iteration.
Table 2 consists of the final average minimum accuracy values with respect to the worst task, i.e., the accuracy after the last iteration for the task for which the agent scores the lowest accuracies.
Table 3 consists of the final achieved average ontology accuracy with respect to all tasks, i.e., the accuracy after the last iteration averaged on all tasks and agents.
Table 4 consists of the final average best task accuracy values, i.e., the accuracy after the last iteration for the task for which the agent scores the highest accuracies.
Table 5 consists of the final average scope accuracy values.
Table 6 consists of the final achieved correct decision rate values, i.e., the average correct decision rate after the last iteration.
Figure 1 displays the evolution of the average success rate (y-axis) as the number of iterations increases (x-axis), depending on the (a) birth criteria, (b) birth sorting order, (c) knowledge limit and (d) parent selection policy. Two things are shown. First, favoring the reproduction of agents that are less successful or have accumulated fewer points leads to a significantly lower success rate. This is to be expected given the pressure exerted during selection. Second, the subfigures show that the success rate stabilizes but does not converge to 1. This indicates that the agents' final ontologies do not allow them to agree on all decisions.
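One possible reading of the birth criteria and sorting order is sketched below. The field names and the ranking-based policy are assumptions for illustration, not the experiment's implementation:

```python
def select_parents(agents, n, birth_sorting_order="ascending"):
    """Return the n agents favored for reproduction.

    With "ascending" order, the lowest-success-rate agents reproduce first
    (the policy hypothesized to re-balance task representation); with
    "descending", the most successful agents do.
    """
    reverse = birth_sorting_order == "descending"
    return sorted(agents, key=lambda a: a["success_rate"], reverse=reverse)[:n]
```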
Figure 2 displays the evolution of the average minimum accuracy (y-axis) as the number of iterations increases (x-axis), depending on the (a) birth criteria, (b) birth sorting order, (c) knowledge limit and (d) parent selection policy. It is shown that (a), (b) and (d) have no statistically significant impact on the observed average minimum accuracy.
Figure 3 displays the evolution of the average accuracy over all tasks (y-axis) as the number of iterations increases (x-axis), depending on the (a) birth criteria, (b) birth sorting order, (c) knowledge limit and (d) parent selection policy. It is shown that when agents' memory consists of 12 classes, favoring the reproduction of agents that are less successful or have accumulated fewer points leads to a significantly lower average accuracy for a given knowledge limit.
Figure 4 displays the evolution of the average maximum accuracy over all agents (y-axis) as the number of iterations increases (x-axis), depending on the (a) birth criteria, (b) birth sorting order, (c) knowledge limit and (d) parent selection policy. It is shown that when agents' memory consists of 12 classes, favoring the reproduction of agents that are less successful or have accumulated fewer points leads to a significantly lower average maximum accuracy for a given knowledge limit.
Figure 5 displays the evolution of the average correct decision rate (y-axis) as the number of iterations increases (x-axis), depending on the (a) birth criteria, (b) birth sorting order, (c) knowledge limit and (d) parent selection policy. It is shown that when agents' memory consists of 12 classes, favoring the reproduction of agents that are less successful or have accumulated fewer points leads to a significantly lower correct decision rate for a given knowledge limit.
Figure 6 displays the evolution of the average accumulated points by all agents (y-axis) as the number of iterations increases (x-axis), depending on the (a) birth criteria, (b) birth sorting order, (c) knowledge limit and (d) parent selection policy. It is shown that when agents' memory consists of 12 classes and the reproduction of agents that have accumulated fewer points is favored, agent societies collectively accumulate fewer points.
Figure 7 displays the evolution of the distribution of the accumulated points among agents (y-axis) as the number of iterations increases (x-axis), depending on the (a) birth criteria, (b) birth sorting order, (c) knowledge limit and (d) parent selection policy. Several things are shown. First, within each generation, points equitability decreases with iterations. Second, points are more equitably distributed generation after generation. Third, after each new generation, the difference in final distribution values for different (a), (b) and (d) increases. Fourth, while the reproduction of less successful agents (in terms of points and communication) favors the relative standard deviation of the different tasks' accuracy, it leads to less equitable point distributions.
Figure 8 displays the evolution of the relative standard deviation for the average accuracy of each task over all agents (y-axis) as the number of iterations increases (x-axis), depending on the (a) birth criteria, (b) birth sorting order, (c) knowledge limit and (d) parent selection policy. Two regimes are shown. When the reproduction of successful agents is favored, the relative standard deviation shows a slow increase. Contrariwise, when the reproduction of the less successful agents is favored, the standard deviation reaches a local maximum, then decreases until it stabilizes.
Figure 9 displays the evolution of the games relative standard deviation (y-axis) as the number of iterations increases (x-axis), depending on the (a) birth criteria, (b) birth sorting order, (c) knowledge limit and (d) parent selection policy. The following are shown. First, when the reproduction of less successful agents is favored, there is a local low after each generation. The latter is not observed when the reproduction of the most successful agents is favored; in this case, the games relative standard deviation increases more steeply after each generation. Second, for all (a), (b) and (d), after a finite number of generations, no local lows are observed. This agrees with the evolution of the accuracy relative standard deviation, which is monotonically increasing generation after generation for all combinations of the independent variables.
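The relative standard deviation plotted in Figures 8 and 9 is the coefficient of variation; a population (rather than sample) standard deviation is assumed here:

```python
from statistics import mean, pstdev

def relative_std_dev(values):
    """Relative standard deviation in percent: 100 * std / mean."""
    m = mean(values)
    return 100.0 * pstdev(values) / m
```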
Inevitably, given enough generations, all agents will carry out the same task. However, the maximum accuracy will be lower the more iterations this convergence requires.