
The analysis and performance evaluation of the pheromone-Q-learning algorithm.

Authors :
Monekosso, N.
Remagnino, P.
Source :
Expert Systems; May 2004, Vol. 21 Issue 2, p80-91, 12p, 1 Diagram, 4 Charts, 11 Graphs
Publication Year :
2004

Abstract

The paper presents the pheromone-Q-learning (Phe-Q) algorithm, a variation of Q-learning. The technique was developed to allow agents to communicate and jointly learn to solve a problem. Phe-Q learning combines the standard Q-learning technique with a synthetic pheromone that acts as a communication medium, speeding up the learning process of cooperating agents. The Phe-Q update equation includes a belief factor that reflects the confidence an agent has in the pheromone (the communication medium) deposited in the environment by other agents. With the Phe-Q update equation, the speed of convergence towards an optimal solution depends on a number of parameters, including the number of agents solving a problem, the amount of pheromone deposited, its diffusion into neighbouring cells and the evaporation rate. The main objective of this paper is to describe and evaluate the performance of the Phe-Q algorithm. The paper demonstrates the improved performance of cooperating Phe-Q agents over non-cooperating agents. The paper also shows how Phe-Q learning can be improved by optimizing all the parameters that control the use of the synthetic pheromone. [ABSTRACT FROM AUTHOR]
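The abstract does not give the update equation itself, so the following is only a minimal sketch of the ideas it describes: tabular Q-learning whose bootstrap target is biased by a pheromone-based belief term, plus deposit, evaporation and diffusion dynamics for a synthetic pheromone field. All names and parameter values (alpha, gamma, xi, deposit, evaporation, diffusion) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class PheQAgent:
    """Tabular Q-learning agent whose target adds a pheromone-based belief term.
    A sketch under assumed definitions, not the authors' implementation."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, xi=0.5):
        self.Q = np.zeros((n_states, n_actions))
        self.alpha = alpha  # learning rate
        self.gamma = gamma  # discount factor
        self.xi = xi        # belief factor: confidence in other agents' pheromone

    def update(self, s, a, r, s_next, belief):
        """belief[a2] is the (normalised) pheromone concentration the agent would
        sense by taking action a2 from s_next, as supplied by the environment."""
        target = r + self.gamma * np.max(self.Q[s_next] + self.xi * belief)
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])


def pheromone_step(field, visits, deposit=1.0, evaporation=0.05, diffusion=0.1):
    """One synchronous update of a 2-D synthetic pheromone field:
    deposit where agents visited, evaporate everywhere, then diffuse a fraction
    of each cell's pheromone into its four neighbouring cells."""
    field = field + deposit * visits      # deposit by visiting agents
    field = (1.0 - evaporation) * field   # evaporation
    neighbours = (np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0) +
                  np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1)) / 4.0
    # np.roll wraps at the borders; a bounded grid would need explicit boundary handling.
    return (1.0 - diffusion) * field + diffusion * neighbours
```

Setting xi to zero recovers plain Q-learning, which illustrates the paper's comparison between cooperating Phe-Q agents and non-cooperating agents; the deposit, evaporation and diffusion rates are the tunable parameters the abstract says govern the speed of convergence.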

Details

Language :
English
ISSN :
0266-4720
Volume :
21
Issue :
2
Database :
Complementary Index
Journal :
Expert Systems
Publication Type :
Academic Journal
Accession Number :
13030671
Full Text :
https://doi.org/10.1111/j.1468-0394.2004.00265.x