Parhizkar Abyaneh, Elham, Zilles, Sandra, Gerhard, David, Hamilton, Howard, Frankland, Martin, and Vassileva, Julita
A Thesis Submitted to the Faculty of Graduate Studies and Research In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Computer Science, University of Regina. xi, 110 p.

In systems with multiple potentially deceptive agents, any single agent may have to assess the trustworthiness of other agents in order to decide with which agents to interact. To evaluate the trustworthiness of an agent in a multi-agent system, one often combines two types of trust information: direct trust information derived from one's own interactions with that agent, and indirect trust information based on advice from other agents. Since the advisors themselves may be deceptive or unreliable, agents need a mechanism to assess and properly incorporate advice. In this thesis, we evaluate existing state-of-the-art methods for computing indirect trust in numerous simulations, demonstrating that the best-performing ones tend to have prohibitively high complexity. We propose a new, easy-to-implement method for computing indirect trust, based on a simple prediction-with-expert-advice strategy of the kind often used in online learning. This method either competes with or outperforms all tested systems in the vast majority of the settings we simulated, while scaling substantially better. Our results demonstrate that existing systems for computing indirect trust are overly complex; the problem can be solved much more efficiently than the literature suggests.

We also provide the first systematic study of when it is beneficial to combine the two types of trust as opposed to relying on only one of them. Our large-scale experimental study shows that strong methods for computing indirect trust make direct trust redundant in a surprisingly wide variety of scenarios. Further, we propose a new method for combining the two trust types that, in the remaining scenarios, outperforms those known from the literature.
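A generic prediction-with-expert-advice scheme of the kind the abstract alludes to can be sketched as an exponentially weighted forecaster over advisors. This is a minimal illustration of the general technique, not the thesis's actual algorithm; the function names, the learning rate `eta`, and the loss `|report - outcome|` are all illustrative assumptions.

```python
import math

def aggregate_advice(reports, weights):
    """Indirect trust estimate: weighted average of advisors' trust
    reports (each in [0, 1])."""
    return sum(w * r for w, r in zip(weights, reports)) / sum(weights)

def update_weights(weights, reports, outcome, eta=0.5):
    """Exponentially down-weight advisors whose report deviated from the
    observed outcome (1 = satisfactory interaction, 0 = not).
    eta is an illustrative learning rate."""
    return [w * math.exp(-eta * abs(r - outcome))
            for w, r in zip(weights, reports)]

# Example: the third advisor consistently lies about a well-behaved
# agent, so its weight shrinks round after round.
weights = [1.0, 1.0, 1.0]
for _ in range(10):
    reports = [0.9, 0.8, 0.1]  # third advisor reports low trust
    outcome = 1                # the trustee actually behaves well
    weights = update_weights(weights, reports, outcome)
```

After ten rounds the lying advisor's weight has decayed to exp(-4.5) of its initial value, so its reports contribute almost nothing to `aggregate_advice`; this self-correcting weighting is what lets such strategies cope with deceptive advisors without an explicit advisor model.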
Evaluating the trustworthiness of agents is particularly difficult if the agents change their behavior dynamically. The literature proposes Hidden Markov Models (HMMs) as the best solution to this problem, compared to standard Beta Reputation Systems (BRS) equipped with a simple decay mechanism that discounts older interactions. We instead propose using Page-Hinkley statistics within BRS to detect, and then dismiss, agents whose behavior worsens. Our experimental study demonstrates that our method outperforms HMMs and, in the vast majority of tested settings, either outperforms or is on par with other typically used BRS-type methods.
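The combination described above can be sketched as a standard Beta Reputation System whose binary interaction outcomes also feed a Page-Hinkley test for a drop in the outcome mean. This is a hedged illustration of the two ingredients, not the thesis's exact method; the tolerance `delta`, the threshold `lam`, and the flag-and-dismiss policy are assumed parameters.

```python
class BRSWithPageHinkley:
    """Beta Reputation System plus a Page-Hinkley detector that flags
    an agent whose interaction outcomes deteriorate."""

    def __init__(self, delta=0.05, lam=2.0):
        self.alpha = 1.0   # Beta prior pseudo-count of successes
        self.beta = 1.0    # Beta prior pseudo-count of failures
        self.delta = delta # tolerated magnitude of change
        self.lam = lam     # alarm threshold
        self.n, self.mean = 0, 0.0
        self.cum, self.cum_min = 0.0, 0.0
        self.worsened = False

    def trust(self):
        """Expected trustworthiness under the Beta posterior."""
        return self.alpha / (self.alpha + self.beta)

    def observe(self, outcome):
        """outcome: 1 for a satisfactory interaction, 0 otherwise."""
        self.alpha += outcome
        self.beta += 1 - outcome
        # Page-Hinkley statistic for detecting a decrease in the mean:
        # accumulate (running mean - outcome - delta) and alarm when the
        # cumulative sum rises lam above its historical minimum.
        self.n += 1
        self.mean += (outcome - self.mean) / self.n
        self.cum += self.mean - outcome - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        if self.cum - self.cum_min > self.lam:
            self.worsened = True  # candidate for dismissal

# Example: an agent behaves well for 20 interactions, then turns bad.
brs = BRSWithPageHinkley()
for outcome in [1] * 20 + [0] * 10:
    brs.observe(outcome)
```

With these (assumed) parameters the detector raises the flag after only a few bad interactions, whereas a plain decay-based BRS would need many more observations before the posterior mean reflects the change; that faster reaction is the intuition behind pairing BRS with a change detector.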