
Normative Rule Extraction from Implicit Learning into Explicit Representation

Authors :
Mohd Rashdan Abdul Kadir
Ali Selamat
Ondrej Krejcar
Source :
SoMeT
Publication Year :
2020
Publisher :
IOS Press, 2020.

Abstract

Normative multi-agent research offers an alternative viewpoint on the design of adaptive autonomous agent architectures. Norms specify standards of behavior, such as which actions or states should be achieved or avoided, and norm synthesis is the process of generating useful normative rules. This study proposes a model for extracting normative rules from implicit learning, namely the Q-learning algorithm, into an explicit norm representation, implementing Dynamic Deontics and a Hierarchical Knowledge Base (HKB) to synthesize useful normative rules in the form of weighted state-action pairs with deontic modality. OpenAI Gym is used to simulate the agent environment. The proposed model is able to generate both obligative and prohibitive norms, as well as deliberate on and execute them. Results show that the generated norms are best used as prior knowledge to guide agent behavior and perform poorly if not complemented by another agent coordination mechanism. Performance increases when both obligation and prohibition norms are used, and, in general, norms speed up convergence to the optimal policy.
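The pipeline the abstract describes, implicit value learning followed by explicit norm extraction, can be illustrated with a minimal sketch: tabular Q-learning on a discrete Gym environment, then harvesting high-valued state-action pairs as obligation norms and low-valued pairs as prohibition norms. The environment choice, thresholds, and data structures below are illustrative assumptions and not the paper's Dynamic Deontics or HKB implementation; the classic pre-0.26 Gym step/reset API is assumed.

```python
import numpy as np
import gym  # assumes the classic Gym API (4-tuple step, scalar reset)

env = gym.make("FrozenLake-v0")  # hypothetical choice of discrete environment
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1

# Implicit learning phase: standard tabular Q-learning
for episode in range(5000):
    s = env.reset()
    done = False
    while not done:
        a = env.action_space.sample() if np.random.rand() < eps else int(np.argmax(Q[s]))
        s2, r, done, _ = env.step(a)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

# Explicit norm extraction: each norm is a weighted state-action pair with a
# deontic label. High-valued pairs become obligations ("do a in s"), low-valued
# pairs become prohibitions ("avoid a in s"). Thresholds are assumed, not from
# the paper.
hi_cut, lo_cut = 0.8 * Q.max(), 0.1 * Q.max()
obligations = [("O", s, a, Q[s, a])
               for s in range(Q.shape[0]) for a in range(Q.shape[1])
               if Q[s, a] >= hi_cut]
prohibitions = [("F", s, a, Q[s, a])
                for s in range(Q.shape[0]) for a in range(Q.shape[1])
                if 0 < Q[s, a] <= lo_cut]
```

The extracted pairs can then be used as prior knowledge for a fresh agent, which is the usage the abstract reports as most effective, rather than as a standalone control policy.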

Details

Database :
OpenAIRE
Journal :
SoMeT
Accession number :
edsair.doi...........f6041ac88527d607f66a45d221346cc4
Full Text :
https://doi.org/10.3233/faia200555