Date of Award

2023

Publication Type

Dissertation

Degree Name

Ph.D.

Department

Electrical and Computer Engineering

Supervisor

M. Saif

Keywords

Computational intelligence, Group decision-making, Machine learning, Reinforcement learning, Intelligent systems

Rights

info:eu-repo/semantics/openAccess

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Abstract

The development of intelligent systems is progressing rapidly, thanks to advances in information technology that enable collective, automated, and effective decision-making based on information collected from diverse sources. Group decision-making (GDM) is a key part of intelligent decision-making (IDM), which has received considerable attention in recent years. IDM through GDM refers to a decision-making problem in which a group of intelligent decision-makers (DMs) evaluates a set of alternatives with respect to specific attributes. Intelligent communication among DMs aims to produce an ordering of the available alternatives. However, GDM models developed for IDM must incorporate consensus support models to effectively integrate input from each DM into the final decision.

Many efforts have been made to design consensus models to support IDM, depending on the decision problem or environment. Despite promising results, significant gaps remain in research on the design of such support models. One major drawback of existing consensus models is their dependence on the type of decision environment, making them less generalizable. Moreover, these models are often static and cannot respond to dynamic changes in the decision environment. Another limitation is that consensus models for large-scale decision environments lack an efficient communication regime to enable DM interactions.

To address these challenges, this dissertation proposes developing consensus models to support IDM through GDM. To address the generalization issue of existing consensus models, reinforcement learning (RL) is proposed. RL agents can be built on the Markov decision process to enable IDM, potentially removing the generalization issue of consensus support models. In contrast to most consensus models, which assume static decision environments, this dissertation proposes a computationally efficient dynamic consensus model to support dynamic IDM. Finally, to facilitate secure and efficient interactions among intelligent DMs in large-scale problems, Blockchain technology is proposed to speed up the consensus process. The proposed communication regime also includes trust-building mechanisms that employ Blockchain protocols to remove long-standing and restrictive assumptions on opinion similarity among agents.
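The iterative consensus-reaching process at the heart of GDM can be sketched in a few lines. The following is an illustrative toy model only, not the dissertation's actual algorithm: it assumes mean aggregation of DM scores, a hypothetical agreement threshold, and a simple feedback step that nudges dissenting DMs toward the collective opinion.

```python
import numpy as np

def consensus_gdm(prefs, threshold=0.9, step=0.3, max_rounds=50):
    """Toy consensus-reaching process for group decision-making.

    prefs: (m, n) array of m decision-makers' scores in [0, 1]
    over n alternatives. Returns the collective scores and the
    resulting ranking of alternatives (best first).
    """
    prefs = np.asarray(prefs, dtype=float)
    for _ in range(max_rounds):
        collective = prefs.mean(axis=0)            # aggregate opinions
        # per-DM agreement: 1 - mean absolute deviation from the group
        agreement = 1.0 - np.abs(prefs - collective).mean(axis=1)
        if agreement.mean() >= threshold:          # consensus reached
            break
        # feedback phase: DMs below threshold move toward the collective
        lagging = agreement < threshold
        prefs[lagging] += step * (collective - prefs[lagging])
    ranking = np.argsort(-collective)              # descending by score
    return collective, ranking
```

For example, three DMs scoring four alternatives as `[[0.9, 0.1, 0.5, 0.3], [0.8, 0.2, 0.6, 0.4], [0.7, 0.3, 0.4, 0.2]]` already exceed the 0.9 agreement threshold, so the process terminates immediately and ranks alternative 0 first. The dissertation's contribution replaces this fixed feedback rule with learned (RL-based) and dynamic mechanisms.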
