Reinforcement Learning-Based Feedback and Weight-Adjustment Mechanisms for Consensus Reaching in Group Decision Making

Document Type

Article

Publication Date

4-1-2023

Publication Title

IEEE Transactions on Systems, Man, and Cybernetics: Systems

Volume

53

Issue

4

First Page

2456

Keywords

Consensus models, decision making, decision support systems, deep learning, reinforcement learning, Z-numbers

Last Page

2468

Abstract

The number of discussion rounds and the harmony degree of the decision makers are two crucial efficiency measures in the design of the consensus-reaching process for group decision-making problems. Adjusting the feedback parameter and the importance weights of the decision makers in the recommendation mechanism has a great impact on these measures. This work proposes novel and efficient reinforcement learning-based adjustment mechanisms to address the tradeoff between them. To employ these adjustment mechanisms, we extract the state-transition dynamics from consensus models based on distributed trust functions and Z-numbers, converting the decision environment into a Markov decision process. Two independent reinforcement learning agents are then trained via a deep deterministic policy gradient algorithm to adjust the feedback parameter and the importance weights of the decision makers. The first agent is trained to reduce the number of discussion rounds while ensuring the highest possible harmony degree among the decision makers. The second agent solely speeds up the consensus-reaching process by adjusting the importance weights of the decision makers. Various experiments verify the applicability and scalability of the proposed feedback and weight-adjustment mechanisms in different decision environments.
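
To make the abstract's setup concrete, the sketch below shows one plausible way to cast a consensus-reaching process as a Markov decision process whose action is the feedback parameter. It is a minimal illustration only: the class `ConsensusEnv`, the harmony-based reward shape, and parameters such as `gamma` (consensus threshold) and `lam` (rounds-versus-harmony tradeoff weight) are assumptions for exposition, not the paper's actual environment, which is built on distributed trust functions and Z-numbers.

```python
import numpy as np

class ConsensusEnv:
    """Hypothetical consensus-reaching MDP (illustrative, not the paper's model)."""

    def __init__(self, n_experts=5, n_criteria=3, gamma=0.9, lam=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n, self.m = n_experts, n_criteria
        self.gamma = gamma   # consensus level at which discussion stops
        self.lam = lam       # tradeoff weight: discussion rounds vs. harmony
        self.reset()

    def reset(self):
        # Opinions of each decision maker on each criterion, in [0, 1].
        self.X = self.rng.uniform(size=(self.n, self.m))
        self.w = np.full(self.n, 1.0 / self.n)  # importance weights
        return self._state()

    def _collective(self):
        return self.w @ self.X  # weight-aggregated collective opinion

    def _consensus(self):
        # Consensus level = 1 - mean distance to the collective opinion.
        return 1.0 - np.abs(self.X - self._collective()).mean()

    def _state(self):
        return np.concatenate([self.X.ravel(), self.w, [self._consensus()]])

    def step(self, theta):
        # Action: feedback parameter theta in [0, 1]. Each decision maker
        # moves a fraction theta toward the collective opinion; harmony
        # degrades with the size of the imposed adjustment.
        adjustment = theta * (self._collective() - self.X)
        harmony = 1.0 - np.abs(adjustment).mean()
        self.X += adjustment
        done = self._consensus() >= self.gamma
        # Reward: -1 per extra round, plus a bonus for preserving harmony.
        reward = -1.0 + self.lam * harmony
        return self._state(), reward, done

# Toy rollout with a fixed feedback parameter standing in for the
# (not shown) DDPG actor network:
env = ConsensusEnv()
state, done, rounds = env.reset(), False, 0
while not done:
    state, reward, done = env.step(theta=0.3)
    rounds += 1
print(f"consensus reached after {rounds} rounds")
```

In this framing, the first DDPG agent would map the state to `theta`, trading rounds against harmony through the reward, while the second agent described in the abstract would instead act on the importance weights `w`; both mappings are left abstract here since the paper's reward and transition definitions are not reproduced in this record.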

DOI

10.1109/TSMC.2022.3214221

ISSN

2168-2216

E-ISSN

2168-2232
