Dynamic multi-objective optimisation using deep reinforcement learning: benchmark, algorithm and an application to identify vulnerable zones based on water quality

Hasan, Md Mahmudul and Lwin, Khin and Imani, Maryam and Shabut, Antesar M. and Bittencourt, Luiz F. and Hossain, Mohammed Alamgir (2019) Dynamic multi-objective optimisation using deep reinforcement learning: benchmark, algorithm and an application to identify vulnerable zones based on water quality. Engineering Applications of Artificial Intelligence, 86. pp. 107-135. ISSN 0952-1976

Accepted Version
Restricted to Repository staff only until 6 September 2020.
Available under the following license: Creative Commons Attribution Non-commercial No Derivatives.

Official URL: https://doi.org/10.1016/j.engappai.2019.08.014

Abstract

The dynamic multi-objective optimisation problem (DMOP) poses a great challenge to the reinforcement learning (RL) research area because of its dynamic nature: the objective functions, constraints and problem parameters may change over time. This study aims to identify what is lacking in existing benchmarks for multi-objective optimisation in dynamic environments under RL settings. Hence, a dynamic multi-objective testbed has been created as a modified version of the conventional deep-sea treasure (DST) hunt testbed. The modified testbed captures the changing aspects of a dynamic environment, with changes that occur over time. To the authors’ knowledge, this is the first dynamic multi-objective testbed for RL research, especially for deep reinforcement learning. In addition, a generic algorithm is proposed to solve the multi-objective optimisation problem in a dynamic constrained environment; it maintains equilibrium by mapping the different objectives simultaneously to provide the best compromise solution, one that lies close to the true Pareto front (PF). As a proof of concept, the developed algorithm has been implemented to build an expert system for a real-world scenario, using a Markov decision process to identify vulnerable zones based on water quality resilience in São Paulo, Brazil. The outcome of the implementation reveals that the proposed parity-Q deep Q network (PQDQN) algorithm is an efficient way to optimise decisions in a dynamic environment. Moreover, the results show that the PQDQN algorithm performs better than other state-of-the-art solutions in both the simulated and the real-world scenarios.
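The conventional DST benchmark is a small grid world in which a submarine agent trades off two conflicting objectives: the value of the treasure it collects and the time spent reaching it. The sketch below illustrates, under assumed names, grid layout and change schedule, one way such an environment can be made dynamic by letting the treasure values drift with the time step, so that the Pareto front itself changes over time. It is a minimal illustration, not the paper's exact testbed or the PQDQN algorithm.

```python
# Minimal sketch of a dynamic multi-objective deep-sea treasure (DST) style
# environment. The grid layout, treasure values and change schedule are
# illustrative placeholders, not the paper's exact benchmark.
import numpy as np


class DynamicDST:
    """Grid world with a vector reward: (treasure value, time penalty).

    Treasure values drift every `change_period` steps, which makes the
    Pareto front time-dependent (the 'dynamic' aspect of the benchmark).
    """

    def __init__(self, change_period=50, seed=0):
        self.rng = np.random.default_rng(seed)
        self.rows, self.cols = 4, 5
        # Treasure locations and their initial values along the sea floor.
        self.treasures = {(1, 0): 1.0, (2, 1): 3.0, (3, 2): 5.0,
                          (3, 3): 8.0, (3, 4): 16.0}
        self.change_period = change_period
        self.t = 0            # global time step driving the dynamics
        self.pos = (0, 0)     # submarine starts at the surface, left corner

    def _maybe_change(self):
        # Every `change_period` steps, perturb treasure values so the
        # environment (and hence the Pareto front) changes over time.
        if self.t > 0 and self.t % self.change_period == 0:
            for loc in self.treasures:
                self.treasures[loc] *= self.rng.uniform(0.8, 1.2)

    def reset(self):
        self.pos = (0, 0)
        return self.pos

    def step(self, action):
        """action: 0=up, 1=down, 2=left, 3=right."""
        self.t += 1
        self._maybe_change()
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
        r = min(max(self.pos[0] + dr, 0), self.rows - 1)
        c = min(max(self.pos[1] + dc, 0), self.cols - 1)
        self.pos = (r, c)
        treasure = self.treasures.get(self.pos, 0.0)
        reward = np.array([treasure, -1.0])   # (treasure value, time penalty)
        done = treasure > 0.0
        return self.pos, reward, done
```

An RL agent can scalarise this vector-valued reward (for example with a weighted sum) to learn a policy that approximates a compromise solution close to the Pareto front; the meta-policy selection used by PQDQN is more involved than this simple scheme.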

Item Type: Journal Article
Keywords: Dynamic environment, reinforcement learning, deep Q network, water quality resilience, meta-policy selection, artificial intelligence
Faculty: Faculty of Science & Engineering
SWORD Depositor: Symplectic User
Depositing User: Symplectic User
Date Deposited: 11 Sep 2019 09:05
Last Modified: 14 Nov 2019 16:07
URI: http://arro.anglia.ac.uk/id/eprint/704727
