Using deep reinforcement learning to search reachability properties in systems specified through graph transformation

Mehrabi, Mohammad Javad and Rafe, Vahid. 2022. Using deep reinforcement learning to search reachability properties in systems specified through graph transformation. Soft Computing, 26(18), pp. 9635-9663. ISSN 1432-7643 [Article]

2-Final.pdf - Accepted Version

Abstract or Description

Today, model checking is one of the essential techniques for verifying software systems. It can check properties such as reachability, in which the entire state space is searched for a desired state. However, model checking may suffer from the state space explosion problem, in which exponential resource usage prevents all states from being generated. Although the results of recent model checking approaches are promising, there is still room for improvement in accuracy and in the number of explored states. In this paper, using deep reinforcement learning and two neural networks, we propose an approach that increases the accuracy of the generated witnesses and reduces the use of hardware resources. In this approach, an agent first explores the state space without any prior knowledge and gradually learns to distinguish proper from improper actions by receiving rewards and penalties from the environment as it works toward the goal. Once the dataset is filled with the agent's experiences, two neural networks evaluate the quality of each operation in each state, and the best action is then selected. The significant difficulties and challenges in the implementation are encoding the states, feature engineering, feature selection, reward engineering, handling invalid actions, and configuring the neural networks. Finally, the proposed approach has been implemented in the Groove toolset and, in most of the case studies, it overcame the problem of state space explosion. This approach also outperforms existing solutions in generating shorter witnesses and exploring fewer states. On average, the proposed approach is nearly 400% better than other approaches at exploring fewer states and 300% better at generating shorter witnesses; it is also, on average, 37% more accurate than the others at finding the goal state.
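The exploration loop described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it replaces the paper's two neural networks with a plain Q-value table, and the toy chain-shaped state space, reward values, and class name are all assumptions for illustration. What it does show is the described cycle of exploring without prior knowledge, collecting experiences into a dataset (replay buffer), masking invalid actions, and selecting the best action by estimated value.

```python
import random

# Toy sketch (not the paper's implementation): an epsilon-greedy agent
# searches a small state space for a goal state. Experiences are stored
# in a replay buffer, invalid actions are masked before selection, and
# a Q-value table stands in for the paper's two neural networks.

class ReachabilityAgent:
    def __init__(self, actions, epsilon=0.2, alpha=0.5, gamma=0.9):
        self.actions = actions
        self.epsilon = epsilon   # exploration rate
        self.alpha = alpha       # learning rate
        self.gamma = gamma       # discount factor
        self.q = {}              # (state, action) -> estimated value
        self.replay = []         # dataset of agent experiences

    def valid_actions(self, state, is_valid):
        # Handling invalid actions: mask out moves not applicable in this state.
        return [a for a in self.actions if is_valid(state, a)]

    def act(self, state, is_valid):
        valid = self.valid_actions(state, is_valid)
        if random.random() < self.epsilon:
            return random.choice(valid)  # explore
        # Exploit: pick the action with the highest estimated quality.
        return max(valid, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, next_state, is_valid):
        self.replay.append((state, action, reward, next_state))
        future = max((self.q.get((next_state, a), 0.0)
                      for a in self.valid_actions(next_state, is_valid)),
                     default=0.0)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * future - old)

# Assumed toy environment: states 0..4 on a line, goal state is 4,
# actions move one step left or right.
random.seed(0)
agent = ReachabilityAgent(actions=[-1, +1])
is_valid = lambda s, a: 0 <= s + a <= 4

for episode in range(200):
    s = 0
    for _ in range(20):
        a = agent.act(s, is_valid)
        s2 = s + a
        # Reward engineering: a reward at the goal, a small step penalty
        # elsewhere, which steers the agent toward shorter witnesses.
        r = 1.0 if s2 == 4 else -0.05
        agent.learn(s, a, r, s2, is_valid)
        s = s2
        if s == 4:
            break
```

After training, a greedy rollout (epsilon set to 0) walks the shortest path 0, 1, 2, 3, 4, which corresponds to the witness for the reachability property in this toy space.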

Item Type:

Article

Identification Number (DOI):

https://doi.org/10.1007/s00500-022-06815-4

Additional Information:

“This version of the article has been accepted for publication, after peer review (when applicable) but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1007/s00500-022-06815-4. Use of this Accepted Version is subject to the publisher’s Accepted Manuscript terms of use https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms”

Data Access Statement:

Enquiries about data availability should be directed to the authors.

Keywords:

Reachability, Verification, Deep Reinforcement Learning, Model Checking, State, State Space Explosion, Graph, Graph Transformation System, Artificial Intelligence, State Space, Witness, Initial State, Goal State, Terminal State, Node, Edge, Rule, Transition, Exploration, Search, Deadlock

Departments, Centres and Research Units:

Computing

Dates:

Accepted: 20 January 2022
Published Online: 11 February 2022
Published: September 2022

Item ID:

33412

Date Deposited:

24 Apr 2023 08:30

Last Modified:

25 Apr 2023 07:35

Peer Reviewed:

Yes, this version has been peer-reviewed.

URI:

https://research.gold.ac.uk/id/eprint/33412
