- Title
- Elastic step DDPG : multi-step reinforcement learning for improved sample efficiency
- Creator
- Ly, Adrian; Dazeley, Richard; Vamplew, Peter; Cruz, Francisco; Aryal, Sunil
- Date
- 2023
- Type
- Text; Conference paper
- Identifier
- http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/196193
- Identifier
- vital:18667
- Identifier
- https://doi.org/10.1109/IJCNN54540.2023.10191774
- Identifier
- ISBN:9781665488679 (ISBN)
- Abstract
- A major challenge in deep reinforcement learning is that it requires a large amount of data to converge to a policy for complex problems. One way to improve sample efficiency is to use n-step updates, which reduce the number of samples required to converge to a good policy. However, n-step updates are known to be brittle and difficult to tune. Elastic Step DQN has shown that it is possible to automate the value of n in DQN for problems with discrete action spaces; however, the efficacy of the technique on more complex problems and on problems with continuous action spaces has yet to be shown. In this paper we adapt the innovations proposed by Elastic Step DQN to the DDPG algorithm and show empirically that Elastic Step DDPG achieves a much stronger final training policy and is more sample efficient than DDPG. © 2023 IEEE.
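The abstract's core idea, an n-step update, bootstraps from a value estimate n steps ahead rather than one. A minimal sketch of the standard n-step return is below; the function name, reward list, and bootstrap value are illustrative assumptions, not the paper's implementation (which additionally automates the choice of n).

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """Discounted n-step return: sum_{k=0}^{n-1} gamma^k * r_k + gamma^n * bootstrap_value.

    `rewards` holds the n rewards observed along the trajectory;
    `bootstrap_value` is the critic's estimate at the n-th next state.
    (Hypothetical helper for illustration only.)
    """
    g = bootstrap_value
    for r in reversed(rewards):  # fold backwards so each step applies one discount
        g = r + gamma * g
    return g

# With n=3 unit rewards and gamma=0.5, zero bootstrap:
# 1 + 0.5*(1 + 0.5*(1 + 0.5*0)) = 1.75
print(n_step_return([1.0, 1.0, 1.0], 0.0, gamma=0.5))
```

A larger n propagates reward information faster (better sample efficiency) but increases the variance of the target, which is why the paper motivates automating n rather than hand-tuning it.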
- Publisher
- Institute of Electrical and Electronics Engineers Inc.
- Relation
- 2023 International Joint Conference on Neural Networks, IJCNN 2023 Vol. 2023-June
- Rights
- All metadata describing materials held in, or linked to, the repository is freely available under a CC0 licence
- Rights
- Copyright © 2023 IEEE
- Subject
- DDPG; Deep reinforcement learning; Multi-step methods; N-step methods; Reinforcement learning; Sample efficiency
- Reviewed