Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models

dc.contributor.author: Brablc, Martin [cs]
dc.contributor.author: Žegklitz, Jan [cs]
dc.contributor.author: Grepl, Robert [cs]
dc.contributor.author: Babuška, Robert [cs]
dc.coverage.issue: 1 [cs]
dc.coverage.volume: 2021 [cs]
dc.date.accessioned: 2022-07-25T14:52:56Z
dc.date.available: 2022-07-25T14:52:56Z
dc.date.issued: 2021-12-20 [cs]
dc.description.abstract: Reinforcement learning (RL) agents can learn to control a nonlinear system without using a model of the system. However, having a model brings benefits, mainly in terms of a reduced number of unsuccessful trials before achieving acceptable control performance. Several modelling approaches have been used in the RL domain, such as neural networks, local linear regression, or Gaussian processes. In this article, we focus on techniques that have not been used much so far: symbolic regression (SR), based on genetic programming, and local modelling. Using measured data, symbolic regression yields a nonlinear, continuous-time analytic model. We benchmark two state-of-the-art methods, SNGP (single-node genetic programming) and MGGP (multigene genetic programming), against a standard incremental local regression method called RFWR (receptive field weighted regression). We have introduced modifications to the RFWR algorithm to better suit the low-dimensional continuous-time systems we are mostly dealing with. The benchmark is a nonlinear, dynamic magnetic manipulation system. The results show that using the RL framework and a suitable approximation method, it is possible to design a stable controller of such a complex system without the necessity of any haphazard learning. While all of the approximation methods were successful, MGGP achieved the best results at the cost of higher computational complexity. [en]
Index Terms: AI-based methods, local linear regression, nonlinear systems, magnetic manipulation, model learning for control, optimal control, reinforcement learning, symbolic regression.
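The abstract refers to receptive field weighted regression (RFWR), which approximates a nonlinear function by blending local linear models through Gaussian "receptive field" activations. As a rough, minimal sketch of that idea only (hypothetical function names; this is not the paper's incremental algorithm, which also adapts the number of fields and their bandwidths online):

```python
import numpy as np

def gaussian_weight(x, c, D):
    """Activation of a receptive field centred at c with distance metric D."""
    d = x - c
    return np.exp(-0.5 * (d @ D @ d))

def rfwr_predict(x, centers, betas, D):
    """Blend local affine predictions beta_k^T [x; 1], weighted by activations."""
    xe = np.append(x, 1.0)  # augmented input for the affine (offset) term
    num, den = 0.0, 0.0
    for c, beta in zip(centers, betas):
        w = gaussian_weight(x, c, D)
        num += w * (beta @ xe)
        den += w
    return num / den if den > 0 else 0.0
```

Each local model is only trusted near its centre, so the weighted average yields a smooth global prediction from purely local fits; the incremental variant updates each `beta_k` recursively as new samples arrive.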
dc.format: text [cs]
dc.format.extent: 1-12 [cs]
dc.format.mimetype: application/pdf [cs]
dc.identifier.citation: COMPLEXITY. 2021, vol. 2021, issue 1, p. 1-12. [en]
dc.identifier.doi: 10.1155/2021/6617309 [cs]
dc.identifier.issn: 1076-2787 [cs]
dc.identifier.other: 178291 [cs]
dc.identifier.uri: http://hdl.handle.net/11012/208198
dc.language.iso: en [cs]
dc.publisher: WILEY-HINDAWI [cs]
dc.relation.ispartof: COMPLEXITY [cs]
dc.relation.uri: https://www.hindawi.com/journals/complexity/2021/6617309/ [cs]
dc.rights: Creative Commons Attribution 4.0 International [cs]
dc.rights.access: openAccess [cs]
dc.rights.sherpa: http://www.sherpa.ac.uk/romeo/issn/1076-2787/ [cs]
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/ [cs]
dc.subject: Approximation theory [en]
dc.subject: Complex networks [en]
dc.subject: Continuous time systems [en]
dc.subject: Genetic algorithms [en]
dc.subject: Genetic programming [en]
dc.subject: Magnetism [en]
dc.subject: Manipulators [en]
dc.subject: Nonlinear systems [en]
dc.subject: Approximation methods [en]
dc.subject: Local linear models [en]
dc.subject: Local linear regression [en]
dc.subject: Magnetic manipulation [en]
dc.subject: Magnetic manipulators [en]
dc.subject: Multi-gene genetic programming [en]
dc.subject: Receptive fields [en]
dc.subject: Reinforcement learning [en]
dc.subject: Symbolic regression [en]
dc.subject: Weighted regression [en]
dc.title: Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models [en]
dc.type.driver: article [en]
dc.type.status: Peer-reviewed [en]
dc.type.version: publishedVersion [en]
sync.item.dbid: VAV-178291 [en]
sync.item.dbtype: VAV [en]
sync.item.insts: 2022.07.25 16:52:56 [en]
sync.item.modts: 2022.07.25 16:14:17 [en]
thesis.grantor: Vysoké učení technické v Brně. Fakulta strojního inženýrství. Ústav mechaniky těles, mechatroniky a biomechaniky (Brno University of Technology, Faculty of Mechanical Engineering, Institute of Solid Mechanics, Mechatronics and Biomechanics) [cs]
Files
Original bundle (1 file)
Name: 6617309.pdf
Size: 2.27 MB
Format: Adobe Portable Document Format