TR2019-116

Near-optimal control of motor drives via approximate dynamic programming


    •  Wang, Y., Chakrabarty, A., Zhou, M., Zhang, J., "Near-optimal control of motor drives via approximate dynamic programming", IEEE International Conference on Systems, Man, and Cybernetics, DOI: 10.1109/SMC.2019.8914595, October 2019, pp. 3679-3686.
      @inproceedings{Wang2019oct,
        author = {Wang, Yebin and Chakrabarty, Ankush and Zhou, Mengchu and Zhang, Jinyun},
        title = {Near-optimal control of motor drives via approximate dynamic programming},
        booktitle = {IEEE International Conference on Systems, Man, and Cybernetics},
        year = 2019,
        pages = {3679--3686},
        month = oct,
        publisher = {IEEE},
        doi = {10.1109/SMC.2019.8914595},
        url = {https://www.merl.com/publications/TR2019-116}
      }
  • Research Areas: Control, Machine Learning

Abstract:

Data-driven methods for learning near-optimal control policies through approximate dynamic programming (ADP) have garnered widespread attention. In this paper, we investigate how data-driven control methods can be leveraged to endow a core component of modern factory systems, the electric motor drive, with near-optimal performance. We apply policy iteration-based ADP to an induction motor model in order to construct a state feedback control policy for a given cost functional. Approximation error convergence properties of policy iteration methods imply that the learned control policy is near-optimal. We demonstrate that carefully selecting the cost functional and the initial control policy yields a near-optimal control policy that outperforms both a baseline nonlinear backstepping control policy and the initial control policy.
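For readers unfamiliar with policy iteration-based ADP, the sketch below illustrates the underlying two-step loop (policy evaluation followed by policy improvement) on a hypothetical linear-quadratic surrogate, using Kleinman's classical policy iteration. The matrices A, B, Q, R and the initial gain K are illustrative assumptions only; they are not the induction motor model, cost functional, or initial policy used in the paper.

```python
# A minimal sketch of policy iteration for near-optimal control on a
# hypothetical LTI surrogate system (Kleinman's algorithm); all numerical
# values below are assumptions for illustration, not the paper's model.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Hypothetical 2-state dynamics standing in for the motor drive model.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state penalty in the quadratic cost functional
R = np.array([[1.0]])  # input penalty

K = np.array([[1.0, 1.0]])  # assumed initial stabilizing feedback gain

for _ in range(20):
    # Policy evaluation: solve the Lyapunov equation
    # (A - B K)^T P + P (A - B K) + Q + K^T R K = 0 for the current policy.
    A_cl = A - B @ K
    P = solve_continuous_lyapunov(A_cl.T, -(Q + K.T @ R @ K))
    # Policy improvement: greedy update of the feedback gain.
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-9:
        break
    K = K_new

# For this linear-quadratic case, the iterates converge to the optimal
# LQR gain obtained from the algebraic Riccati equation.
P_star = solve_continuous_are(A, B, Q, R)
K_star = np.linalg.solve(R, B.T @ P_star)
print("policy iteration gain:", K)
print("Riccati gain:         ", K_star)
```

In the data-driven ADP setting the paper studies, the evaluation step is performed approximately from trajectory data rather than from an exact model, which is why the initial policy and cost functional matter for the quality of the learned controller.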