TR2025-059

Electric Motor Cogging Torque Prediction with Vision Transformer Models


Abstract:

Motor performance metrics such as cogging torque and torque ripple are difficult to predict accurately with surrogate models. In this work, we propose Vision Transformer (ViT)-based models to tackle this problem. We adopt a ViT model pre-trained on image classification tasks and fine-tune it on a dataset of interior permanent magnet motor designs. Each motor design is represented as a 2D image and fed into the ViT model to predict cogging torque. To further improve the model's data efficiency, we customize it by using the motor design parameters to initialize the class token of the ViT. We show that the proposed method significantly outperforms established deep convolutional neural network (CNN) baselines and achieves high accuracy on cogging torque prediction on the test dataset.
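The customization described above can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the image size, patch size, embedding dimension, and the number of design parameters (`n_params`) are all hypothetical, and the mapping from parameters to class token is assumed here to be a single learned linear projection.

```python
import torch
import torch.nn as nn


class ParamTokenViT(nn.Module):
    """Sketch of a ViT-style regressor whose class token is initialized
    from the motor design parameter vector instead of being a learned
    constant (an assumed reading of the proposed customization)."""

    def __init__(self, img_size=64, patch=8, dim=128, depth=4,
                 heads=4, n_params=10):
        super().__init__()
        # Patch embedding: split the 2D motor-design image into patches.
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        # Project the design parameters into the class-token slot.
        self.param_to_token = nn.Linear(n_params, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # Regression head: scalar cogging-torque prediction.
        self.head = nn.Linear(dim, 1)

    def forward(self, img, params):
        # img: (B, 1, H, W) design image; params: (B, n_params) vector.
        x = self.patch_embed(img).flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.param_to_token(params).unsqueeze(1)        # (B, 1, dim)
        x = torch.cat([cls, x], dim=1) + self.pos
        x = self.encoder(x)
        # Read the prediction from the (parameter-initialized) class token.
        return self.head(x[:, 0])
```

In a fine-tuning setting, the patch embedding and encoder weights would come from a pre-trained ViT checkpoint; only the parameter-to-token projection and the regression head would be trained from scratch.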