TransInferSim: Toward Fast and Accurate Evaluation of Embedded Hardware Accelerators for Transformer Networks

Authors

Klhůfek, Jan
Marchisio, Alberto
Mrázek, Vojtěch
Sekanina, Lukáš
Shafique, Muhammad

Abstract

Transformers are neural network models that have gained popularity in various advanced AI systems, including embedded/Edge-AI. Due to their architecture, hardware accelerators can leverage massive parallelism, especially when processing attention-head operations. While accelerators for Transformers are being discussed in the literature, efficient scheduling of cache operations and detailed modeling of inference dynamics have not yet been addressed comprehensively. In this paper, we introduce TransInferSim, a novel tool that combines cycle-accurate simulation for performance estimation (including latency, memory usage, memory access counts, and computation counts) with a discrete-event-based scheduler that determines the execution order of compute and memory operations. By combining this tool with the Accelergy tool, the simulator enables accurate estimation of energy consumption and on-chip area, leveraging pre-characterized hardware parameters. The proposed tool allows for the accurate determination of cache misses at different levels and with different victim-selection configurations. It supports different memory hierarchies and offers several strategies for scheduling operations on compute units. In addition, TransInferSim can extract the full execution plan generated during simulation, enabling its further use for behavioral Register-Transfer-Level validation or for deployment in real hardware implementations. This makes the tool applicable not only for high-level design space exploration, but also as a software front-end for hardware execution mapping. Finally, we can optimize the architecture for a particular network, as demonstrated through multiobjective design space exploration to adjust the size of processing arrays. In our experiments, the introduction of an on-chip memory hierarchy improved inference speed by 3.5× and reduced energy by 1.9× for the RoBERTa-Base Transformer model, while design space exploration achieved up to a 10× latency reduction and 6× area savings for the ViT-Tiny vision Transformer. The tool is available online at https://github.com/ehw-fit/TransInferSim.
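
As a rough illustration of the discrete-event scheduling idea described in the abstract, the Python sketch below keeps a per-unit availability clock so that memory transfers and processing-array work can overlap while operations contending for the same unit serialize. All names here (Op, run, "pe_array") are hypothetical placeholders invented for this sketch, not TransInferSim's actual API; the real simulator additionally models cache levels, operation dependencies, and multiple scheduling strategies that this toy loop omits.

import heapq
from dataclasses import dataclass, field

# Hypothetical operation record (not TransInferSim's API): 'ready' is the
# earliest cycle the operation may issue, 'unit' names the resource it
# occupies ("pe_array" or "memory"), and 'cycles' is its latency.
@dataclass(order=True)
class Op:
    ready: int
    unit: str = field(compare=False)
    cycles: int = field(compare=False)

def run(ops):
    """Core discrete-event loop: pop operations in issue order and keep a
    per-unit availability clock, so compute and memory may overlap."""
    queue = list(ops)
    heapq.heapify(queue)   # min-heap ordered by 'ready'
    free_at = {}           # unit -> cycle at which it becomes free
    finish = 0
    while queue:
        op = heapq.heappop(queue)
        start = max(op.ready, free_at.get(op.unit, 0))
        free_at[op.unit] = start + op.cycles
        finish = max(finish, free_at[op.unit])
    return finish          # total latency in cycles

# Toy trace: a weight fetch overlapping two attention-head matmuls.
print(run([Op(0, "memory", 6), Op(0, "pe_array", 4), Op(4, "pe_array", 4)]))  # -> 8

The same per-unit bookkeeping generalizes to multiple processing arrays or cache ports by adding more unit names; richer victim-selection and scheduling policies, as the abstract notes, are configuration options of the tool itself.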

Description

Citation

IEEE Access. 2025, vol. 13, issue October, pp. 177215-177226.
https://ieeexplore.ieee.org/document/11202474

Document type

Peer-reviewed

Document version

Published version

Language of document

en

Creative Commons license

Except where otherwise noted, this item's license is described as Creative Commons Attribution 4.0 International.