Radar Lab at UA Develops TransRAD, Transformer-based 3D Radar Object Detection
The UA Radar Group publishes TransRAD, a Retentive Vision Transformer framework advancing 3D radar object detection in Range-Azimuth-Doppler space.
The University of Arizona's radar research group, led by Dr. Siyang Cao, has developed TransRAD, a novel transformer-based model for 3D radar object detection that operates in Range-Azimuth-Doppler (RAD) space.
TransRAD leverages a Retentive Vision Transformer architecture and introduces a Retentive Manhattan Self-Attention (MaSA) mechanism that embeds explicit spatial priors, aligning attention with the saliency patterns of radar targets. The model also includes a Location-Aware non-maximum suppression (NMS) module to reduce redundant bounding boxes.
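The core idea behind Manhattan Self-Attention can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not TransRAD's implementation: it assumes the RMT-style formulation in which softmax attention scores are damped elementwise by a decay mask `gamma ** d`, where `d` is the Manhattan distance between feature-map cells, so nearby cells attend to each other more strongly. The function names and the choice of `gamma` are illustrative.

```python
import numpy as np

def manhattan_decay_mask(h, w, gamma=0.9):
    """Decay mask D[i, j] = gamma ** (Manhattan distance between cells i and j).

    Cells are the h*w positions of a 2D feature map, flattened row-major.
    """
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)              # (h*w, 2)
    dist = np.abs(coords[:, None, :] - coords[None, :, :]).sum(-1)   # (h*w, h*w)
    return gamma ** dist

def masa_attention(q, k, v, h, w, gamma=0.9):
    """Illustrative MaSA: softmax attention damped elementwise by the decay mask."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    attn = np.exp(scores - scores.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)                  # softmax over keys
    attn *= manhattan_decay_mask(h, w, gamma)            # spatial prior
    return attn @ v
```

Because the decay mask is fixed by geometry rather than learned, it injects the spatial prior at zero parameter cost, which is one reason this style of attention suits dense radar feature maps.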
Key Achievements
- Outperforms prior state-of-the-art methods in detection accuracy
- Delivers significantly faster inference
- Embeds stronger spatial priors via the MaSA mechanism
- Produces more precise bounding boxes via Location-Aware NMS
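For context on the last point: the exact Location-Aware variant is specific to the TransRAD paper, but the greedy IoU-based NMS it refines is standard. The sketch below shows that baseline, with illustrative function names; Location-Aware NMS would replace or augment the plain IoU test with location cues to better suppress duplicates.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] form."""
    x1, y1 = np.maximum(a[:2], b[:2])
    x2, y2 = np.minimum(a[2:], b[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it."""
    order = np.argsort(scores)[::-1]          # indices by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        mask = np.array([iou(boxes[i], boxes[j]) < iou_thresh
                         for j in order[1:]], dtype=bool)
        order = order[1:][mask]               # survivors only
    return keep
```

For example, two heavily overlapping detections of the same target collapse to the higher-scoring one, while a distant detection survives.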
This work strengthens the lab's portfolio in radar imaging, signal processing, and machine learning, and contributes to robust perception in autonomous systems under adverse conditions such as low visibility.