S2TPVFormer: Improving 3D Semantic Occupancy Prediction using Spatiotemporal Transformers
Sathira Silva*, Savindu Wannigama*, Prof. Roshan Ragel†, Gihan Jayatilaka‡
* Equal contribution † Project Supervisor ‡ Project Co-supervisor
Introduction
Temporal reasoning is as important as spatial reasoning in a cognitive perception system. In human perception, temporal information is crucial for identifying occluded objects and determining the motion state of entities. A system proficient in spatiotemporal reasoning makes inferences with high temporal coherence. While prior work on 3D object detection emphasizes the significance of temporal fusion, earlier attempts at 3D Semantic Occupancy Prediction (3D SOP) largely overlooked temporal information, and the current state of the art in 3D SOP seldom exploits temporal cues. This is evident in TPVFormer’s SOP visualizations, where adjacent prediction frames lack temporal coherence because each prediction relies solely on the current time step.
This work introduces S2TPVFormer, a variant of TPVFormer that uses a spatiotemporal transformer architecture inspired by BEVFormer for dense and temporally coherent 3D semantic occupancy prediction. Leveraging the TPV (Tri-Perspective View) representation, the model’s spatiotemporal encoder generates temporally rich embeddings that foster coherent predictions. The study proposes a novel Temporal Cross-View Hybrid Attention (TCVHA) mechanism, which enables the exchange of spatiotemporal information across the three TPV planes. To illustrate the efficacy of temporal information incorporation and the potential of the new attention mechanism, the research explores three distinct temporal fusion paradigms.
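To make the idea concrete, the sketch below shows in plain PyTorch how a temporal variant of cross-view hybrid attention could fuse current-frame TPV queries with (ego-motion-aligned) TPV features from a previous frame. This is an illustrative sketch under stated assumptions, not our implementation: the actual encoder uses deformable attention, and the module name, shapes, and grid sizes here are assumptions for illustration only.

```python
# Minimal sketch of temporal cross-view hybrid attention over TPV planes.
# NOT the paper's implementation: real encoders use deformable attention and
# ego-motion alignment of history features; nn.MultiheadAttention stands in.
import torch
import torch.nn as nn

class TemporalCrossViewHybridAttentionSketch(nn.Module):
    def __init__(self, embed_dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, tpv_queries, tpv_history):
        # tpv_queries: current-frame TPV tokens, shape (B, N_hw + N_dh + N_wd, C),
        #              i.e. the three flattened planes concatenated along dim 1.
        # tpv_history: previous-frame TPV tokens (aligned upstream), same shape.
        # Concatenating current and historical tokens as keys/values lets every
        # query attend both to the other two planes (cross-view) and to the
        # past frame (temporal) in a single attention pass.
        key_value = torch.cat([tpv_queries, tpv_history], dim=1)
        out, _ = self.attn(tpv_queries, key_value, key_value)
        return self.norm(tpv_queries + out)  # residual + norm, transformer-style

# Example with an assumed 100x100x8 TPV grid, flattened per plane.
B, C = 1, 256
n_hw, n_dh, n_wd = 100 * 100, 8 * 100, 100 * 8
cur = torch.randn(B, n_hw + n_dh + n_wd, C)
prev = torch.randn(B, n_hw + n_dh + n_wd, C)
fused = TemporalCrossViewHybridAttentionSketch(C)(cur, prev)
print(fused.shape)  # torch.Size([1, 11600, 256])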
Overview of our Contributions
To summarize, this work makes the following contributions:
- We pioneer the use of the TPV representation for embedding spatiotemporal information about 3D scenes, within vision-centric SOP and the broader 3D perception literature.
- We introduce a novel temporal fusion workflow for the TPV representation, analyzing how Cross-View Hybrid Attention (CVHA) facilitates the sharing of spatiotemporal information across the three planes.
- The lower-parameter variant of our model achieves a significant 3.1% mIoU improvement in 3D SOP over TPVFormer, evaluated on the nuScenes validation set with TPVFormer’s sparse pseudo-voxel ground truth (the mIoU metric is sketched below).
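For reference, the snippet below sketches how per-class IoU and mIoU are typically computed over voxelized semantic predictions. The class count (17, as in nuScenes-derived labels) and the ignore/empty index are assumptions; the benchmark's official evaluation code may differ in details.

```python
# Minimal sketch of per-class IoU / mIoU over voxel grids, as commonly used
# to evaluate 3D semantic occupancy prediction. Class count and ignore index
# are assumptions for illustration.
import numpy as np

def miou(pred, gt, num_classes=17, ignore_index=0):
    # pred, gt: integer label volumes of identical shape, e.g. (H, W, D)
    ious = []
    for c in range(num_classes):
        if c == ignore_index:  # skip the empty/ignore class
            continue
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # average only over classes present in pred or gt
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

pred = np.random.randint(0, 17, size=(100, 100, 8))
gt = np.random.randint(0, 17, size=(100, 100, 8))
print(f"mIoU: {miou(pred, gt):.4f}")
```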
Results
Team
- E/17/331 - SILVA H.S.C. - e17331@eng.pdn.ac.lk
- E/17/369 - WANNIGAMA S.B. - e17369@eng.pdn.ac.lk
Supervisors
- Prof. Roshan Ragel - roshanr@eng.pdn.ac.lk
- Gihan Jayatilaka - gihan@umd.edu
- Geesara Prathap - ggeesara@gmail.com
⭐ Bookmarks: Related Articles & Blogs
- CVPR2023-3D-Occupancy-Prediction
- Monocular Bird’s-Eye-View Semantic Segmentation for Autonomous Driving
- Monocular BEV Perception with Transformers in Autonomous Driving
- Monocular 3D Object Detection in Autonomous Driving — A Review
- Deep Understanding Tesla FSD Part 1: HydraNet
- Deep Understanding Tesla FSD Part 2: Vector Space
- Andrej Karpathy’s Interpretation of Transformer: Communicate Phase & Compute Phase
- DeepSORT: Deep Learning to Track Custom Objects in a Video
- Open-MMLab repos
- What are Intrinsic and Extrinsic Camera Parameters in Computer Vision?
- Vision-centric Semantic Occupancy Prediction for Autonomous Driving
- What is Gradient Accumulation in Deep Learning?
- Master the overall construction process of MMDetection
- Awesome-Occupancy-Prediction-Multi-Cameras
Timeline
- [Apr 27th, 2023] Literature review started (reading the papers Attention Is All You Need, NEAT, and TCP).
- [May 12th, 2023] First meeting with the supervisors, pitching the ideas.
- [Week starting on May 14th, 2023] Started reading BEVFormer, TPVFormer, SurroundOcc, OccFormer, and other papers and articles on vision-centric 3D occupancy prediction for autonomous driving.
- [Week starting on May 21st, 2023] Downloaded the nuScenes dataset to the department server. 💡 New idea popped up, added to the doc. Got familiar with mmcv. “Understand the runner class, then you will understand everything.” ~ Gihan Jayatilaka. Ran a training loop of TPVFormer with the 3D occupancy head.
- Rest of the timeline
Quick Links
- Project Repository (public)
- Project Repository (private)
- UMD logs (private)
- gihans-repo-copy (private)
- Project Diary
- Meeting Notes
- IDEAs💡
- Literature Review Summary
- Paper Summaries
- Jamboard
- WANDB.AI
- Experiment Results