BEVFormer is a transformer-based architecture that is heavily used as a component of many other architectures. It learns bird's-eye-view representations of scenes for self-driving cars. While studying its source code to understand how it works, I came across these lines:
```python
can_bus = bev_queries.new_tensor(
    [each['can_bus'] for each in kwargs['img_metas']])  # [:, :]
can_bus = self.can_bus_mlp(can_bus)[None, :, :]
bev_queries = bev_queries + can_bus * self.use_can_bus
```
source: https://github.com/fundamentalvision/BEVFormer/blob/66b65f3a1f58caf0507cb2a971b9c0e7f842376c/projects/mmdet3d_plugin/bevformer/modules/transformer.py#L159
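To make the shapes concrete, this is how I currently read that snippet, rewritten as a standalone sketch. The MLP layout, `embed_dims`, batch size, and BEV grid size here are my own guesses for illustration, not values taken verbatim from the repo:

```python
import torch
import torch.nn as nn

bs, num_query, embed_dims = 1, 50 * 50, 256           # made-up sizes for illustration

bev_queries = torch.randn(num_query, bs, embed_dims)  # learnable BEV grid queries
can_bus_raw = torch.randn(bs, 18)                     # one 18-dim can_bus vector per sample

can_bus_mlp = nn.Sequential(                          # rough stand-in for self.can_bus_mlp
    nn.Linear(18, embed_dims // 2),
    nn.ReLU(inplace=True),
    nn.Linear(embed_dims // 2, embed_dims),
    nn.ReLU(inplace=True),
)
use_can_bus = True                                    # the self.use_can_bus switch, as I read it

can_bus = can_bus_mlp(can_bus_raw)[None, :, :]        # (1, bs, embed_dims)
bev_queries = bev_queries + can_bus * use_can_bus     # broadcast onto every BEV query

print(bev_queries.shape)                              # torch.Size([2500, 1, 256])
```

So, if I understand correctly, the same embedded can_bus vector is added to every query of the BEV grid for a given sample.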
The can_bus vector contains 18 values, as shown by the input dimension of can_bus_mlp. However, the dataset preparation code seems to suggest that can_bus only contains 8 meaningful values: the ego2global translation, the ego2global rotation, and the patch angle.
source: https://github.com/fundamentalvision/BEVFormer/blob/66b65f3a1f58caf0507cb2a971b9c0e7f842376c/projects/mmdet3d_plugin/datasets/nuscenes_dataset.py#L158
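For reference, this is my paraphrase (not a verbatim quote) of what that dataset code appears to do: it takes an already 18-element can_bus array and only overwrites a handful of slots, namely the ego2global translation, the ego2global rotation quaternion, and the patch angle, leaving the rest untouched. The concrete numbers below are made up:

```python
import numpy as np

# Paraphrase of the dataset-prep logic as I read it; the incoming can_bus
# record already has 18 entries, and only the slots below are overwritten.
can_bus = np.zeros(18)                              # stand-in for the raw 18-dim record

translation = np.array([600.0, 1600.0, 0.0])        # made-up ego2global_translation (x, y, z)
yaw = np.pi / 6                                     # made-up ego heading
rotation_wxyz = np.array(                           # quaternion for a pure yaw rotation
    [np.cos(yaw / 2), 0.0, 0.0, np.sin(yaw / 2)])

can_bus[:3] = translation                           # slots 0-2: ego position in the global frame
can_bus[3:7] = rotation_wxyz                        # slots 3-6: ego orientation quaternion (w, x, y, z)

patch_angle = np.degrees(yaw)                       # yaw extracted from the quaternion, in degrees
if patch_angle < 0:
    patch_angle += 360
can_bus[-2] = np.radians(patch_angle)               # second-to-last slot: yaw in radians
can_bus[-1] = patch_angle                           # last slot: yaw in degrees

print(can_bus)
```

So the remaining slots of the 18-element array are never touched here, which is exactly what confuses me about where they come from and what they mean.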
I have the feeling that the can_bus values are used as a positional encoding that gives the model information about how the pose of the ego vehicle relates to the global scene (similar to how positional encodings in NLP give the model information about the relative positions of words in a sentence; see the sketch below). However, it is hard for me to wrap my head around how this idea works in practice, or whether my intuition is even correct in the first place.
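This toy snippet is the analogy I have in mind (all names and sizes are made up, it is not BEVFormer code):

```python
import torch
import torch.nn as nn

# The NLP analogy: a learned embedding of "where this token sits in the
# sequence" is added onto every token embedding before the transformer sees
# it. In my reading, the can_bus step in the first sketch above plays the
# same role, except the "position" is the ego pose in the global frame and
# it is shared across all BEV queries of the sample.
seq_len, d_model = 10, 32
tokens = torch.randn(seq_len, d_model)               # word embeddings
pos_embed = nn.Embedding(seq_len, d_model)           # learned positional encoding
tokens = tokens + pos_embed(torch.arange(seq_len))   # every token now carries its position
print(tokens.shape)                                  # torch.Size([10, 32])
```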
My questions are the following:
- What are the 18 values in the can_bus?
- Where do they come from?
- Why are these values used as positional encodings?