
Deep Learning – Positional embedding in a Transformer model

I have a question about when I should use positional embeddings in Transformers. I want to build a cross-attention layer with m queries and n keys. More specifically, I will reduce an image to a 512×7×7 feature map using a ResNet and use the 49 resulting vectors as keys. In this case, where should I add the positional embedding: to the queries, to the keys, to both, or to neither?
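To make the setup concrete, here is a minimal PyTorch sketch of the described architecture, assuming one common choice: a learned positional embedding added to the 49 keys only, since those keys correspond to fixed spatial locations in the 7×7 grid. The class name `CrossAttentionBlock` and all shapes are illustrative, not from the question; other designs (e.g. DETR-style decoders) also add positional information to the queries.

```python
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Hypothetical cross-attention over ResNet features (sketch, not a reference implementation)."""

    def __init__(self, dim=512, num_heads=8, num_keys=49):
        super().__init__()
        # Learned positional embedding for the 49 spatial locations (7x7 grid).
        # Added to the keys only, under the assumption stated above.
        self.key_pos = nn.Parameter(torch.randn(1, num_keys, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, queries, feat_map):
        # queries:  (B, m, 512) -- the m query vectors
        # feat_map: (B, 512, 7, 7) -- output of the ResNet backbone
        b, c, h, w = feat_map.shape
        keys = feat_map.flatten(2).transpose(1, 2)  # (B, 49, 512)
        keys = keys + self.key_pos                  # inject spatial position into keys
        out, _ = self.attn(queries, keys, keys)     # queries attend over image locations
        return out                                  # (B, m, 512)
```

Usage would look like `CrossAttentionBlock()(torch.randn(2, 4, 512), torch.randn(2, 512, 7, 7))`. Whether the queries also need a positional embedding depends on whether their order carries meaning in the task.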