I am using the code found in this tutorial and its GitHub repository to visualize the inner mechanisms of the ViT model.
model = torch.hub.load('facebookresearch/deit:main', 'deit_tiny_patch16_224', pretrained=True)
The tutorial makes use of deit_tiny_patch16_224 and performs two different explainability methods (to extract attention maps), namely rollout and grad_rollout. In both methods a forward and a backward hook are registered on the attn_drop module (which is part of each block, i.e. encoder layer, of the ViT), and when the model is applied to an input image the attention maps and the gradients for these layers are returned. The input image has size 224x224x3 and is split into 14x14 = 196 patches.
When the hooked model is applied to the input image it returns a list of attention maps (each of size torch.Size([1, 3, 197, 197])) and a list of gradients (each of size torch.Size([1, 3, 197, 197])).
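For reference, this is roughly how the tutorial-style hooks are registered (a minimal sketch, not the tutorial's exact code; the helper names and the list-based collection are mine):

import torch

model = torch.hub.load('facebookresearch/deit:main', 'deit_tiny_patch16_224', pretrained=True)
model.eval()

attentions, attention_gradients = [], []

def get_attention(module, input, output):
    # forward hook: the output of attn_drop is the (post-dropout) attention matrix,
    # shape [1, num_heads, 197, 197]
    attentions.append(output.detach().cpu())

def get_attention_gradient(module, grad_input, grad_output):
    # backward hook: gradient flowing through attn_drop, same shape as the attention matrix
    attention_gradients.append(grad_input[0].detach().cpu())

for name, module in model.named_modules():
    if name.endswith('attn.attn_drop'):
        module.register_forward_hook(get_attention)
        module.register_full_backward_hook(get_attention_gradient)

For grad_rollout the forward pass has to keep the graph, and backward() is called on a chosen class logit so that the backward hook actually fires.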
I am trying to do the same thing for another model, specifically torchvision's vit_b_16 (with ViT_B_16_Weights):
pretrained_vit_weights = torchvision.models.ViT_B_16_Weights.DEFAULT # requires torchvision >= 0.13, "DEFAULT" means best available
pretrained_vit = torchvision.models.vit_b_16(weights=pretrained_vit_weights).to(device)
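To prepare a 224x224 input for this model I take the preprocessing from the weights enum itself (minimal sketch; the image path is just a placeholder):

from PIL import Image

preprocess = pretrained_vit_weights.transforms()          # resize/crop to 224x224 + normalization
img = preprocess(Image.open('cat.jpg').convert('RGB'))    # placeholder image path
img = img.unsqueeze(0).to(device)                         # [1, 3, 224, 224]

pretrained_vit.eval()
logits = pretrained_vit(img)                              # [1, 1000]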
Note that the two models come from different implementations and the embedding_dim is different: 192 (3 heads of dimension 64) for the former and 768 for the latter. Moreover, the naming of the layers is different. However, that should not be a problem, and the code of the tutorial should work smoothly for this model as well. For the torchvision vit_b_16 I am trying to attach the hook to the dropout layer that comes right after the self_attention layer.
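A minimal sketch of this hook registration, mirroring the tutorial's approach (the substring filter and the collection lists are my own, not necessarily the exact code):

attn_outputs, attn_grads = [], []

def fwd_hook(module, input, output):
    attn_outputs.append(output.detach().cpu())

def bwd_hook(module, grad_input, grad_output):
    attn_grads.append(grad_input[0].detach().cpu())

for name, module in pretrained_vit.named_modules():
    # e.g. 'encoder.layers.encoder_layer_10.dropout' -- the dropout right after self_attention
    if 'encoder_layer' in name and name.endswith('.dropout'):
        module.register_forward_hook(fwd_hook)
        module.register_full_backward_hook(bwd_hook)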
However, the problem now is that the attention maps and the gradients I am receiving do not have the expected size: I get torch.Size([197, 1, 768]) for the gradient and torch.Size([1, 197, 768]) for the attention map. These shapes do not correspond to what the tutorial code expects (a square tokens-by-tokens attention matrix), even though the hook is attached to the only layer that seems relevant. None of the other options seem to work either.
Here is the list of all candidate modules for the deit_tiny_patch16_224 model (see the snippet after the two listings for how such a list can be printed):
blocks.11.norm1
blocks.11.attn
blocks.11.attn.qkv
blocks.11.attn.q_norm
blocks.11.attn.k_norm
blocks.11.attn.attn_drop # the hook is attached here
blocks.11.attn.proj
blocks.11.attn.proj_drop
blocks.11.ls1
blocks.11.drop_path1
blocks.11.mlp
blocks.11.mlp.fc2
blocks.11.drop_path2
while for the vit_b_16 model it is:
encoder.layers.encoder_layer_10.ln_1
encoder.layers.encoder_layer_10.self_attention
encoder.layers.encoder_layer_10.self_attention.out_proj
encoder.layers.encoder_layer_10.dropout # the hook is attached here
encoder.layers.encoder_layer_10.ln_2
encoder.layers.encoder_layer_10.mlp
encoder.layers.encoder_layer_10.mlp.0
encoder.layers.encoder_layer_10.mlp.1
encoder.layers.encoder_layer_10.mlp.2
encoder.layers.encoder_layer_10.mlp.3
encoder.layers.encoder_layer_10.mlp.4
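The listings above can be reproduced by iterating over named_modules(), for example:

# deit_tiny_patch16_224: modules of the last block
for name, _ in model.named_modules():
    if name.startswith('blocks.11'):
        print(name)

# vit_b_16: modules of one encoder layer
for name, _ in pretrained_vit.named_modules():
    if 'encoder_layer_10' in name:
        print(name)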
What could be wrong here? I was expecting a square tensor, as in the case of the first model. Is it something special about this implementation?