I’m working on a project using the transformers library from Hugging Face and PyTorch to compute log probabilities from a causal language model. I have two inputs: a single sentence and a batch of two sentences. However, I’m encountering an issue where the sum of log probabilities for the single sentence differs when computed in a batch versus individually.
Here is a simplified version of my code:
import transformers
import torch
import random
import numpy as np
torch.backends.cudnn.deterministic = True
def set_seed(seed):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
set_seed(42)
hf_name = "cerebras/Cerebras-GPT-111M"
tokenizer_arguments = {"truncation": True,
                       "max_length": 1096,
                       "padding_side": "left",
                       "add_special_tokens": False}
tokenizer = transformers.AutoTokenizer.from_pretrained(hf_name, **tokenizer_arguments)
tokenizer.pad_token = tokenizer.eos_token
model = transformers.AutoModelForCausalLM.from_pretrained(hf_name, torch_dtype="bfloat16")
one_sentence = "Hello this is John"
longer_sentence = "Hello guys, my name is John, nice to meet you"
no_batch = [one_sentence]
batch = [one_sentence, longer_sentence]
def tokenize(content):
    return tokenizer(content,
                     return_tensors="pt",
                     truncation=False,
                     padding="longest",
                     add_special_tokens=True)
input_no_batch = tokenize(no_batch)
input_batch = tokenize(batch)
with torch.no_grad():
    outputs_no_batch = model(**input_no_batch)
    outputs_batch = model(**input_batch)
def get_log_probs(logits, _masks):
    # zero out the logits at padded positions, then take log-softmax over the vocabulary
    logits = logits * _masks.unsqueeze(-1)
    log_probs = torch.log_softmax(logits, dim=-1)
    return log_probs
att_masks = input_no_batch['attention_mask']
att_masks2 = input_batch['attention_mask']
no_batch_probs = get_log_probs(outputs_no_batch.logits, att_masks)
batch_probs = get_log_probs(outputs_batch.logits, att_masks2)
after_masks_no_batch = no_batch_probs * att_masks.unsqueeze(-1)
after_masks_batch = batch_probs * att_masks2.unsqueeze(-1)
print(torch.sum(after_masks_no_batch).item()) # Sum of log probabilities for single sentence
print(torch.sum(after_masks_batch[0, :, :]).item()) # Sum of log probabilities for the same sentence in batch
The two print statements produce different values, even though the sentence is identical in both cases.
Expected behavior: I expect torch.sum(after_masks_no_batch).item() and torch.sum(after_masks_batch[0, :, :]).item() to produce equal values because they both refer to the log probabilities of the same sentence.
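To be explicit about what I mean by "the log probabilities of the same sentence": the number I ultimately want per sentence is the sum of the log probabilities the model assigns to each actual next token. Below is only a sketch of that computation, reusing outputs_batch and input_batch from the code above and assuming the standard shift-by-one alignment (logits at position t score the token at position t + 1); I am not claiming this is where the discrepancy comes from.

# Sketch only: per-token log probs of the actual next tokens, summed per sentence.
# Positions whose context or target token is padding are excluded via the attention mask.
full_log_probs = torch.log_softmax(outputs_batch.logits.float(), dim=-1)  # (batch, seq, vocab)
targets = input_batch["input_ids"][:, 1:]                                 # tokens being predicted
mask = input_batch["attention_mask"]
valid = (mask[:, :-1] * mask[:, 1:]).to(full_log_probs.dtype)             # real context and real target
token_log_probs = full_log_probs[:, :-1, :].gather(-1, targets.unsqueeze(-1)).squeeze(-1)
print((token_log_probs * valid).sum(dim=-1))                              # one scalar per sentence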
What I’ve tried:
- Ensuring the model is in evaluation mode and running the forward passes under torch.no_grad().
- Double-checking that the tokenizer parameters are consistent.
- Printing intermediate shapes and values to track any discrepancies (roughly as sketched after this list).
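For the last point, this is roughly the kind of inspection I have been doing. It is a rough sketch that reuses the tensors defined above and assumes that, with padding_side="left", the short sentence occupies the last positions of row 0 of the batch.

# Rough diagnostic: compare shapes, attention masks, and the logits of the
# short sentence at aligned positions between the batched and unbatched runs.
print(input_no_batch["input_ids"].shape, input_batch["input_ids"].shape)
print(input_no_batch["attention_mask"])
print(input_batch["attention_mask"])              # row 0 should show leading zeros from left padding

n = input_no_batch["input_ids"].shape[1]          # number of tokens in the short sentence
aligned_logits = outputs_batch.logits[0, -n:, :]  # last n positions of row 0 in the batch
single_logits = outputs_no_batch.logits[0]
print(torch.max(torch.abs(aligned_logits - single_logits)).item())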
Any insights on why this discrepancy might be occurring and how to resolve it would be greatly appreciated!