I am a web developer and I don’t know anything about AI, but one of my friends sent me some Python code and asked why it’s not working:
from transformers import GPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
def chat_with_gpt(input_text, max_length=100):
    ids = tokenizer.encode(input_text, return_tensors="pt")
    output = model.generate(ids, max_length=max_length, num_return_sequences=1)
    response = tokenizer.decode(output[0], skip_special_tokens=True)
    return response
print(chat_with_gpt("what is your favorite book?"))
It takes input from the user, encodes it, and then returns a response from GPT-2 (as I understand it), but the problem is that it returns an incomprehensible response every time. In this example I asked it “what is your favorite book?” and the response was “I love the book, but I don’t think it’s the best. I think it’s the best book I’ve read. I think it’s the best book I’ve read. I think it’s the best book I’ve read.” and it just keeps repeating like that.
I tried searching for information about this transformers library but didn’t find anything useful.