I'm trying to figure out a way to augment data for a Seq2Seq model, but I'm limited to 200 training samples.
My idea is to use ChatGPT to generate several similar output sequences for each input, turning each original pair into multiple training pairs. Would this approach confuse the model?
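To make the idea concrete, here is a minimal sketch of the augmentation loop I have in mind, assuming the OpenAI Python SDK (`openai>=1.0`); the model name, prompt wording, and the `training_pairs` data are placeholders:

```python
# Sketch of paraphrase-based augmentation, assuming the OpenAI Python SDK.
# Model name and prompt are placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def paraphrase(target_seq: str, n_variants: int = 3) -> list[str]:
    """Ask the model for n_variants rewordings of one target sequence."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Paraphrase the user's text, preserving its meaning."},
            {"role": "user", "content": target_seq},
        ],
        n=n_variants,     # request several completions in one call
        temperature=0.9,  # higher temperature -> more varied paraphrases
    )
    return [choice.message.content for choice in response.choices]

# Placeholder stand-in for my 200 original (input, output) pairs.
training_pairs = [
    ("translate: hello world", "bonjour le monde"),
    # ... remaining original samples
]

# Each original pair becomes several (input, paraphrased_output) pairs.
augmented = []
for src, tgt in training_pairs:
    for new_tgt in paraphrase(tgt):
        augmented.append((src, new_tgt))
```

The concern is the last loop: after augmentation, one input maps to several different outputs, and I'm not sure whether that one-to-many mapping helps the model generalize or just injects conflicting targets.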