Transformer-based Yorùbá-to-English language processor: requirements issues
Following the successful implementation of the first transduction model built entirely on the attention mechanism (Vaswani et al., 2017, 'Attention Is All You Need'), I want to implement a similar model for a Nigerian language such as Yorùbá. However, I do not have a parallel corpus with which to carry out this experiment. Is there a way to obtain one suitable for the Transformer architecture described in that paper? The model is expected to achieve a high BLEU (Bilingual Evaluation Understudy) score. Any help in locating such a corpus would be appreciated.
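Since the target metric is BLEU, a minimal sketch of how it is computed may be useful once a corpus is obtained. The function below is a simplified, self-contained sentence-level BLEU (uniform weights over 1- to 4-grams, brevity penalty, add-one smoothing); it is only an illustration, and published results should use a standard tool such as sacreBLEU for comparable numbers.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of smoothed
    modified n-gram precisions, times a brevity penalty.
    Illustrative only; not a replacement for sacreBLEU."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        ref_ngrams = ngrams(ref, n)
        # clipped overlap: each candidate n-gram counts at most as
        # often as it appears in the reference
        overlap = sum((cand_ngrams & ref_ngrams).values())
        total = max(sum(cand_ngrams.values()), 1)
        # add-one smoothing so one empty overlap does not zero the score
        precisions.append((overlap + 1) / (total + 1))
    # brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

# A perfect match scores 1.0; a fully disjoint sentence scores near 0.
print(bleu("the cat sat on the mat", "the cat sat on the mat"))
```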