I trained an Arabic NER with SpaCy and got the model-best folder. But the thing is, the data used for training was already normalized. So, characters such as “أ, ؤ, ة” were already normalized to “ا,ء,ه”, respectively. My goal is to upload the model to HuggingFace where people can test the model using their own texts. But users are going to use the standard format (e.g., “أ, ؤ, ة”) not the normalized version. Is there anyway I can integrate this normalization to the model without having to retrain it, perhaps through editing the config.cfg file?
I tried creating a python script “custom_components.py”
def normalize_characters(doc):
for token in doc:
if token.text == "أ":
token.text = "ا"
elif token.text == "إ":
token.text = "ا"
elif token.text == "آ":
token.text = "ا"
elif token.text == "ٱ":
token.text = "ا"
elif token.text == "ى":
token.text = "ي"
elif token.text in "ًٌَُ‘ٍِـ،ْ":
token.text = ""
elif token.text == "ؤ":
token.text = "ء"
elif token.text == "ئ":
token.text = "ء"
elif token.text == "ة":
token.text = "ه"
return doc
and editing the config.cfg file:
[system]
gpu_allocator = null
seed = 0
[nlp]
lang = "ar"
pipeline = ["tok2vec","ner"]
batch_size = 50
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
vectors = {"@vectors":"spacy.Vectors.v1"}
[components]
[components.custom_normalizer]
factory = "custom_components.normalize_characters"
before_update = "normalize_characters"
[components.ner]
factory = "ner"
incorrect_spans_key = null
moves = null
scorer = {"@scorers":"spacy.ner_scorer.v1"}
update_with_oracle_cut_size = 100
But that did not work.