How can I get my custom ES tokeniser to work?
I’m trying to change the default tokeniser in my Elasticsearch index (for stemmed English, stemmed other languages and also for unstemmed analysis), since I noticed that the dot (“.”) isn’t a token separator by default, i.e. with the standard analyser. Other questions suggest ways to achieve that… but my problem is more fundamental: I don’t seem to be able to apply ANY change of tokeniser to my index, not even to the unstemmed fields.
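For context, this is roughly the kind of change I’m after. It’s a minimal sketch rather than my real settings (index, analyser and field names are made up), written against the 8.x Python client; with older clients the same JSON would go in `body=`. The idea, as I understand it, is that a custom tokeniser has to be declared in the index’s analysis settings when the index is created (or while it is closed) and then attached to a field in the mapping:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Analysis settings: a custom char_group tokeniser that splits on whitespace
# AND on ".", wrapped in a custom analyser with a lowercase filter.
settings = {
    "analysis": {
        "tokenizer": {
            "split_on_dots": {
                "type": "char_group",
                "tokenize_on_chars": ["whitespace", "."],
            }
        },
        "analyzer": {
            "unstemmed_with_dots": {
                "type": "custom",
                "tokenizer": "split_on_dots",
                "filter": ["lowercase"],
            }
        },
    }
}

# Attach the custom analyser to a (hypothetical) unstemmed text field.
mappings = {
    "properties": {
        "body_unstemmed": {"type": "text", "analyzer": "unstemmed_with_dots"}
    }
}

# Analysis settings can only be defined at index-creation time (or on a
# closed index), so the sketch creates a fresh index.
es.indices.create(index="tokeniser_test", settings=settings, mappings=mappings)

# Sanity check: ask ES how it tokenises a sample string with the new analyser.
resp = es.indices.analyze(
    index="tokeniser_test",
    analyzer="unstemmed_with_dots",
    text="foo.bar baz",
)
print([t["token"] for t in resp["tokens"]])  # expecting ['foo', 'bar', 'baz']
```

Even with something along these lines, though, my index doesn’t seem to pick up the new tokeniser, which is the behaviour I’m trying to understand.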