I remember reading that there are no existing data structures which allow for random-access into a variable length encoding, like UTF-8, without requiring additional lookup tables.
The main question I have is: is this even a useful property? That is, being able to look up and replace arbitrary single code points in O(1) time.
I would give the traditional, and really quite boring, answer: it depends.
Is random access to individual characters (glyphs) in a string a useful property? Yes, definitely.
Do you need access to individual code points? I guess that could be useful in certain situations that aren’t too contrived, if you are doing extensive handling of text data, such as in word processing or text rendering. Text-encoding normalization is another possible use case I can think of. I’m sure there are other good uses as well.
Does it need to be in O(1) time? Really, with a few exceptions that are unlikely to apply in the general case, not necessarily. If O(1) time access is a requirement, it’s probably easier to just use a fixed-length encoding such as UTF-32. (And you will still be dealing with cache misses and swap space fetches, so for sufficiently long strings it won’t be O(1) anyway… :))
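To make the fixed-length alternative concrete, here is a minimal Swift sketch (the string and variable names are my own): decoding a string into an array of Unicode scalars up front, which is effectively UTF-32, buys O(1) access to code points at the cost of an O(n) conversion and roughly 4 bytes per code point.

```swift
let text = "naïve café"

// One O(n) decode into a fixed-width (UTF-32-style) representation:
// each Unicode.Scalar occupies 4 bytes, so subscripting is O(1).
let scalars = Array(text.unicodeScalars)

// Constant-time random access to a code point:
let seventh = scalars[6]
print(seventh)

// Constant-time replacement, then re-encode when actually needed:
var copy = scalars
copy[6] = "k"
print(String(String.UnicodeScalarView(copy)))
```

This is the usual time/space trade-off: you pay the decode and the memory up front so that every later access is an array subscript.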
The Swift standard library has worked around this problem. You can access the first, second, or third item in a string, but it takes linear time.
But it is rather rare that this is needed. You work with indices, which represent positions inside a string. If you ask “where’s the position of the latter z in this string?”, the answer is not, say, “6th character” but, say, “index 9”.
P.S. The indices seem to be offsets in UTF-16 code units (or bytes if the string is plain ASCII), but that is an implementation detail. The units could be UTF-8 bytes, UTF-16 code units, code points, or characters (= grapheme clusters).
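A rough example of that index-based style (the string and offsets here are my own, chosen for illustration):

```swift
let s = "puzzle pizzazz"

// Positions are opaque String.Index values, not integers.
if let i = s.lastIndex(of: "z") {
    // The index can be used directly for access and replacement...
    print(s[i])
    // ...while converting it to an integer offset is a linear-time walk.
    let offset = s.distance(from: s.startIndex, to: i)
    print("last z sits at offset \(offset)")
}
```

The point of the opaque index type is that most string work (find, slice, replace) never needs the integer offset at all, so the linear-time conversion is rarely paid.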
Off the top of my head:
Since the vast majority of code points, from a Eurocentric perspective, are single-byte, perhaps a scheme with O(1) behavior for those cases and O(N) for longer characters would be useful. Perhaps you store the lead byte of each code point in a main fixed-length array, but if that byte indicates a multi-byte sequence (the high bit set, IIRC, for UTF-8), that tells the algorithm the continuation bytes are in a secondary storage array that is not uniform length.
This secondary storage array would track the continuation bytes along with the index in the primary fixed-byte array where the character appeared. A cold lookup would be O(N), where N is the number of multi-byte characters. If you were iterating through the array, you could keep a following pointer into the extended-char storage and avoid any real impact from the multi-byte chars. Cache behavior shouldn’t be horrible, since both are arrays.
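Here is a rough Swift sketch of that two-array idea (a toy, not a production design; `HybridString` and its fields are names I made up): ASCII code points sit directly in the fixed-length primary array, while a multi-byte UTF-8 sequence leaves its lead byte in the primary array and parks its continuation bytes in a secondary list tagged with the primary index.

```swift
struct HybridString {
    var primary: [UInt8] = []                            // one slot per code point
    var extensions: [(index: Int, bytes: [UInt8])] = []  // kept sorted by index

    init(_ s: String) {
        for scalar in s.unicodeScalars {
            let utf8 = Array(String(scalar).utf8)
            primary.append(utf8[0])          // lead byte goes in the primary array
            if utf8.count > 1 {              // high bit set => multi-byte sequence
                extensions.append((primary.count - 1, Array(utf8.dropFirst())))
            }
        }
    }

    // O(1) for single-byte code points; a cold lookup of a multi-byte one
    // scans the secondary array, O(N) in the number of multi-byte characters
    // (a binary search would also work, since the array is kept sorted).
    func scalar(at i: Int) -> Unicode.Scalar? {
        let lead = primary[i]
        if lead < 0x80 { return Unicode.Scalar(lead) }   // plain ASCII slot
        guard let ext = extensions.first(where: { $0.index == i }) else {
            return nil
        }
        var decoder = UTF8()
        var bytes = ([lead] + ext.bytes).makeIterator()
        if case .scalarValue(let v) = decoder.decode(&bytes) { return v }
        return nil
    }
}
```

Note the built-in trade-off: replacing a code point of a different byte length stays O(1) in the primary array, but may still require inserting into or deleting from the secondary array.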