I have some old HTML files, downloaded and saved to disk around 2004. The text content is Greek, but in various encodings, and I want them all converted to UTF-8. Some declare the charset in a <meta> tag; some don't.
All of them render fine if I open them in Firefox.
On Linux, I use two methods to detect the encoding from the command line:
file -i file.html
and
enca -Lnone file.html
After that I convert them to UTF-8 with:
iconv -f fromenc -t 'UTF-8' file.html > fixed.html
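(The same conversion step can be scripted with nothing but the Python standard library; this is a sketch, where the source-encoding name passed in is whatever the detector reported.)

```python
from pathlib import Path

def convert(src, dst, from_enc):
    """Re-encode a file to UTF-8, like `iconv -f from_enc -t UTF-8 src > dst`."""
    text = Path(src).read_bytes().decode(from_enc)
    Path(dst).write_text(text, encoding="utf-8")
```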
However, some of the output files do not look correct, either on the command line or in Firefox. The corresponding input files render correctly when opened in Firefox, which reports their encoding as windows-1253 (via Firefox's Tools -> Page Info menu). However, both enca and file report the encoding of these input files as iso-8859-1. Is this detection incorrect?
If I convert them to UTF-8 with iconv using this iso-8859-1 as the from-encoding, I get Cyrillic characters. But when I convert them with windows-1253 as the from-encoding, they look OK both on the command line and in Firefox.
What's more, those files declare iso-8859-1 as the charset in their <meta> tag, and Firefox still renders them fine (and still reports the windows-1253 encoding via the menu).
So I am looking for a more reliable method of detecting the encoding of these files, either on the Linux command line or via JavaScript, Python or Perl.
Here is example content from such an input file, whose encoding Firefox reports as windows-1253 (while its meta tag, file -i and enca all report it as iso-8859-1):
Δεν είναι μόνο
and as a hexdump (hexdump's default output groups bytes into 16-bit words, so each pair of bytes appears swapped):
e5c4 20ed dfe5 e1ed 20e9 fcec efed 000a
and wrongly rendered, with Cyrillic characters, after the iconv conversion from iso-8859-1:
ƒен еянбй мьнп
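The hexdump above makes the mismatch easy to reproduce in Python; the byte string below is the sample line with hexdump's 16-bit words unswapped back into file byte order (trailing newline dropped):

```python
# bytes of the sample text, taken from the hexdump in the question
raw = bytes.fromhex("c4e5ed20e5dfede1e920ecfcedef")

greek = raw.decode("cp1253")        # what Firefox does -> 'Δεν είναι μόνο'
latin = raw.decode("iso-8859-1")    # what the detectors suggest -> 'Äåí åßíáé ìüíï'
print(greek)
print(latin)
```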
Encoding detection is not a solved problem – and it is doubly hard when there is incorrect information around, such as the wrong charset declared in the meta tag of some of your files.
You are probably better off building a small Python script to handle your particular case.
The advantages are:
– Python has a simple interface for reading files in one encoding and writing them out in another;
– the Python string API and other common tools make it easy to find out whether you got an unexpected encoding when reading, and to try another one.
What will require some testing on your part is this: there is no byte that fails to decode as "iso-8859-1", so a wrong guess will never surface as an error. That charset includes accented Latin letters, plus Ð, Ç, µ and a few mathematical symbols – but no Cyrillic (and no Greek). Decoding as iso-8859-1 won't ever raise an error, because it is a "charmap" encoding: there is exactly one character defined for each of the 256 byte values.
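That "never raises" property is easy to verify: all 256 byte values map to a character under iso-8859-1, whereas a multi-byte encoding like UTF-8 rejects invalid sequences:

```python
every_byte = bytes(range(256))
text = every_byte.decode("iso-8859-1")   # always succeeds: one char per byte
assert len(text) == 256

try:
    b"\xc4\xe5\xed".decode("utf-8")      # Greek-codepage bytes from the sample
except UnicodeDecodeError:
    print("not valid UTF-8")             # 0xc4 starts a sequence 0xe5 can't continue
```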
You could try decoding from "cp1253" first and check whether the Greek letters found form meaningful words; that would require a spell-checking word list to match against.
A better idea is to decode as iso-8859-1 and check whether there is any run of 4 or more consecutive non-ASCII characters – the Western Latin languages use these characters sparingly, and words like "ìüíï" don't exist (that string is the result of incorrectly decoding μόνο as iso-8859-1).
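A minimal sketch of that heuristic, run against the sample bytes from the question:

```python
import re

raw = bytes.fromhex("c4e5ed20e5dfede1e920ecfcedef")   # sample content
as_latin1 = raw.decode("iso-8859-1")                  # 'Äåí åßíáé ìüíï'
runs = re.findall("[\x80-\xff]{4}", as_latin1)        # runs of 4 non-ASCII chars
if runs:  # e.g. ['åßíá', 'ìüíï'] -- implausible in a Western Latin language
    print(raw.decode("cp1253"))                       # 'Δεν είναι μόνο'
```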
So, for your case, a program that first tries utf-8, on error falls back to iso-8859-1, and, if that result contains any such suspicious runs, retries with cp1253, could look like this (don't be scared by the code size – most of it is comments):
import re
import sys
from pathlib import Path

def writeout(data, filepath, output_dir):
    (output_dir / filepath.name).write_text(data, encoding="utf-8")

def doit(filepath, output_dir):
    raw = filepath.read_bytes()
    try:
        data = raw.decode("utf-8")
    except UnicodeDecodeError:
        pass
    else:  # no error using utf-8
        writeout(data, filepath, output_dir)
        return
    # iso-8859-1 is a charmap encoding: every byte maps to a
    # character, so this decode can never raise
    data = raw.decode("iso-8859-1")
    # check if there are any occurrences of 4 or more run-together
    # non-ASCII chars in the 128-255 range:
    if not re.findall("[\x80-\xff]{4}", data):
        # ok, no implausible sequences of 4 or more accented characters:
        # iso-8859-1 was probably right
        writeout(data, filepath, output_dir)
        return
    # otherwise assume mislabeled Greek text and use "cp1253":
    writeout(raw.decode("cp1253"), filepath, output_dir)

def main(input_dir, output_dir):
    output_dir = Path(output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)
    print(f"converting files from {input_dir} - outputting to {output_dir}")
    for filepath in Path(input_dir).iterdir():
        if filepath.is_file():
            doit(filepath, output_dir)

if __name__ == "__main__":
    if len(sys.argv) < 3:
        print(f"Usage: python3 {__file__} <input-dir> <output-dir>", file=sys.stderr)
        sys.exit(1)
    main(sys.argv[1], sys.argv[2])