Kairos

  • 98 Posts
  • 3.23K Comments
Joined 1 year ago
Cake day: December 26th, 2023

  • UTF-8 text is inherently wasteful.

    Say you have binary data and you want to encode it with UTF-8. For simplicity, let’s say the spec only goes up to 2^16 codepoints.

    Now each one of these codepoints (a unique character) could be encoded in 2 bytes directly, but because UTF-8 encoding is inherently wasteful, it needs more bytes than that on average (up to 3 for codepoints in this range). One reason is that UTF-8 guarantees a valid string never contains a null byte (eight 0 bits, byte aligned), which is useful for things like filenames, databases, etc. This means that some bit strings are nonsensical to UTF-8. The majority of them, actually.
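
    To make that concrete, here’s a quick Python sketch that brute-forces the simplified 2^16 spec above (surrogates are skipped because UTF-8 can’t encode them; the exact ratio is just illustrative):

    ```python
    # Compare UTF-8 against a fixed 2-bytes-per-codepoint encoding
    # for every codepoint below 2^16.
    total_utf8 = 0
    total_fixed = 0
    for cp in range(1, 0x10000):
        if 0xD800 <= cp <= 0xDFFF:
            continue  # surrogates aren't encodable in UTF-8
        encoded = chr(cp).encode("utf-8")
        assert b"\x00" not in encoded  # the null-byte guarantee in action
        total_utf8 += len(encoded)
        total_fixed += 2

    print(total_utf8 / total_fixed)  # ~1.48: nearly 3 bytes per codepoint on average
    ```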

    It’s the same thing with English text, except instead of 1s and 0s we have letters and punctuation. English uses multiple letters per syllable, and certain combinations of letters are nonsensical even though they still form a valid string. It’s inherently wasteful, but it’s nice for reading.

    Gzip compression will minimize the effects of both of these. Although, because of the laws of entropy, you will always need to store some kind of information that lets you decompress it back into the original English text + UTF-8 string.

    Basically, it’s a fancy computer science way of storing UTF-8 English text in fewer bytes, using “this is UTF-8 text” and “this is English text” plus some information to untangle it all. It isn’t literally stored that way, though. It’s all just bits to gzip, both input and output. Both UTF-8 and English text inherently create patterns, and gzip compresses away patterns. Rather well, too.
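
    You can watch gzip do this in a couple of lines of Python (the repeated sentence is just a stand-in for real prose, so the ratio is exaggerated, but the effect is the same):

    ```python
    import gzip

    # Patterned English text in UTF-8: gzip squeezes out the redundancy.
    english = ("the quick brown fox jumps over the lazy dog. " * 200).encode("utf-8")
    print(len(english), "->", len(gzip.compress(english)))  # 9000 -> a few hundred bytes
    ```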

    This also means that random data is incompressible, because there’s no pattern. Unless you compress lossily, which is the only reason Internet video streaming works as well as it does.
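
    Run the same two lines on random bytes and gzip gives up; the output is actually a touch bigger, because gzip adds its own headers:

    ```python
    import gzip, os

    # No pattern to remove: "compressed" random data doesn't shrink.
    noise = os.urandom(100_000)
    print(len(gzip.compress(noise)))  # ~100 KB, slightly larger than the input
    ```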