February 09, 2021

Falsehoods Programmers Believe About Plain Text

Yes, the title is hyperbolic. It's also standard, following a pattern set by many examples. A less formulaic and clickbaity explanation: this is a list of assumptions that programmers have sometimes relied on in their code. The assumptions allowed the code to be simpler and more efficient, right up until the code broke upon encountering a counter-example. Bugs in real systems have been caused by all of these assumptions.

All of these assumptions are wrong.

    Non-technical

  1. The Latin alphabet has 26 letters.

    Except they all have an uppercase form and a lowercase form, so there are really 52 different letters.

  2. Ignoring case, the Latin alphabet has 26 letters.

    What about á, ç, è, ǐ, ñ, ô, ü, etc.?

  3. Ignoring case and accents, the Latin alphabet has 26 letters.

    What about ø, ŋ, ł, ı, ħ, đ, ð, etc.?

  4. Yes, but ignoring variants, the Latin alphabet has 26 base letters.

    What about ligatures such as æ and œ, and digraphs such as ch, ng, and th, which are treated as base letters in some languages?

  5. Seriously, ignoring variants and combinations, the Latin alphabet has 26 base letters.

    English and several other languages used to have the letter thorn (Þ, þ). Although it has been replaced by the digraph th in most languages, modern Icelandic still uses it. Technically, this letter isn't from the Latin alphabet because it was borrowed from a runic alphabet. But the Icelandic alphabet is still a Latin alphabet, and contains this letter.

  6. Modern English doesn't use any of those except case.

    English has borrowed lots of words from languages which do use those, especially French, and hasn't always dropped the foreign bits. Borrowed words such as "exposé" and "résumé" would be confused with native words if the accents were removed. And let's not forget that proper names are commonly spelled with their native accents and ligatures intact.

  7. Ignoring foreign names and borrowings, Modern English doesn't use any of those except case.

    English used to use a diaeresis to mark a vowel pronounced separately from its neighbor, as in "Noël", "coöperate", and "naïve". Also, very rarely, a grave accent was used in English to mark a vowel as being non-silent, as in the adjective "learnèd". In some scattered places these traditions are preserved, despite the lack of a diaeresis key or a grave accent key on standard US keyboards.

  8. There's no such thing as "the English/French/German/Spanish/etc. alphabet": they use the Latin alphabet.

    Except they're all different from each other.

  9. There's no such thing as "the English alphabet": English uses the Latin alphabet.

    Technically, the modern English alphabet is identical to the basic Latin alphabet.

  10. Accented letters are never used as distinct letters in an alphabet.

    Counterexamples: Ñ in the Spanish alphabet, five letters in the Romanian alphabet.

  11. Ligatures are never used as distinct letters in an alphabet.

    Counterexample: Æ in the Danish/Norwegian alphabet.

  12. Digraphs are never used as distinct letters in an alphabet.

    Counterexample: seven letters in the Welsh alphabet.

  13. Trigraphs are never used as distinct letters in an alphabet.

    Counterexample: dzs in the Hungarian alphabet.

  14. Letter variants used in an alphabet always immediately follow their base letter in the alphabetic order.

    Counterexample: the Swedish alphabet, where all the letter variants are at the end of the alphabet.

  15. Every alphabet derived from the Latin alphabet puts the base letters in the same order.

    Counterexamples: in the Estonian alphabet Z is between S and T, and in the Hawaiian alphabet all the vowels come first, then the consonants.

  16. The alphabet of each language is fixed and unchanging.

    Counterexample: capital ẞ was declared a new official letter of the German alphabet in 2017. There's still some ambivalence about whether accented characters and ligatures are truly part of the German alphabet or not.

    There's often a historical progression from digraph to ligature to base letter, or from digraph to accented letter to base letter. The only remaining sign that W used to be a ligature is its name.

    Several European languages, including English, used to have a "Long s" derived from Roman cursive writing: s in the middle of a word was joined with the following letter, while s at the end of a word wasn't, so they looked different enough to develop into two different letters. (The usage rules were more complicated than that, but that's the essential origin.) In English the mid-word variant was eventually replaced by the word-final variant. In German the mid-word variant merged with s to form the ligature ß. In a parallel development in Greek, the letter σ is written ς at the end of a word. In all three cases, the corresponding upper-case letter was written the same way no matter where it was in a word.

  17. Latin alphabets are used to write every language.

    Also known as "Everyone both knows the Latin alphabet and knows how to Romanize their own language."

    Some other widely used alphabets are Cyrillic, Hangul, Armenian, Greek, and Georgian.

  18. All writing systems are alphabets.

    Consult a list of writing systems by number of users: the top ten contain only three alphabets, and cover five major types of writing system.

  19. All writing systems have at most a few dozen characters.

    Syllabaries are a type of writing system where every syllable has its own symbol, instead of every phoneme. They typically have 50 to 500 unique symbols.

  20. All writing systems have at most a few hundred characters.
  21. All writing systems have a fixed inventory of characters.

    The Chinese, Ancient Egyptians, and Mayans, among others, use or used different characters for each word and affix. These logographic writing systems are open: new characters are continuously being invented to write new words. Full literacy is generally thought to require knowledge of three to four thousand characters, but, as in English, comprehensive dictionaries may contain tens of thousands of entries, most of which have their own character or characters. Characters also regularly fall out of use in logographic writing systems.

  22. One language won't be written using two or more writing systems.

    Counterexamples:

    • Serbian, Azeri, and several other languages are written in either Cyrillic or Latin.
    • Uzbek is written in Cyrillic, Latin, or Arabic script.
    • Hindustani is usually written using Devanagari in India and using Arabic script in Pakistan. Programmers aren't the only ones confused by this: some western sources refer to Hindi (Hindustani written with Devanagari) and Urdu (Hindustani written with Arabic) as separate languages, even though the spoken forms are mutually intelligible.
  23. Mutually unintelligible spoken languages can't use a mutually intelligible writing system.

    The same sources which call Hindi and Urdu different languages also tend to call "Chinese" one language, because Mandarin, Wu, Min, Yue (a.k.a. Cantonese), etc. are all written with the same writing system. It's like lumping all the Romance languages together as "Latin". To be fair: the written forms are mutually intelligible, mostly, even though the spoken languages are as different from each other as the Romance languages are. This is the big advantage of logographic writing systems. (If you're having trouble understanding this, mathematical notation is technically a logographic writing system: "1 + 1 = 2" means the same thing no matter which language you pronounce it in.)

  24. Authors won't need to quote text using other writing systems.

    Counter-examples: language teaching materials, multi-lingual manuals, annotated foreign literature, and scholarly articles about writing systems, languages, history, or archeology. Mathematics texts also count, since mathematical notation is technically a different writing system, and so, of course, do articles about the history of mathematics.

  25. One mono-lingual text won't contain multiple writing systems.

    Japanese mixes four: characters borrowed from Chinese (kanji) for many word roots, a syllabary (hiragana) for native Japanese words, affixes, and grammatical particles, a second syllabary (katakana) for onomatopoeia and foreign borrowings, and Latin characters for acronyms and foreign names.

  26. Characters go from left to right in horizontal lines and lines go from top to bottom on a page.

    Counter-examples: Arabic and Hebrew characters go from right to left in horizontal lines. Traditionally Chinese, Japanese, and Korean characters go from top to bottom in vertical lines from right to left on a page, though they're flexible enough to use other directions. Mongolian uses vertical lines going from left to right on a page. Bottom to top characters and lines are rare, but there are some obscure examples. See Wikipedia for more, including boustrophedon and mirror text.

  27. Quoted text is always written in the same direction as surrounding text.

    Nope. It's displayed in the quote's native direction, even if that's different from the surrounding text. This can get gnarly in multi-lingual, multi-level nested quotes, not to mention when horizontal text is quoted in vertical text or vice-versa.

    More evidence that mathematical notation is a different writing system: digits are read left-to-right even in right-to-left writing systems.

    Technical

  28. Characters are bytes (or ASCII + code page).
  29. Characters are two bytes (or UTF-16 code units).
  30. Characters are integers (or Unicode code points).
  31. Characters are the basic parts of a writing system (or graphemes).
  32. Characters in <programming language> are <one of the above>.

    In the beginning was ASCII, and every grapheme was encoded by one byte, so there was no difference between byte, code unit, code point, and grapheme: they were all called "character". As time passed, all four things became different from each other, yet "character" continued to refer to all of them, and there was much confusion.

    All of the programming languages below have built-in data types or libraries available for working with all four things. They are classified here based on the most common built-in data type named "character" or "char", or, if no such type exists, on what's counted by the most common string length function. (One of many sources.)

    Characters are bytes:
    • C
    • C++
    • Go
    • Lua
    • PHP
    • Ruby

    Characters are UTF-16 code units (two bytes each):
    • C#
    • Java
    • JavaScript
    • Objective-C
    • Python 3.2 and earlier "narrow" builds
    • Visual Basic

    Characters are Unicode code points:
    • Perl 5
    • Python 3.3+, 3.2 and earlier "wide" builds
    • R

    Characters are graphemes:
    • Perl 6 (since renamed Raku)
    • Swift

    JavaScript example: '\u{1f41b}'.length
    Result: 2

    The example above constructs a string from a single Unicode code point (the BUG emoji), but the default string is a list of UTF-16 code units, so the default length function reports the number of two-byte code units in the string. Two-byte code unit strings are particularly insidious because almost all characters are represented by one code unit, so counter-examples are rarely encountered.
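
    The same distinctions are easy to see in Python, whose built-in strings count code points. A minimal sketch (counting graphemes needs a third-party library, e.g. the regex module's \X pattern):

        s = '\U0001F41B'                         # the BUG emoji: one grapheme
        print(len(s.encode('utf-8')))            # 4 bytes
        print(len(s.encode('utf-16-le')) // 2)   # 2 UTF-16 code units
        print(len(s))                            # 1 code point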

    Note that databases also vary in this way, so a string column with a maximum length might not be using the same definition of "length" as your programming language.

  33. Text files can be opened and processed without an encoding.

    Most programming languages appear to provide a way to do this, but they're really opening files using a default encoding set by the operating system, compiler, interpreter, or virtual machine. This often leads to programs which behave differently on different computers (e.g. a file is saved on one computer, emailed to someone else, and corrupted when opened using the same software on a computer with a different default encoding), so now it's strongly recommended that you explicitly set an encoding either for the whole application or for every text file you touch.
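
    In Python, for instance, open() falls back to a platform-dependent default encoding unless told otherwise. A minimal sketch (the file name is purely for illustration):

        # Pass the encoding explicitly so the program behaves the same everywhere.
        with open('notes.txt', 'w', encoding='utf-8') as f:
            f.write('naïve café')
        with open('notes.txt', 'r', encoding='utf-8') as f:
            assert f.read() == 'naïve café'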

  34. The encoding of plain text can be guessed.

    Unfortunately, there are dozens of different encodings in common use, each of which maps the same patterns of bits to different characters. 95% of problems with encodings seem to come from software trying to decode text using the wrong encoding, usually producing mojibake.
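
    Mojibake is easy to produce on purpose; one line of Python:

        print('café'.encode('utf-8').decode('latin-1'))   # prints 'cafÃ©'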

  35. The encoding of plain text can be discovered by examining the text.

    Plain text does not contain a simple message stating its encoding. (Rich text, e.g. HTML, PDF, MS Word files, etc., should and sometimes does.) 100% reliable automatic charset detection is impossible in principle, and highly reliable charset detection seems to be impractical in practice too.

  36. Text in a database doesn't have an encoding.
  37. Text in a database has the same encoding as the rest of the system.

    Every database has an encoding that's used for all its text. Libraries used to access databases usually deal with this transparently (e.g. automatically encoding text going into the database and decoding text coming out of the database). Difficulties can occur when an application and a database are using incompatible encodings, or when a low-level programmer assumes they're using the same encoding when they aren't.

  38. Unicode has an elegant and harmonious design, otherwise it wouldn't be the most widely used encoding.

    Unicode is not, technically, an encoding, it's a standard which includes some encodings. UTF-8 and UTF-16 are Unicode encodings of the Unicode character set.

    The primary goal of Unicode is to make it easy to convert text from any other encoding into a Unicode encoding and back again without loss of information. The Unicode standard describes a lot of design principles which they would like to follow (chapter 2, section 2.2), but in practice some encodings and character sets break these principles, so Unicode has incorporated into itself all the design flaws of every text encoding system ever created. Even when encodings and character sets are designed without internal flaws, different encoding systems have used incompatible design principles. So to succeed in its primary goal, Unicode has incorporated several incompatible systems into itself, as well as all the flaws of the individual systems. This has resulted in a combinatorial explosion of interactions between the different systems. Really, it's a little surprising that Unicode usually just works.

  39. All bytes are characters.
  40. All sequences of bytes are strings.

    The old "ASCII + code page" system had the enviable property that all bytes were characters and all characters were bytes. When Unicode was conceived, the original plan was just to replace each byte with two bytes, so every two-byte code-unit would be a character and every character would be two bytes. Surely 65,536 characters would be enough for everyone? This encoding system is now referred to as "UCS-2", but was originally just called "Unicode". A lot of the systems which use two-byte code units internally were conceived of during this time, notably Java, JavaScript, and Windows.

    Once it became clear that 65,536 characters would not be enough, thanks to the requirement to be compatible with all pre-existing character sets, rather than force everyone to switch to a 4-byte code unit (what a waste of space!) variable-length encodings were introduced. UTF-8 was designed this way from the ground up, but UTF-16 is essentially a hack to add variable-length encoding to UCS-2 so systems using two-byte code units wouldn't also have to be redesigned from the ground up. The hack was to reserve two groups of 1,024 characters each as "high surrogates" and "low surrogates" to encode 1,024 x 1,024 = 1,048,576 more characters as pairs of two-byte code units. The UTF-16 hack is why Unicode is limited to 65,536 + 1,048,576 = 1,114,112 code points, making U+10FFFF the largest legal code point.
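
    The surrogate arithmetic fits in a few lines of Python (the helper function is mine, purely for illustration):

        def to_surrogate_pair(cp):
            # Only code points above U+FFFF are encoded as surrogate pairs.
            assert 0x10000 <= cp <= 0x10FFFF
            cp -= 0x10000                     # 20 bits remain
            high = 0xD800 + (cp >> 10)        # top 10 bits select the high surrogate
            low = 0xDC00 + (cp & 0x3FF)       # bottom 10 bits select the low surrogate
            return high, low

        print([hex(u) for u in to_surrogate_pair(0x1F41B)])   # ['0xd83d', '0xdc1b']
        print('\U0001F41B'.encode('utf-16-be').hex())         # 'd83ddc1b', the same pair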

    Variable-length encodings make it impossible to maintain the 1-to-1 relationship between code-units and characters, and the UTF-16 hack creates some additional issues for UTF-8 and other Unicode encodings. Here's a list of common issues:

    • Out-of-range code points, e.g. anything > U+10FFFF could technically be encoded in UTF-8 but cannot be encoded in UTF-16.
    • Truncated characters, e.g. a high surrogate not followed by a low surrogate, or a UTF-8 start byte not followed by enough continuation bytes.
    • Decapitated characters, e.g. a low surrogate not preceded by a high surrogate, or isolated or excess UTF-8 continuation bytes.
    • Surrogates in anything except UTF-16, e.g. when converting a surrogate pair from UTF-16 to UTF-8, the result should be a single UTF-8 character, not two surrogates.
    • Overlong encodings, e.g. anything under U+80 should be encoded as one byte in UTF-8, not 2, 3, or 4.

    Most of these things weren't illegal in the original specifications, so implementations which accidentally - or in some cases intentionally - emitted now-invalid characters and strings were common. As they were found to cause compatibility issues and were invalidated by updates to the specs, some systems were repaired, but others remain in such widespread use that their particular brand of invalidity has been documented and named.
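
    Modern strict decoders reject all of the above. Python's UTF-8 codec, for example:

        for bad in (b'\xc0\x80', b'\xed\xa0\x80'):   # overlong U+0000; encoded surrogate U+D800
            try:
                bad.decode('utf-8')
            except UnicodeDecodeError as e:
                print(bad, '->', e.reason)
        # Encoding a lone surrogate from a str fails too:
        # '\ud83d'.encode('utf-8') raises UnicodeEncodeError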

  41. A code point represents exactly one character.
  42. A code point represents at most two or three characters.
  43. A code point never represents a whole word, phrase, or sentence.
  44. There's a limit to the amount of text which can be represented by one code point.

    Ligatures and digraphs are part of some character sets. So are occasional weird things like Roman numerals (e.g. VIII as a single code point). Therefore Unicode has code points for them too. Technically there exists a Unicode code point which encodes the largest number of characters, but you never know what might be added in the next version. The runner-up is 8 characters in one code point (U+FDFB), a phrase in Arabic meaning roughly "may His glory be glorified". The current record-holder is 18 characters, in U+FDFA, a phrase in Arabic meaning roughly "may God honor him and grant him peace". It holds the record only because U+FDFD is specially treated as a symbol which can't be decomposed; if it weren't, that one would decompose into about 35 characters.
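
    Python's unicodedata module can verify the records via compatibility decomposition (NFKD, covered under normalization below):

        import unicodedata
        print(len(unicodedata.normalize('NFKD', '\uFDFB')))   # 8
        print(len(unicodedata.normalize('NFKD', '\uFDFA')))   # 18
        print(len(unicodedata.normalize('NFKD', '\uFDFD')))   # 1: no decomposition at all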

  45. A code point represents at least one whole character.

    Some characters have parts, for example a base letter and an accent or a base letter and an attached or overlaid mark. Some encodings encode each part as its own code point, and represent a character as a list of its parts.

  46. There's a limit to the number of code points needed to represent a whole character.

    Have you encountered Zalgo text yet? (A classic example.)

    Seriously though, there are widely used writing systems where every character has many parts, like the Brahmic scripts (used widely in and around India), which attach vowel symbols to consonant symbols, or the Korean alphabet, which combines consonants and vowels into syllable blocks. The maximum seems to be three... four... five... make that N parts per character.
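
    Korean makes a tidy Python demonstration: each syllable block is one precomposed code point that decomposes into its component jamo:

        import unicodedata
        print(list(unicodedata.normalize('NFD', '한')))   # ['ᄒ', 'ᅡ', 'ᆫ']: one character, three parts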

  47. A code point represents a character or part of a character.

    Arguable counter-examples: the space "character" and other whitespace code points, unassigned code points (which may represent a character in the future), and private-use code points (which may-or-may-not be used to represent a character).

    Counter-examples which may still have a visible effect: zero-width characters, line and paragraph markers, layout and format control characters, the replacement character.

    Definite counter-examples: control codes, UTF-16 surrogate code points, byte-order marks, and the unassigned code points which have been pre-designated as non-character code points.

  48. A code point represents something.

    There are three control codes (U+0080, U+0081, and U+0099) which were proposed in a draft standard, but discarded as ill-advised and thus were never agreed upon or implemented by anyone. But in a bit of bad luck, the draft escaped into the wild, and we're stuck with them forever.

  49. Code points are unambiguous about which character they represent.

    Typewriters saved keys by merging several similar-looking characters into one, early encodings replicated these hacks, and now we're stuck with them forever. E.g. U+002D for hyphens, dashes, and minus signs, U+0027 for apostrophes, left single quotes, and right single quotes, and U+0022 for left and right double quotation marks.

    This causes bugs through the incorrect assumption that all instances of a character represent the same thing. For example, code which treats hyphens and dashes as minus signs or vice versa, or code which treats apostrophes as single quotes or vice versa.

  50. Different code points represent different characters.

    Counter-examples: Greek capital letter Omega 'Ω' (U+03A9) and the Ohm sign 'Ω' (U+2126), Latin capital letter A with ring above 'Å' (U+00C5) and Angstrom sign 'Å' (U+212B), semicolon ';' (U+003B) and Greek question mark ';' (U+037E).

    This isn't just about the fact there are code points to unambiguously represent left and right quotation marks, while at the same time the ambiguous typewriter quotation marks still exist. Nor are these characters from different writing systems which happen to be visually identical. They're examples of the same characters being encoded in different ways depending on how they're used, e.g. in a word vs. as a symbol. Some may argue these are, in fact, distinct characters, but in most character sets they aren't. Even if they are, see the next entry.
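
    As it happens, Unicode treats these symbol code points as canonical duplicates: they normalize (see entry 53) to the letters. A quick check in Python:

        import unicodedata
        print('\u2126' == '\u03a9')                                 # False: OHM SIGN vs Omega
        print(unicodedata.normalize('NFC', '\u2126') == '\u03a9')   # True after normalization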

  51. A character can be represented in one and only one way.

    Counter-example: Ą́ (U+0041 U+0328 U+0301) and Ą́ (U+0041 U+0301 U+0328)

    The parts of some characters can be encoded in different orders. In this case, the accent and the hook may be encoded in either order. In some writing systems, almost all characters have multiple parts and this issue is common.

  52. Strings with different lengths can't be equal.

    Counter-example: Á (U+0041 U+0301) and Á (U+00C1).

    Parts can be encoded separately, or each combination of parts can get its own code point. The latter makes sense when there are only a few legal combinations of parts, but the former can be more efficient when many combinations are possible.

  53. Text can be processed without normalization.

    Many character sets are designed so any given character can be encoded in one and only one way. However, because Unicode's primary goal is to make it equally easy to convert text from any character set into Unicode, it necessarily deals with character sets which were not designed this way. Even if every character set were designed this way, Unicode would still have multiple ways to encode the same character, because different character sets use incompatible strategies. For example, when one character set encodes 'ñ' as a single code point and another as 'n' + ' ̃' then Unicode must allow both ways, or else fail its primary goal.

    In other words, in Unicode there are often several different ways to encode the same character. This makes the algorithm to decide whether two Unicode strings are equal or not much more complicated.

    To deal with this, the Unicode standard defines several "normalization forms", each of which uses one and only one way to encode each character, and an algorithm ("normalization") to convert an arbitrary string into one of the standard forms. Before comparing two strings, you must convert them both to the same normalization form or else the comparison won't work correctly on some pairs of strings.
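
    In Python, for example, normalization lives in the standard unicodedata module. A minimal sketch using the counter-example from entry 52:

        import unicodedata
        a = 'A\u0301'    # LATIN CAPITAL LETTER A + COMBINING ACUTE ACCENT (two code points)
        b = '\u00C1'     # LATIN CAPITAL LETTER A WITH ACUTE (one code point)
        print(a == b)                                # False: different code point sequences
        print(unicodedata.normalize('NFC', a) ==
              unicodedata.normalize('NFC', b))       # True: the same character after normalization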

  54. Canonical normalization of text isn't necessary.

    The main problem with characters which can be encoded in multiple ways is that when you display them, they look exactly alike but behave differently in a lot of ways. For example, identical strings may be sorted into different positions in a list, de-duplication won't remove all duplicates, and search will return some results but not others. Converting every string which enters your system into a canonical normalization form avoids all these problems.

  55. Compatibility normalization of text isn't necessary.

    Most character sets are designed to leave the exact form of each character to some other layer(s) of the system. Details like size, serifs, style (e.g. italic or bold), subscripts and superscripts, ligatures, and so on, are controlled by HTML tags, CSS, fonts, text markup formats, and so on. However, some character sets don't adhere to this practice and include, for example, different code points for the numeral 2 and superscript 2, different variants of a character for use at the beginning, middle, or end of a word, or color variants of a character.

    Naturally, Unicode includes all of them.

    Compatibility character variants can cause many of the same problems as canonically identical characters, except the differences between these characters are (usually) visible as well. Converting strings to a compatibility normalization form solves these problems, although it must only be done behind the scenes since important visual differences are removed. This is analogous to converting a set of strings to lowercase (or uppercase) for case-insensitive sorting or matching.
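
    In Unicode's terms the compatibility forms are NFKC and NFKD. A small Python illustration, using the superscript 2 mentioned above; note how the compatibility form erases a visible distinction that the canonical form preserves:

        import unicodedata
        print(unicodedata.normalize('NFC', 'x\u00B2'))    # 'x²': canonical form keeps the superscript
        print(unicodedata.normalize('NFKC', 'x\u00B2'))   # 'x2': compatibility form flattens it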

  56. Compatibility normalization fixes all problems with look-alike characters.

    Compatibility normalization only renders two versions of the same character identical. It doesn't merge similar-looking characters from the same writing system (e.g. 1iIlL or oO0) or identical-looking characters from different writing systems, e.g. Cyrillic О and Greek Ο look just like Latin O (U+041E, U+039F, and U+004F respectively), as do some other uppercase letters (ABCEHIKMOPTX).

  57. Concatenating normalized strings results in a normalized string.

    Beware cases where the first code point in the second string represents part of a character. However, only the characters immediately adjacent to the join ever need to be renormalized, so concatenating normalized strings can at least be done efficiently, unless you're dealing with Zalgo text.
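
    A short Python illustration (unicodedata.is_normalized requires Python 3.8 or later):

        import unicodedata
        a = 'e'         # normalized on its own
        b = '\u0301'    # a lone combining acute accent is also normalized on its own
        print(unicodedata.is_normalized('NFC', a + b))   # False: NFC composes the pair into 'é'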

  58. Changing the case of a normalized string results in a normalized string.

    Counterexample: ǰ̣ (U+01F0 U+0323) uppercases to J̣̌ (U+004A U+030C U+0323), whose canonical normal form is J̣̌ (U+004A U+0323 U+030C).
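
    Python, which implements the full Unicode case mappings, reproduces this counterexample:

        import unicodedata
        s = '\u01F0\u0323'                                   # ǰ + combining dot below, already in NFC
        print(unicodedata.is_normalized('NFC', s))           # True
        print(unicodedata.is_normalized('NFC', s.upper()))   # False: the marks are now mis-ordered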

  59. Strings don't need to be normalized before changing their case.

    Counterexample: ancient Greek's iota subscript can cause trouble. E.g. ᾷ (U+03B1 U+0345 U+0342) uppercases to ΑΙ͂ (U+0391 U+0399 U+0342). The normal form would keep U+0342 on the Alpha. (Admittedly, this is an extremely narrow edge case.)

  60. Text can be processed without a locale.

    Also known as: "None of our code is internationalized."

    To a first approximation, a "locale" is a code specifying the language of a text. However, there are also well-documented differences in how the same language is used in different places (e.g. between British and American English), so locale optionally encodes regional dialects (e.g. en-GB for English as used in Great Britain and en-US for English as used in the United States). The writing system used may also be encoded, e.g. sr-Cyrl or sr-Latn for Serbian in Cyrillic or Latin. Locale codes can get even more specific, e.g. de-DE-u-co-phonebk is German (Deutsch) in Germany (Deutschland) with the Unicode sorting algorithm (collation) for German phonebooks, which differs from the sorting algorithm used in German dictionaries.

    Most computers have a default locale along with a default encoding. In many programming languages, text processing algorithms use the computer's default locale, but can optionally use a locale passed in explicitly. This causes the same problems as using a default encoding: programmers forget the default exists, and bugs occur when text is transferred between computers with different default locales.

    In effect, the default encoding and default locale are hidden global variables, with all the associated problems. Text processing functions which do not accept an encoding or a locale should use a global constant encoding and locale.

  61. Locale isn't necessary for changing case.

    In Turkish and Azeri, the uppercase of 'i' is 'İ' and the lowercase of 'I' is 'ı'. In Lithuanian, the lowercase of 'Ĩ' isn't 'ĩ', it's an 'i' with a tilde above the dot, with similar rules for other accents on both 'i' and 'j'. Both of these could be considered simplifications of the weird exceptional way the dot on 'i' and 'j' is treated in most languages.

    In other parts of Unicode, problems of this sort are handled differently. E.g. D with stroke, Eth, and retroflex D are uppercase characters which look identical but have different lowercase mappings. It's as though there were unique Turkish I and i characters instead of the current situation where I and i have different properties depending on the locale.

    An example of how this can go wrong, in Java: languageCode.toLowerCase()

    Normally this works, and an uppercase language code like "EN" gets converted to "en". However, toLowerCase() uses the system's default locale, so when this software is run on a Turkish system, suddenly "IT" (Italian) becomes "ıt". The solution is to explicitly specify a Locale, so the same thing will happen on every system: languageCode.toLowerCase(Locale.ROOT)

  62. Locale isn't necessary for sorting and searching text.

    Counterexample: German sorts 'ä', 'ö', and 'ü' either as if the diacritic wasn't there (for regular words, e.g. in dictionaries) or as if they were 'ae', 'oe', and 'ue' respectively (for names, e.g. in atlases and telephone books).

    More generally, different languages alphabetize the same letters in different orders and treat different letters as equal or unequal during search.

    Beyond locale, there are all sorts of special cases for sorting and searching text. You can't rely on default string equality and comparison operators.

    • bird < Bird < birds, cafe < café < cafes (case- and accent-insensitive search and sort)
    • "page 1" < "page 2" < "page 10" < "page 20" (lexicographic vs numeric, especially when they're in the same text)
    • French: cote < côte < coté < côté (can't just compare one letter at a time)
    • Japanese: カー < カア, but キア < キー (non-alphabetic writing systems may have exotic-looking rules that make sense in context)

    Chinese character strings are often sorted phonetically. The same character can have different pronunciations in different parts of the same string, and in different languages and dialects, so typically an independent phonetic representation of each string is required.
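
    Most languages delegate locale-aware comparison to ICU. A sketch using the third-party PyICU bindings (pip install PyICU) and the French example above; note that whether a locale applies the traditional French backwards accent weighting varies by CLDR version and region (fr-CA does by default), so treat the exact order as illustrative:

        import icu                                        # third-party PyICU bindings
        words = ['côté', 'coté', 'côte', 'cote']
        collator = icu.Collator.createInstance(icu.Locale('fr_CA'))
        print(sorted(words, key=collator.getSortKey))     # accent-aware, locale-specific order
        print(sorted(words))                              # naive code point order, wrong for French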

  63. Locale isn't necessary for splitting text into characters.

    Digraphs such as 'ch' in Czech and Slovak, 'ij' in Dutch, or 'ng' in Tagalog have their own keyboard keys, occupy their own spots in their alphabets, and must not be split despite being encoded in the same way as text which must be split in other locales.

  64. Locale isn't necessary for splitting text into words.

    Some languages don't use whitespace between words. Closer to home, Early Modern English and Swedish use colons instead of apostrophes in contractions, and English considers contractions to be one word while French considers them separate words.
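
    Locale-aware word segmentation is usually delegated to ICU as well. A sketch using the third-party PyICU bindings (pip install PyICU) and Thai, which writes no spaces between words; the segmentation comes from ICU's dictionary, so treat the output as illustrative:

        import icu                                       # third-party PyICU bindings
        text = 'สวัสดีครับ'                              # a Thai greeting, written with no spaces
        bi = icu.BreakIterator.createWordInstance(icu.Locale('th'))
        bi.setText(text)
        edges = [0] + list(bi)                           # iterating yields boundary offsets
        print([text[i:j] for i, j in zip(edges, edges[1:])])   # e.g. ['สวัสดี', 'ครับ']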

  65. Locale isn't necessary for line-breaking.

    ... unless you want to know where hyphens can be inserted into words. This isn't just about differences between languages: even within English, en-GB is stricter about where hyphens are allowed than en-US, while in East Asian text line breaks may occur between almost any two characters - even in the middle of Latin words embedded in the text. Another example is Korean, which uses different line-breaking styles in formal and informal documents.

    There's also the issue of knowing when it's safe to remove hyphens from words. Consider the difference between "resort" and "re-sort".

  66. Locale isn't necessary to quote text.

    There are a bewildering number of different quotation marks, not to mention different conventions for using the same quotation marks.

  67. Locale isn't necessary for punctuation marks.

    Also known as: "Punctuation marks are cross-linguistic, so I don't have to internationalize/translate them."

    Just one counter-example: French often places narrow non-breaking spaces between punctuation marks and words, e.g. between a colon and the preceding word, or between quotation marks and the adjacent quoted words. There are many, many more counter-examples, including languages which use particular punctuation marks for completely different purposes, languages which use only a subset of the "standard" Latin punctuation marks, and languages which use completely unfamiliar punctuation marks.

  68. There are two cases: upper-case and lower-case.

    Thanks to ligatures (one character representing two or more letters), there's a third case, title-case, where the first letter of the ligature is upper-case and the rest are lower-case. Technically there could be a variety of other upper-case/lower-case combinations, but title-case is the only one which has made it into Unicode. (So far!)

    Also, many writing systems and individual characters are unicase, lacking case entirely.
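
    Python exposes all three case mappings, so the digraph characters make a compact demonstration:

        s = '\u01C6'         # ǆ, the dž digraph encoded as one character
        print(s.upper())     # 'Ǆ' (U+01C4), upper-case
        print(s.lower())     # 'ǆ' (U+01C6), lower-case
        print(s.title())     # 'ǅ' (U+01C5), title-case: capital D, small ž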

  69. There's a one-to-one correspondence between upper- and lower-case characters.

    Also known as "This is guaranteed to fit back in the database."

    Counterexample: the upper-case of German 'ß' is 'SS'. ('ß' is a ligature of a long s and a "round" s. In 2017 an upper-case ligature, ẞ, was officially adopted.)

    Counterexample: the lower-case of Greek 'Σ' is 'ς' at the end of a word and 'σ' elsewhere.
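
    Both counterexamples can be reproduced in Python, which implements the full, context-sensitive Unicode case mappings:

        print('ß'.upper())                    # 'SS': one character in, two characters out
        print(len('ß'), len('ß'.upper()))     # 1 2: so much for fitting back in the database
        print('ΣΊΣΥΦΟΣ'.lower())              # 'σίσυφος': the final sigma differs from the medial ones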

    By convention, all superscript and subscript letters in Unicode are considered lower-case, despite some of them looking like upper-case letters, and even when the corresponding lower-case superscript or subscript letters are also in Unicode. This makes it tricky to change the case of superscripts and subscripts.

    Other examples can be found in the IPA block, where some Unicode characters are considered lower-case despite lacking matching upper-case characters.

  70. Only letters have case.

    Counter-examples: Roman numerals (numbers), circled letters (symbols), and superscript or subscript letters (combining marks).

    Technically numerals have case, in the sense that text figures are the same height as lowercase letters and have ascenders and descenders but lining figures are all the same height as uppercase letters. However, it's rare to see both used in the same text. Even Unicode doesn't have both types. (Yet!)

    Bonus: Regular expressions

  71. [a-zA-Z] will match any letter.

    Note that accented letters, ligatures, and non-Latin writing systems exist.

  72. [0-9] will match any numeral.

    Writing systems often come with their own numeral systems.

  73. [ \t\n\r] or \s will match any whitespace character.

    Don't forget the non-breaking space, thin spaces and their non-breaking variants, other newline characters, etc. Try matching the Unicode property \p{White_Space} instead.
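
    For example, in Python, where the standard re module doesn't support \p{...} at all, the third-party regex module (pip install regex) does:

        import re, regex
        s = 'a\u00A0b\u2009c'                         # a no-break space and a thin space
        print(re.findall(r'[ \t\n\r]', s))            # []: the hand-rolled class misses both
        print(regex.findall(r'\p{White_Space}', s))   # ['\xa0', '\u2009']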

  74. \p{L} or \p{Letter} will match any letter.
  75. \p{Lu} or \p{Uppercase_Letter} will match any uppercase character.
  76. \p{Ll} or \p{Lowercase_Letter} will match any lowercase character.
  77. Matching the Unicode General_Category property is the right thing to do.

    \p{L} is actually syntactic sugar for \p{GC=L}, which in turn is equivalent to \p{General_Category=Letter}. \p{Alphabetic}, on the other hand, means \p{Alphabetic=Yes}. Note how L and Letter are on the right side of the equals sign, while Alphabetic is on the left.

    Each Unicode code point belongs to exactly one General_Category, so all the weird edge cases which rightly belong to multiple categories are arbitrarily assigned to just one of the categories they could have been assigned to. Sometimes partitioning code points like this makes sense, but Unicode provides a long list of boolean properties which handle all the edge cases for the times when you want to match everything which looks even remotely like what you want. Examples include \p{White_Space}, \p{Alphabetic} for letters, \p{Uppercase}, \p{Lowercase}, properties for various types of punctuation, and so on. Unfortunately, thanks to syntactic sugar these are easily confused with General_Category values.


September 20, 2011

New Breeding Records for Iqaluit

Tonight I'm making a short presentation to the Brodie Club, a naturalists' club that meets once a month to hear an invited speaker and observations made by its members. The September meeting is traditionally given over entirely to short presentations by the members about things they've seen over the summer (field-work season for many professional biologists).

Over the course of three days in July 2011, while doing field work for David Hussell, I observed and photographed three species of birds feeding their young near Iqaluit, all well to the north of their standard breeding range. None of the three species had previously been recorded breeding near Iqaluit.

(All the presentation's images in a single album).

White-crowned Sparrow (WCSP), Zonotrichia leucophrys:
WCSP near Iqaluit, 2011/07/23
WCSP range map, with Iqaluit marked
WCSP chick in Iqaluit, 2010/07/08
WCSP and fledgling, 2010/07/28

I saw adults feeding young in 2011 as well, but didn't get a photograph of the young that year.

Savannah Sparrow (SAVS), Passerculus sandwichensis:
SAVS near Iqaluit, 2011/07/21
SAVS range map, with Iqaluit marked
SAVS chick near Iqaluit, 2011/07/21

The adult was feeding that young. The chick is just hiding in a hollow in the tundra; that isn't the nest.

Dark-eyed Junco (DEJU), Junco hyemalis:
DEJU near Iqaluit, 2011/07/22
DEJU range map, with Iqaluit marked
DEJU with food, near Iqaluit, 2011/07/22

Again, I saw the young, and the adults feeding the young, but was unable to get a photograph of a young Junco.

Breeding records for all three species are being prepared for submission to the NWT/Nunavut Bird Checklist Survey, and may eventually be published in The Canadian Field-Naturalist.

August 24, 2011

Remote Outpost

Nunavut has an area of two million square kilometers. If it were a country, it would rank 14th largest in the world, just ahead of Mexico and Indonesia. Yet it has a population of only 33 thousand. Iqaluit, the capital of and largest population center in Nunavut, has a permanent population of less than 7 thousand.

In places that are more connected to civilization we take a lot of things for granted, like access to a variety of foods, clothing, and other consumer goods, infrastructure like roads, highways, the power grid, and running water, and an abundance of services and specialists who can be hired to do just about anything. In Iqaluit, it's quite difficult to take any of these things for granted.

During the summer, when Frobisher Bay is ice-free, cargo ships bring in everything from toys to heavy machinery. Iqaluit doesn't have a deep-water harbor, so the ships anchor off-shore and the cargo is brought in by barges to the dock. More cargo, mostly food, comes in on year-round daily flights landing on Iqaluit's WWII military-grade runway. Prices for anything not available locally tend to be somewhere between double and triple what you'd expect in the south, and the only things available locally are fish (Arctic Char, yummy!) and stone (there's a quarry for gravel, dirt, and other building materials which can be used in road-beds, and a small industry of stone-carvers creating figurines to sell to tourists).

Another thing the ships bring in is fuel: there's a pipeline leading from the shore nearest the anchorage to a group of storage tanks, then through town to the power plant. The power plant is right next to the water-treatment plant, just below a dam that creates the town's reservoir of drinking water. Water pipes are also visible throughout the town, since they're often above ground. (I got a nice photo of a pipe-bridge. The footbridge on top was just an afterthought. A little scary.)

Despite having a population of 7 thousand, Iqaluit has all the services you'd expect in a capital, and then some. There's an RCMP building, a judiciary building, the Nunavut Legislative Assembly building, separate men's and women's prisons, a campus of the Nunavut Arctic College (including dorms), a museum, a library, a cathedral and several churches, schools, a hospital, a sports arena, banks, hotels, and various government offices scattered throughout. Yet there are only two supermarkets in town, the Northern Store and Arctic Ventures.

The roads in Iqaluit are paved using local materials, if they're paved at all, and have branches extending about a kilometer from town in various directions (to a nearby park, to a quarry, etc.). The taxi service does the bulk of its business during the summer, when snowmobiles aren't the dominant form of transportation. Many taxi drivers are temporary residents hired from the south for one summer at a time.

In truth, Iqaluit wouldn't have ever developed beyond a small fishing community with a Hudson's Bay trading outpost if it hadn't been for the airport runway, which was built in 1942 as part of a route for the US to fly aircraft to the UK.

August 18, 2011

Iqaluit Signage

The Northwest Territories introduced a unique bear-shaped license plate in 1970. When Nunavut split off in 1999, both territories opted to keep the design. Recently, the NWT announced that the machinery used to manufacture the plates needs to be replaced, so both territories are taking the opportunity to update their designs. The bear-shape is popular, so it seems likely that at least one of the territories will keep it.

The flag of Nunavut is quite unusual, from a vexillological point of view. (Vexillology: the study of flags. New word!) Flags in former British colonies (including Canada) tend to follow heraldic rules, and this one doesn't at all. In particular, having white and yellow as adjacent parts of the background and having a black border around the inukshuk are quite unusual. The position of the star has also been criticized, since it tends to be concealed when a flag is hanging (the original design had it on the other side, by the flag-pole). On flags that have been outdoors for some time, sometimes one only sees the inukshuk on a light background, with the other colors faded.

Until 2003, Iqaluit had no street names. Every building in Iqaluit has a unique number, rather than the same numbers being reused on every street. The newest buildings are in the 5200s. Most streets and neighborhoods have sequential numbers; for example, the 3000s are all in Apex, the downtown core is all in the lower 1000s, and particular blocks of 100 are usually used for adjacent buildings on one or two streets. There's a city-wide taxi service (phone from anywhere, and a taxi will arrive in under 5 minutes), whose drivers are trained to memorize the locations of all the building numbers. Many of the larger buildings have a unique name as well (some in Inuktitut, some in English), so one can say "Take me to Long View, please." or "I'm going to 517A.", and no street name is needed. (Or understood: giving the street name will just get you a blank look.) Nonetheless, in 2003 street names were assigned, and street signs put up. All the street signs, including stop signs, use both the Roman alphabet and the Inuktitut syllabary. (Image) There's one road called (accurately) 'The Road to Nowhere'. Unfortunately, the sign has already been stolen, so I don't have a picture for you.

Signs in the Sylvia Grinnell park use figures dressed in traditional clothing to convey messages such as "Don't litter", "Beware, bears", and "Stay on the path". Wish I'd taken a picture of the Beware of Bears one.

Random amusing sign.

August 07, 2011

Architectural Quirks

Many of the buildings in Iqaluit, especially the newer houses and apartments, are built on raised foundations, which allows the bottom of the house to be insulated and out of contact with the permafrost, and is easier than digging a foundation in extremely rocky ground.

Many of the buildings are also painted in bright and cheery colors. The school in Apex. A colorful house. I'm not quite sure why this is done. The older buildings aren't quite so colorful. Possibly to help counteract winter depression?

Also, although I don't have a picture of one, many buildings have blizzard lights near the doorways: low-power red lights, running continuously even during the summer. I've been told the red light is easiest to spot during white-out conditions (or in fog).

The entrance-ways in homes are set up like an airlock. There's the outer door, a small room where you can take off your boots and hang up your coat, almost always with a heater or hot-air vent running full blast, and an inner door leading to the rest of the house. I saw one which had a slightly sloped floor with a drain in the lowest part, presumably to deal with melting snow shaken off of clothing or pooling under the boot rack.

August 04, 2011

Northern Wheatear

Here's a banded male Northern Wheatear, posing nicely in front of some tidal flats. Normally it's more difficult to spot them: they look like this before you get binoculars on them, and like this afterwards. It's a good thing they make loud alarm calls as soon as they spot you, or we'd hardly spot any.

August 02, 2011

Windy = Good, sometimes

There were rainy windy days (bad), rainy calm days (not so bad), sunny windy days (good), and sunny calm days (mosquitos!).

August 01, 2011

Iqaluit Quarry

Just to show that my trips to Iqaluit aren't always sparkling weather and stunning vistas, here's a photo of Iqaluit's quarry, junkyard, and industrial zone, on an overcast day. You can see the airport runway in the background. Taken June 25th, 2010.

I'm going to try to post a photo every day for a while, along with a couple of longer posts explaining what I was doing in Iqaluit.

July 28, 2011

Iqaluit Panorama

I've been sorting my pictures from Iqaluit, and experimenting with Hugin (open-source software for stitching together photographs into panoramas). I present to you the first result of my efforts: Iqaluit Panorama #1. Taken July 18th, 2011 (a day with really nice weather), from the end of the point on the other side of Koojesse inlet from Iqaluit proper.

July 25, 2011

First Impressions of Iqaluit

I've just returned from a week and a half in Iqaluit, Baffin Island, Nunavut, Canada. It's the capital of Canada's territory of Nunavut, with a population of only 8 thousand. It's below the Arctic Circle, barely, at about the same latitude as Reykjavík, Iceland. While I was there, there was continuous light from 2am to midnight every day, temperatures ranged from 5 to 20 °C (41 to 68 °F), and there were still patches of snow and ice at the bottom of north-facing cliffs.

The first thing a southerner notices when stepping out onto the tundra is the complete lack of trees, bushes, or any other sort of vegetation more than 10cm high. This is not to say that the terrain is flat or uninteresting. Baffin Island is a rocky and rugged place, with boulders, bedrock outcrops, hills, and cliffs everywhere you look. The melting snow during the spring and summer causes streams and lakes to form almost everywhere, and wherever there's water, there's tundra vegetation. Despite all the plants hugging the ground, there's an incredible profusion of vegetation forming the tundra: lichens, mosses, grasses, dozens of flowering plants, dwarf shrubs and trees, and even mushrooms, all woven together into a thick, spongy mat that covers every low spot on the ground, from the valley bottoms to small hollows on bare rock. In places, the stuff is meters thick, and has completely overgrown sizable waterways, concealing them so well that one can walk across some of them without seeing anything.

Away from town, it was incredibly quiet. I could easily hear those concealed streams burbling away beneath the tundra, and on several occasions I heard a faint swishing sound, looked around, and saw a raven gliding by. I had been hearing the sound of the air ruffling the feathers at the tips of their wings.