“Blwyddyn Newydd Dda” and other chances to think about linguistic relativity

For those unfamiliar with Welsh, that’s “Happy New Year,” and to my knowledge it’s pronounced /blʊ͡iðɨ̞n nɛ͡ʊiðːɑː/.  Yeah, good luck with that.

I’ve been remiss about posting for quite some time. I’ve written quite a few drafts, some of which I might finally get out soon™, but I thought at least I could get this note in first.  To me, the end of the year, or really any internationally celebrated holiday, is a great chance to think about linguistic relativity.  That sounds unforgivably nerdy, but it is actually very interesting, I assure you.

Proponents of linguistic relativity posit that the syntactic structure, lexicon, semantics, pragmatics, and other properties of the languages we speak influence the way we think about, and ultimately the way we see and understand, the world around us.  For example, people have suggested that English speakers perceive brown and orange as different colours but dark blue and light blue as different shades of the same colour because English treats them that way.  It’s true that we have words for different shades of blue like cerulean, azure, and zaffre, but in Italian or Russian, for example, dark blue and light blue have different labels that represent different colour categories. Conversely, in many languages all around the world, orange and brown might have separate words but they are considered two shades of the same category, or even both part of a yet larger category of “red.”  Reading this post, you could ask yourself what colour you think the side border on this webpage is.  In this case, linguistic relativity is the idea that people’s answers to that question–and other, more interesting questions–may differ depending on the properties of the language(s) they speak.

In the late 1800s and early 1900s, some linguists, most famously Edward Sapir and Benjamin Lee Whorf, went further to suggest that people who grow up learning a language with no word or expression for a certain concept might not be able to conceive of that idea at all, or at least not until they learn words for it.  In other words, your language determines and delimits your thought processes.  That idea, which has since been called the “strong” version of the Sapir-Whorf Hypothesis, has been pretty resoundingly refuted by experimental data from studies in cognitive psychology, anthropology, and linguistics.  As for the example from above, native speakers of languages that have only three colour words in total (“light,” “dark,” and “red”) performed very similarly to Italian and English speakers in colour matching and identification tasks administered in the early 1900s, and by the 1980s, cognitive psychologists had accumulated masses of evidence suggesting that colour perception, while in fact differing between individual people, depends on factors that are more biological than linguistic. However, the “weak” version of the Sapir-Whorf Hypothesis–that our language and our thoughts mutually influence each other within larger, neuro-biological parameters–has had quite a lot of empirical support.  In this example, the weak hypothesis would hold that you might choose different labels for the colour of the sidebar, like orange or red (when in fact, of course, it is vermilion, haha), and those differences in labeling might cause you to associate the bar with different objects and events and respond to the colour differently.  The actual colour you perceive, though, while potentially different from the one I or other people perceive, would not be different from the one you yourself would have perceived had you grown up speaking a different language.

This all speaks to a timeless question: how do our thoughts determine what we say, and does what we say influence our thoughts in that moment or thereafter?  How are cultural differences represented, manifested, or even created through language?  Certain events, and holidays most prominently, bring that question to the forefront for me.

As for the New Year, like many major holidays, it has a set expression to celebrate it.  In English, that expression is “Happy New Year,” but in other languages, the exact wording often subtly differs.  For me, that raises the question of whether those differences in wording actually matter.  Do speakers of other languages have different ideas in their minds when they say the equivalent of “Happy New Year” in their own language?  I thought I’d look at the two languages with which I’m most familiar, French and Japanese. First, French:

Bonne Année – word for word: “good year.”  There’s probably no meaningful difference between “good” and “happy” here.  The “happy” we say in English is, as linguists say, semantically bleached.  We don’t really think about contentedness specifically when we count down to midnight and start singing Auld Lang Syne.  Rather, “happy” is used as a placeholder term in that and all sorts of other situations when we wish to congratulate someone, as in “Happy Birthday,” “Happy Valentine’s Day,” or “Happy Anniversary,” and that’s actually very similar to the French “Bonne.”

English, like French, also has set expressions that obligatorily pair a different but synonymous adjective with particular holidays, as in English’s “Merry Christmas” or French’s “Joyeux Noël.”  Is it really because we feel merriness at Christmas and happiness during other holidays that saying “Merry Valentine’s Day” or “Joyeux Année” is impossible?  Not really, no; it’s just a set expression. Further, in Britain and several other English-speaking countries, you can say “Happy Christmas” and not sound like you messed the expression up.  These are arbitrary, fixed chunks of language that can be reduced to [positive term for congratulations]+[name of the event], and you could say that the underlying semantics of both expressions are very similar or even no different in any meaningful sense.

“But wait!” you say. There is, of course, the matter of “New.”  The French expression is, linguistically at least, ambiguous. The “Année” in “Bonne Année” could refer to the current year that just ended or the year to come, but from talking to French-speaking people, and from celebrating one New Year’s in Montreal where the signs said “Bonne Année 2006” on January 1, 2006, I think the latter is much more likely.  Does having the “new” in English really make the newness of it any more salient or important?  Do French people not think about the year to come as new or think of it as somehow less new because they do not say the word “nouvelle” at midnight?  As my former syntax professor would say, “I’m not sure we’d want to say that,” by which he really means “That’d be flat-out wrong.”  So let’s move on.  What about Japanese?

明けましておめでとう – akemashite omedetou – a word for word translation is tough, but I’ll start with the second word. “Omedetou” straightforwardly means “congratulations,” and in this context it has the same function as English’s “Happy” or French’s “Bonne.”  The only difference is that, in Japanese, you can connect the word omedetou directly to the event worthy of congratulations to make a whole phrase.  You can say “tanjoubi omedetou” (birthday congratulations) to mean “Happy Birthday,” or “gosotsugyou omedetou” (graduation congratulations) to celebrate someone’s graduation.  In English and French, the phrase “Congratulations” is usually said on its own as a complete utterance.  We could in principle, I suppose, say “Congratulations on your birthday,” but most people functioning within normal parameters wouldn’t.  On the other hand, we couldn’t just say “Happy!” and be done with the sentence.  “Bonne” and “Happy” are only used as part of larger expressions almost as if to make up for the ungrammaticality of “Congratulations” (or Félicitations) in that position, whereas “Omedetou” can be said on its own or as part of a whole.  In sum, we could say that the word “omedetou” is similar enough to its English and French counterparts that we wouldn’t predict any differences in the underlying thoughts and feelings it represents.

“Akemashite,” on the other hand, is quite a bit different.  It’s the polite -te form (roughly, a participle) of the verb akeru, so a preliminary translation of the whole expression would be “Congratulations on (the) [akeru]ing.”  Akeru can mean “to reveal,” as in the following expression:

  • 真実を打ち明ける
  • shinjitsu-o uchi-akeru
  • truth-ACC forceful-AKERU
  • “to reveal the truth,” “to divulge the truth”

Akeru can also mean “to dawn,” as in the following:

  • 夜が明けた
  • yoru-ga ake-ta
  • night-NOM AKERU-past
  • “dawn has come”

As you can see, here’s where it gets tricky.  Yoru means “night,” and it is the subject of that sentence, but we don’t think of the night as “dawning,” because the night is ending, not beginning, when the sun rises.  Rather, just like “the truth” in the first example, the verb means something closer to “opening up to reveal something new.”  For New Year’s, we can think of 2012 as unfolding, dissolving, or even better, giving way to the next year.  The Japanese expression, then, is something close to “Congratulations on [the previous year’s] giving way [to the new one].”  Going back to the “New” from the English-French comparison, in this case, one might even say that the Japanese expression is explicitly referring to the year that is coming to a close, except that wouldn’t be entirely correct, either.  In Japanese, you can also say the following:

  • 年が明けた
  • toshi-ga ake-ta
  • year-NOM AKERU-past
  • “a new year has dawned”

So, instead of being limited to the past, it refers to the transition between the period that is ending and the next that is beginning.  In other words, it refers to a specific transition event–in this case, the turning of the calendar–and that’s analogous to the Western versions.  Now, the Japanese expression is perhaps much more illustrative than the French, English, or Welsh versions, but I’d doubt that Japanese people are much more likely to wax all poetic about the year that is closing as a result of those linguistic differences alone.  In fact, there’s linguistic evidence to suggest otherwise.  First and foremost, on January 1st, Japanese people often extend the New Year’s greeting to say 新年明けましておめでとう (shinnen akemashite omedetou), where “shinnen” means the “New Year.”  This, I think, constitutes strong evidence of semantic bleaching, as it seems more likely that the phrase commemorates the event itself in an abstract sense and not that the phrase is literally referring to the new year and the past year together consecutively in some incoherent jumble.  Granted, the grammaticality of the longer phrase is the source of some debate among Japanese people, but its widespread use is undeniable.  Second, the verb akeru written 明ける is also homophonous with other verbs like 開ける which means “to open” or, figuratively, “to begin,” and Japanese children commonly confuse which akeru belongs in the New Year’s celebratory address.  In other words, people know what the expression is, but that doesn’t mean that they’re analyzing it for its deeper meaning, including what time it specifically refers to, every time they utter it, just like we don’t inevitably think of happiness per se.  Again, I’m tempted to reduce the whole utterance to [positive term for congratulations]+[name of the event], and I challenge you to suggest otherwise!

So, that was probably a lot more information than you wanted about collocations for New Year’s celebrations, but hopefully the message you’re taking away from this is similar to the one I have: there are subtle, interesting distinctions between the ways that different peoples talk about the New Year, but the underlying sentiment behind many of those expressions is remarkably similar.

That’s not to say that English, French, and Japanese people have the same associations with the New Year or even with that expression, of course.  Our customs, traditions, and, by extension, our personal experiences are vastly different.  Compare, for example, this:

[Image: the New Year’s Eve crowd in Times Square]

with this:

[Image: joya no kane, the New Year’s Eve bell ceremony]

The bottom picture features an event called 除夜の鐘, joya no kane, “The New Year’s Eve bell.”  On New Year’s Eve, monks ring the largest bell at the temple 108 times to signify the 108 worldly desires of Buddhist teaching.  New Year’s as a whole is a more somber, family-oriented, quasi-religious holiday in Japan, something starkly different from the festive, youth-oriented, mostly secular Western tradition.  I’d even go so far as to say that it’s remarkable that our customs can differ so much and that the language is, in many ways, so similar.

Having experienced New Year’s in France, Japan, and various places in the US and Canada, my personal associations with the holiday are diverse, so using one of those perfunctory expressions might just be confusing and esoteric.  Instead, I’ll be more explicit:

A very happy 2013 to you and yours.  I wish you the very best!


my top 5 “most difficult spelling systems” list

English orthography is a beautifully inconsistent, inconsistently beautiful, sadistic excuse for a “system.”  My favorite example is what happens when you add letters to the word “tough.”  (The symbols on the right are how the words are pronounced, and even if you don’t understand exactly what they mean, you can see that far more changes than just a single character.)
  • tough /tʌf/
  • though /ðoʊ/
  • through /θɹu:/  (notice, so far, not a single sound in common)
  • thought /θɔ:t/ (or in my dialect, /θɑ:t/)
  • thorough /ˈθʊɹoʊ/
So, there are at least four different vowels and sometimes an /f/ represented by <ough>, and there are two different sounds represented by the <th>.  Add words like “laugh,” “taught,” “draught,” and “drought” and you start to see how this spirals out of control really quickly.  Part of the problem is that we have, depending on the dialect, anywhere between about 35 and 45 distinct sounds (called phonemes), and yet a mere 26 letters to write them. Linguists, very appropriately, call that a “defective script,” and English is astonishingly defective.  Of the 26 letters we do have, several overlap in strange Venn-diagram-like sound mappings such as s-c-k-q-x.  It’s kind of mind-boggling really to think about this, but there is literally one sound in the entire English language that is written with one and only one group of letters, and that’s the so-called “soft th” /ð/ in “though” from above.  The letters <th> can represent multiple different sounds (e.g. Thomas), but the phoneme /ð/ can be written only one way.  Every other phoneme in English has at least two, sometimes more than 10, and, in the case of the sound /eɪ/ as in bay, pain, base, bass, ballet, obey, dossier, resume, or even résumé, etc., more than 20 different graphical representations!  So you can imagine that, if you were a speaker of another language trying to learn English, you might assume that native speakers had devised the system as some sort of cruel joke to keep foreigners from ever mastering it.
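If you want to see the mismatch laid out in one place, here is a tiny, purely illustrative Python sketch.  Nothing official here, just the words and the rough, dialect-dependent IPA values from the list above, but it makes the point that a single string of letters maps onto a whole pile of different sounds:
# How many different pronunciations does <ough> get in just the handful
# of words mentioned above?  The values are rough, dialect-dependent IPA.
OUGH_WORDS = {
    "tough":    "ʌf",   # /tʌf/
    "though":   "oʊ",   # /ðoʊ/
    "through":  "u:",   # /θɹu:/
    "thought":  "ɔ:",   # /θɔ:t/
    "thorough": "oʊ",   # final syllable of /ˈθʊɹoʊ/
    "drought":  "aʊ",   # /dɹaʊt/
}

distinct = sorted(set(OUGH_WORDS.values()))
print(len(distinct), "different renderings of <ough>:", ", ".join(distinct))
Five renderings from six words, and that is before we invite <augh> words like “laugh” and “taught” to the party.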
To every rule, there is some exception.  For every word like “photograph,” there are words like “haphazard” and “Stephen.”  Chore?  Choir and charade.  Singer?  Finger and angst.  Bureau?  Bureaucracy and beauty. Of course there are hundreds of homophones that are spelled (or spelt) differently but are pronounced the same, like write/rite/right, prince/prints, or soared/sword.  Those are decently common across languages.  It is quite rare, however, to find a language with anywhere near English’s number of heteronyms that are spelled the same and yet pronounced differently, such as having a gaping wound as opposed to being wound up, the bow of a ship or a bow and arrow.  It is not a moderate task to moderate discussions of English spelling.  Everyone knows about our silent “e,” but then there are things like silent “b” (debt, comb), silent “p” (psychology, pneumonia), silent “t” (castle, listen, soften, not to mention words that vary like “often”), silent “s” (island, debris), silent “l” (salmon, talk), and it just goes on and on like that.  It is genuinely pretty hellish, but are other languages any easier?  Are there even more horrific systems?  How does English orthography really stack up against the rest of the world’s written codes?  Well, first we need to survey the landscape and see what’s out there.
Before that, though, if you’re hearing the word “orthography” for the first time, it simply means the whole system of writing.  It comes from the Greek stems orthós-, meaning “straight,” “correct,” or “proper” (think “orthodox”), and -graphéin, meaning “to write.”  Linguists tend to use the term “orthography” rather than “spelling” for two main reasons: first and foremost, it makes us sound smarter and thus feel better about ourselves.  Second and almost equally important is the notion that a strict definition of “spelling” is relatively narrow in scope.  Spelling means the way in which words are written using letters and diacritics (which are accent marks like in résumé), and therefore doesn’t necessarily apply to all languages that have writing.  In contrast, orthography is a very broad term that encompasses many aspects of writing.
Put simply, spelling is one part of orthography, but spelling is not everything you’d have to learn in trying to master English writing.  If I “spell out” some of the additional features that orthography covers, it might give a good idea of how daunting it could appear to would-be learners:
  • Punctuation and other unpronounced characters
    • Some languages don’t (or at least, didn’t) use punctuation at all, which might make an adjustment to a fully-punctuated system more taxing. Where there is punctuation, rules on usage are often quite complex, and they differ across languages, even between sister languages like French « Allons-y! » and Spanish «¡Vamos!»
    • Languages that use punctuation often use it to contrast meaningfully different phrases, like the famous “A woman without her man is useless” and “A woman: without her, man is useless.”  Those differences aren’t universal; they have to be learned
    • Emoticons and other non-letter characters like ellipses and dashes often carry meaning or perform important discourse functions, especially in informal contexts.  Anyone who has ever tried learning the stupefying array of Japanese emoticons knows the true meaning of despair. I mean, for crying out loud, there’s a specific emoticon for the act of playing volleyball! (/o^)/ °⊥ \(^o\)  When I first got that in a text message, I didn’t read that as it was intended: an invitation to go to the beach. I thought someone had to go to the hospital
  • Orientation
    • Just to list some examples, English, Russian, and Inuktitut are written horizontally from left to right
    • Arabic and Hebrew are written horizontally right to left
    • Classical Mongolian is written vertically left to right
    • Chữ Nôm (Old Vietnamese) is written vertically right to left
    • Modern Mandarin and Japanese are written in multiple different orientations depending on the format and level of formality
    • Egyptian Hieroglyphs were prototypically horizontal and right to left, but varied based on other stylistic factors and some characters were read in their own special order
    • There are yet other ways of orienting writing, too
  • Penmanship, scripts, and written styles
    • There are sometimes large differences between the way (or ways) in which things are written in the same language depending on the people, place, purpose, and time.  When people study a Chinese language, for example, they have to learn that type-font characters like 書道 can appear radically different in hand-written forms, but they have to recognize that the intent is the same.  Beginning learners of English are often confused by the Times New Roman “g” and “a/ɑ”
    • Block-print and cursive writing are other good examples in English, but even within cursive, there are different styles like Spencerian and D’Nealian, and there are often idiosyncrasies across languages
      • French cursive “1” often has the hook extending all the way down such that, to North American eyes, it looks almost like “Λ”
      • Japanese teachers of English were taught for many years to write “s” with a hook such that many still write it as “ʂ,” which is a different letter in other languages
  • Alternate characters and characters for special purposes
    • In English, accountants and mathematicians often write 0 (as well as 7 and z) with additional slashes to disambiguate or prevent fraud, but Ø is a separate letter in Danish and Norwegian, for example, so accountants there tend to write 0 with a dot in the center instead.  Similarly, many Japanese and Chinese legal documents write the numbers 1, 2, and 3 as “壱、弐、参” instead of the more general “一、二、三”
    • Roman numerals (like MMXII) are another part of our writing system that has to be learned in order to be fully literate
For some languages, parts of orthography fall into spelling, too, such as:
  • Spacing
    • Some languages, like modern Korean, put spaces between words; others, like Japanese and Chinese, do not
    • Some languages or language varieties that put spaces between words write compound words as one continuous sequence. German is probably the most famous example with its words like “Geschwindigkeitsbegrenzung,” which English spaces out into “speed limit”
    • English is wildly inconsistent on this one, though. We write fireman and highway as unbroken strings but fire truck and high school with spaces. Rollerskate, roller-skate, and roller skate are all well-attested in corpora, and even the snootiest of dictionaries often have multiple listings
  • Capitalization
    • Most languages are unicameral, meaning that they don’t have upper- and lower-case letters, but even closely-related languages that are bicameral often differ wildly in this respect; English doesn’t capitalize every noun anymore, but German still does
    • English is perhaps unique among languages in requiring that the pronoun “I,” but not “we,” “you,” or any other pronoun for that matter, always be capitalized
    • Conventions change over time. If you have an old version of Word (which is capitalized) and you run a spellcheck (no space), you might get prompted to capitalize words like “internet,” but that’s no longer the case (if you got that last pun, you can join me in tears of shame)
    • We can capitalize words mid-sentence for emphasis or to make a contrast, such as religion/Religion, or we can sometimes use ALL CAPS
    • French, English, and several other languages also have an interesting trend in the opposite direction, writing common acronyms in all lower-case (e.g. AIDS in French is “sida”; the word “radar” started as an acronym)
Even after all that, the most important, and in some ways the most obvious difference between the terms “orthography” and “spelling” is that “orthography” can refer to more types of grapheme systems. (A grapheme is just a unit of writing)  For example, it would be a little strange to talk about “spelling” for logographic writing systems like Chinese characters.  There are different ways to represent characters in Chinese, and to be sure, there are strict rules for how to write them with correct stroke order and such, but the characters don’t directly or consistently represent sounds per se.  In fact, there are quite a number of different types of written systems:
  • Alphabets like Georgian (there are actually three different Georgian alphabets) or Korean, where each character (or component part of the character) represents usually only one sound, although even the purest ones like Korean have exceptions and situational rules
    • English is alphabetic in a sense, but there are only two letters that consistently represent only one sound, <v> and <q>, both of those represent sounds that are sometimes also written using other letters, and there are an increasingly large number of foreign loanwords that break even those patterns
  • Abjads like Arabic or Hebrew, where the consonant sounds are written but some or all of the vowels are left unspecified, and the reader has to figure it out from context
  • Syllabaries where each character represents a whole syllable, not just a sound, which can be further divided into:
    • Abugidas like Ge’ez in Ethiopia, where syllables that share a consonant are built from the same base character, with added marks or changes indicating the vowel.  For example, in Inuktitut, the language of the Inuit people in North America, /ki/, /ka/, /ku/ are written ᑭ, ᑲ, ᑯ
    • “Arbitrary syllabaries” like Cherokee or Japanese kana where there is no relationship between similar sounds. Using the same syllables, /ki/ /ka/ /ku/ in Japanese hiragana are written き,か,く
  • There are yet other types of absolutely crazy hybrid systems like Mayan that used logograms and syllabics and plentiful rebuses (think of things like “gr8” for “great”) all together in the same characters.  How Mayans were able to read without having an aneurysm is beyond me.
So after all that, there’s quite a lot to consider, really, when we try to compare written systems in terms of difficulty level.  In truth, any objective comparison is flat-out impossible.  How could we compare English to a system like Hong Kong Cantonese that has more than 20,000 characters, some of which are just brutal like 戲劇 (which means “drama”), but which are nevertheless regular, represent units of meaning and not always sound, are pronounced almost always in only one way, and are composed of a limited set of simpler parts?   Put simply, we can’t.  They’re completely different challenges.  So, instead, I’ve arbitrarily limited my list to languages that either use syllabaries or alphabets, I’ve kept the list to languages that are alive today (otherwise, in my opinion, Mayan is hands-down the most astonishingly complex orthography ever devised), and I’ve excluded exceedingly rare languages or scripts like Afaka.  The remaining list is, I think understandably, not perfect, but I challenge you to come up with a better one, haha!
Last note: I have a few runners-up.  First, Thai orthography is pretty rough.  There are several characters borrowed from Sanskrit that are pronounced the same as other Thai letters but which are not interchangeable.  Thai is also interesting in that it marks tone in writing.  Its eastern neighbour, Khmer, from which large parts of Thai script are derived, is also highly complex and irregular, with lots of variability depending on the surrounding sounds.  Both of these are, however, not quite irregular and sadistic enough to make it into my subjectively-judged top 5.  The reason is simple, and also explains why English does make the list: spelling reform.
Most languages have undergone at least one, and often multiple, spelling reforms, usually because the government or another authoritative body wants to standardize the language and modernize it.  Languages change over time, and pronunciation in particular is highly variable, but ink blots printed on a page don’t often change shape in response to social trends.  In French, the Académie Française was created under Louis XIII, at Cardinal Richelieu’s urging, to standardize the language.  They publish dictionaries, make recommendations for school curricula, and have helped to rein in the chaos (note: rein, not reign, or rain).  The French language gets a lot of flak for words like “ils accueillent” where half the letters are silent, and then words like “lent” where most of those same letters are pronounced, but once you understand a few rules about grammatical categories, reading is not so bad.  Linguists call this a distinction between “encoding” (writing) and “decoding” (making sense of that writing, usually reading), and while French is difficult to encode, the decoding process is much more doable.  That doable-ness is largely thanks to the reforms enforced (sometimes even through violence!) by the French powers that be.  Khmer, Thai, Spanish, Russian, Italian, Swedish, Mandarin, Korean, Hindi, Mongolian, indeed a great many of the world’s languages have undergone systemic reforms to try to make the writing system more regular than it was before.  In fact, another “runner-up” for me would be Danish, which is a particularly interesting example since it’s so closely related to Swedish but hasn’t gone through the same kinds of spelling reforms.  As for English, there have been multiple attempts, but the one major influential spelling reform (Noah Webster’s) succeeded only in creating a chasm between two separate standards with equally absurd inconsistencies and idiosyncrasies.  Hopefully, that provides enough context and caveats.  Here, finally, is my top 5 crazy spelling systems:
5) Uyghur
  • Uyghur has four completely separate alphabets that are all standard in modern usage, and historically there have been quite a few others. It’s one of very few languages written with a Perso-Arabic script that obligatorily represents the vowels, and there are quite a few of them: /y/ /ɪ/ /ø/ /æ/ /ɑ/ /u/ /e/ /o/, but the real irregularities come from its many Chinese loanwords that are often quite difficult to distinguish from other words and follow their own set of phonological rules
4) Burmese
  • The fact that this language’s orthography doesn’t match up with its pronunciation is a point of pride for some Burmese nationals.  There are even different words for “written language” and “spoken language” that mark the distinction.  In many ways, literate Burmese people can be said to practice diglossia, the use of two distinct varieties of the same language for different functions.  The spelling system is more or less regular viewed from the inside, but its difficulties and irregularities come from having been fixed (sometimes literally carved in stone) hundreds of years ago, far removed from what has happened to the spoken language since then
3) Irish Gaelic
  • The language is growing again after declining for generations, but the writing system reflects its diverse and chaotic history.  There are digraphs and complicated allophonic rules up the wazoo, such that the word for Prime Minister, Taoiseach, is pronounced /ˈt̪ˠiːʃəx/.  A combination of having no standard spelling until the mid-20th century and huge dialectal variation, especially in vowels, has created cases where the orthography is regular in some regions for some words, and in others for other words, but nowhere for all of them
2) English
  • Truly, English is among the craziest spelling systems in the world.  Ruth Shemesh and Sheila Waller’s book explains a great deal of the subtle regularities, and I highly recommend it, but even they can’t make sense of the huge number of exceptions that continue to grow by the day.  There are some theorists who believe that English is becoming more and more like a logographic system, where basically each word has to be memorized as one chunky symbol with component parts, rather than analyzing each word internally from left to right.  That’s not far off, and yet English, I think, still only gets the silver medal in my book
1) Japanese
  • Interestingly, many of the same historical reasons for English’s, shall we say, “diversity” of spelling rules are shared by Japanese: several chronologically disparate waves of mass importation of foreign words, some of which brought entirely foreign writing systems that were only sometimes, and then only partially, regularized.  Japanese uses three largely independent writing systems together, or increasingly four if you include roma-ji, which many academics do because of words like t-shirt, “Tシャツ,” and acronyms.  Two of those systems are mostly faithful phonetic syllabaries, but there are exceptions like particles.  The third system has more than 2,000 characters in common usage, each with numerous, largely unpredictable readings depending on when and where the word originated.  All three are commonly used together in the same sentence, and combinations of two are often together in the same word: サボる (to skip class), アメリカ的 (American-style).  Pitch accent, which is contrastive for most dialects, isn’t marked anywhere (e.g. the “three hashi”: 端、橋、箸), and all the while many other characters are written with several different variants despite no meaningful phonological or even semantic contrast in any dialect (again, I’ll use “hashi”: 橋、槗).  Like English, there are tons of heteronyms, sometimes among quite frequent words like 甘い (umai, delicious / amai, sweet) and 辛い (karai, spicy / tsurai, painful, or here’s a better definition), and then we get into proper nouns and fossilized expressions, and what little regularity was left in the system breaks down completely
So, in my opinion, when we ask how bad English spelling is compared to other languages, the answer is: among the worst.  There are other, frighteningly complex systems out there, to be sure, but English finds a way to take its deceptively simple 26 letters and make the absolute most it can out of them.  As a final note, I think it’s important to say that, by “worst,” I don’t mean to say that we should look down on English orthography, or Japanese orthography for that matter.  In fact, in my mind, I could have equally replaced the “most difficult” in the title with “most interesting.”  Irish, English, Uyghur, Burmese, Japanese, and even French are fascinatingly rich and complex, and indeed in many ways I think they are tremendously valuable in their idiosyncrasies.  The spelling of a word in these languages contains an immense amount of information; we can know just by looking at a word like “know” that it comes from Germanic roots, whereas a word like “ascertain” came from Latin through French, which in turn tells us more about the connotations of those words, the usage patterns, and the history of English-speaking people.  We’d have “no” way (or at least, no immediately apparent visual way) of doing that if we spelled no and know identically.  These “top 5” are a testament to society and language’s ability to evolve and thrive within the infinitely complex interactions of people and peoples, and while that does imply a lot of baggage, I for one am not upset about having all that stuff to cart around.  I like stuff.
So yes, English spelling is a handful, but it’s not alone, and that’s a good thing.  Perhaps, instead of denouncing others for their poor spelling of a choice few words, we should in fact celebrate the fact that people get so many other highly irregular, largely nonsense spellings correct!  (Like, for instance, “people”)  At the very least, if you deal with foreign learners of English, I hope you can sympathize (or even sympathise) with their struggle.  They truly do have it pretty bad.

why ‘funner’ should be an accepted word

For most people, the litmus test for whether a word is a “real word” or not is its inclusion in or exclusion from a dictionary, and especially a big, fat, haughty-looking paper dictionary.  Erin McKean eloquently describes how the lexicographers who make those dictionaries disagree with that approach here, but it wouldn’t take long for even complete lexicographical amateurs to start to see the holes in that line of logic.  New words are added to the dictionary every year; were they just figments of the imagination until that time?  More to the point, the act of printing a word in a way immortalizes it, such that it remains in the dictionary long after it ceases to have any meaning at all.  For example, it is both tragic and frankly absurd that the Oxford English Dictionary accepts the words “funniment” and even, I kid you not, “funniosity,” but not funner or funnest.  Are those words, which have fewer examples of use in written or spoken English than the number of letters they contain, somehow better or more real?

But we mustn’t forget, those who deny “funner” often state, that “fun” is only a noun, not an adjective.  We only have comparative and superlative forms for adjectives, not for nouns, so therefore “funner” and “funnest” must only be figments of our imagination.  To their credit, certainly the word “fun” is used as a noun quite often.  We can have a lot of fun; we can’t have a lot of enjoyable.  We can “make fun of” someone, just like other noun constructions like “making sense of” something.  On the other hand, there’s nothing about the status of “fun” as a noun that makes it any less viable as an adjective; there are literally hundreds of noun-adjective homonyms in English (e.g. every colour word in the language: a red firetruck or the deep red associated with it).

Still, some dictionaries like Oxford’s American online dictionary or the American Heritage Dictionary stubbornly pigeon-hole “fun” as a noun, often accepting its exceedingly rare use as a verb (meaning something akin to “tease” or “joke with”) while labeling the adjectival form “informal” or “slang.”  That’s a little, well, funny, given that the very same American Heritage Dictionary’s citation for informal use of fun as an adjective is a quote from Margaret Truman, daughter of the US President, in a public speech in the 1950s.  It’s weirder still given that the word “fun,” even when used as a noun, is not exactly among the snootier choices for that concept in the English lexicon.  How is “I’m having fun” any more formal than “this is a fun party”?  Oxford goes even further, though, stating that “the comparative and superlative forms funner and funnest are sometimes used but should be restricted to very informal contexts.”  Notice: “should be.”  Who the hell do those Oxford braggarts think they are?

While I was looking this up, I found several comments from anonymous online contributors saying that adjectival “fun” was some sort of “new development” and that it was only because it was new that it hadn’t been accepted yet.  Well, frankly no.  There are plenty of newer words that have been accepted, like “gramophone” and “photograph,” and even newer adjectives like “toasty,” “photographic,” and even words like “fugly.”  More importantly, however, it would be a mistake to say that the adjectival reading is some sort of neologism, and that it’s only in today’s materialist, consumerist culture that uneducated young people (and Margaret Truman) have started using the word improperly.  First off, that argument’s hard to reconcile with uses like “this is a fun little item,” spoken by a professor during an academic lecture.  Indeed, in that same corpus (called MICASE), the word “fun” was used almost a third of the time in contexts that only allow for an adjectival reading.  Second, in terms of the word’s etymology, funner and funnest both date back to the 18th century by conservative estimates.  So far as anyone can tell, “fun” probably comes from the Middle English word “fon,” from which we get other words like “fondle” and expressions like being “fond of” something.  Interestingly, that word was a verb, noun, and adjective, and even had the -ly suffix attached to it to make an adverb. The adjective form is at least as old as the nominal reading.  (If you ever need entertainment, just try adding -ly to random nouns around you and then try to make sense of the result.  You’ll find that words like “bedly,” “pillowly,” and “windowly” don’t quite roll off the tongue.)

Here’s the real kicker, though.  Since 2010, Merriam-Webster has listed the comparative and superlative forms as legitimate words, and it’s not alone.  The Scrabble Dictionary, which, if not exactly the pinnacle of lexicographic achievement, often plays a key role in word/non-word disputes, has included both funner and funnest since 2008.  In sum, a dictionary is a piss-poor means of determining a word’s status as “real” or not, but even under the dictionary rule, funner and funnest are in a grey area.

Why, then, do ill-informed pedants still swagger about denouncing users of funner as “stupid” or “uncultured”? (Real quotes)  Some insist on “correcting” phrases like “the funnest party ever” to the distinctly less natural “the most fun party ever.”  At least that rules out the possibility of a noun interpretation.  You can’t have “the most enjoyment party ever.”  For adjectives, when we make comparative and superlative forms in English, we follow a pretty straightforward mechanism.  If it’s a single-syllable adjective, it gets -er/-est; if it has three or more syllables, then it’s always more/the most X; if it has two syllables, it’s more complicated and depends on the endings and stress position (cf. narrower vs. *politer), but that’s a side point.  Fun has one syllable.  So why would “fun” behave differently than every other single-syllable adjective in the English language?  (That’s actually an overstatement. There are examples like “bored” that act as adjectives, but I think it’s obvious how that’s quite different)
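Just to make that mechanism concrete, here is a rough Python sketch of the rule as I’ve described it.  It’s purely illustrative: the syllable counts are supplied by hand, the spelling tweaks are minimal, and real English cares about more than this, so treat it as a toy rather than a morphological analyzer.
VOWELS = set("aeiou")

def comparative(adj, syllables):
    """A deliberately simplified version of the comparative rule above."""
    if syllables == 1:
        # One-syllable adjectives take -er, doubling a final consonant
        # after a single vowel (big -> bigger, fun -> funner).
        if adj[-1] not in VOWELS and adj[-2] in VOWELS and adj[-3] not in VOWELS:
            return adj + adj[-1] + "er"
        return adj + "er"
    if syllables >= 3:
        # Three or more syllables always take "more."
        return "more " + adj
    # Two syllables are the messy middle ground: it depends on the ending
    # and the stress (narrower, but "more polite" for many speakers).
    return "more " + adj + " (or maybe " + adj + "er)"

for adj, count in [("tall", 1), ("big", 1), ("fun", 1), ("narrow", 2), ("beautiful", 3)]:
    print(adj, "->", comparative(adj, count))
The point of the sketch is simply that, under the general rule, “fun” falls out as “funner” just like every other one-syllable adjective; it takes extra machinery, not less, to block it.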
One might say that “it sounds wrong,” but in my opinion that’s probably attributable to our experience of being told that it’s supposed to sound wrong.  I doubt that there’s a native English speaker alive today who hasn’t at some point, probably when they were quite young, said funner or funnest and felt that it was perfectly natural.  We learn that it’s wrong, but we also learn all sorts of things that are later proven incorrect or, at least, vastly oversimplified.  Most of us also learn (from Shirley Jackson or somewhere else) that blindly, unquestioningly following the status quo is not a recipe for success.
The last bastion of hope for the naysayers is to resort to the argument that “only children say funner.”  At least then we could say that the word is an age-based dialectal marker.  At first glance, that might be appealing, but it turns out that it’s not just children who use the word in informal settings, either.  Bono said “the funnest thing” in his interview with 60 Minutes, and multiple speakers have used funner and/or funnest on NBC’s Meet the Press–hardly the equivalent of schoolyard chats.  In written correspondence, the word “funnest” had a brief period of fairly widespread use as early as the 1820s according to Google Ngrams.  According to COCA, newspapers like The New York Times and USA Today and even academic articles have printed the words dozens of times, but always, of course, quoting someone else saying it, and usually a teenager or young person.  If professionals were to use the word without being facetious or campy, then they would probably be ridiculed for it.  But why?  On what grounds?  So far as I can see, the only reason over-zealous editors continue to stamp out the word is because they’ve had it stamped out of them.
I’m not saying that the words are well-suited to formal academic writing.  They’re clearly not, but first, that’s probably more due to semantics than morphology; it’s very rare that formal writing calls for comparative/superlative, subjective judgments of amusement levels anyway.  Second and more to the point, the word “toasty” (among plenty of others) is even less well-suited to formal written contexts, yet dictionaries don’t put a derogatory “informal/slang” label on it, and people don’t seem to have a problem with that word’s existence.  Funner and funnest are picked on because they’re frequent examples, but it’s precisely because they are so frequent that we should just get over it already and accept the words for what they are: highly useful communicative tools.
As Erin McKean says, if we embrace our language for the diverse, chaotic wonder it is instead of trying to police it, we’d probably lead happier lives.  It’s no wonder that she’s such a bubbly personality; she gets paid to study how words work, and she sees words for what they are: a means of “windowly” viewing into the infinite variability of the human experience.  Viewed in that light, lexicography sounds like one of the funnest professions I could imagine.

a follow-up on American pragmatics

[Edit:  It took some time for me to figure out that replies weren’t being posted until they were “approved,” which is frustrating. Anyway, I encourage reading the responses.  One is longer than my post, but worth it.]

It’s been a while.  I’ve had a lot to say, but a lot has happened.  Anyway, let’s pick up where I left off.  Last time, I had a rather long rant on why I dislike it when Americans say “uh-huh” in response to “thank you.”  I’m very grateful that it sparked quite a lot of conversation, mostly with people who disagreed with me, haha!  Now, it is very rare that, after taking a rather extreme stance on an issue and having subsequent conversations with people who disagree with me on the subject, I end up at an even more extreme stance than I was previously.  Normally, I’m forced to acknowledge my oversights and adopt a more moderate, nuanced position.  But exactly the opposite has happened in terms of my previous rant on that most annoying of Americanisms: uh-huh.  It is rude, gawsh darn’t, and let me tell you why.

In part, my new stance comes from more data collection.  In the past few weeks, I’ve noticed that Americans often say “yeah” and “uh-huh” not just in response to thank you, but also in response to “sorry.”  I looked around on Google Scholar and my university’s library research site trying to find anything on this, and I haven’t found anything yet (Suszczynska (1999) gets pretty close, though).  I’m making a tentative conclusion that it’s either a new phenomenon, too obscure to get published, or boring to most people.  It might be obscure and/or new, but it is anything but boring.  It gives us a linguistic window into how society works, and how societies might differ, in particular with respect to politeness.

What is “politeness”?  Or, more to the point, what is “rudeness”?  Last post, I said that it would be difficult to call a commonly-used phrase “impolite” because, in so many words, its use within a community determines its meaning and value.  The fact that “uh-huh” is socially accepted by Americans as a response to thank you means that, at least within that community, it does not have a negative effect on interpersonal relations and, by extension, we would have to concede that it is not impolite.  That’s not the same thing as being “polite,” because that would imply engendering positive interpersonal relations, and this “uh-huh” seems more neutral.  Now, the American “uh-huh” is noteworthy because the social need for marking deference that many theorists postulated as universal is, at first glance, being ignored.  If we look further, though, we might see that the meaning of a casual “thank you” in American usage might not imply the same types of social correlates that it does in other English varieties, and that the reason why “uh-huh” is acceptable might be that the previous phrase “thank you” is being used differently.

Or so I thought.

But then how do we explain away “uh-huh” as a response to “sorry”?  Frankly, when I say “sorry” or “excuse me” to a person and get “yep” in reply, I want to retract my apology and punch the person.  It’s not so bad that I’d go around punching people in the face, but I might punch them in the part of the chest that connects to the shoulder pretending that I meant to hit them in the shoulder where it hurts less and is considered just a joke.  No joke.

To make this abstract just for a while, Bergman and Kasper (1993) and Suszczynska (1999) both give excellent outlines of what an apology constitutes.  In essence, it is an attempt to compensate for or mitigate the perceived negative effects of a prior action for which the speaker takes at least partial responsibility.  When we apologize, we are in essence saying that something for which we are at least partially to blame was wrong, contrary to social etiquette, unintended, or any number of other vaguely negative things.  To reduce it to its extreme, we’re in a very real sense saying “my bad.”

On what planet is affirmation an appropriate response to that!?  In pragmatics research, affirming the transgression would be called an “aggravator” (the opposite of a mitigator), which is socially a very valuable tool when we disagree with people, but it has zero politeness value.  Now, you could say, “but that’s not what I mean when I say uh-huh,” and you might very well be right, but that doesn’t shield the phrase from criticism.

The way we communicate reflects many things: the way we think about ourselves and our relationships with other people, the way we think about society, and the way society itself is organized.  If I use the word “bro” (and it is a word, not an abbreviation, when used in that context, don’t even get me started) to refer to a friend, that implies a great deal about me, my friend, and it also situates me in a time and place in history where that would occur.  There might come a time when everyone in a particular society starts using the word “bro,” even in formal contexts, and if that time were to come to pass, my use of it would indicate something different about me than it would now, and it would indicate something different about society than it would now.

Another example. The Japanese word “omae” (from 御前, roughly: “honoring that which is before me”) used to be a formal second person address term (i.e. “you” when speaking to someone socially above you, and usually way above you) up until the Meiji era.  The word was used almost exclusively by the upper class, so the simple use of that word really did imply quite a lot about both the speaker and addressee.  However, it implied something very different from what it does now, because in modern Japanese, the word is extremely widespread, casual, and even antagonistic when used with people who are not friends (almost like using the word “bitch” to refer to a female friend in modern English, except it’s genderless and used by and for all age groups).  The dramatic changes that the word “omae” has gone through reflect larger social changes in Japan, and it is not controversial to talk about how the word’s journey through Japanese society mirrors changes in that society itself.  It’s no different for English.   “Uh-huh” is an anomaly; very few societies at very few points in time would use such a construction in a similar context.  That fact is emphatically not beyond the scope of analysis, or criticism.

Yes, my punches would be misdirected. When I think about it, I don’t mean to blame the person who says “uh-huh” in response to the speech acts of thanking or apologizing.  That person, who is likely American, is equally likely to harbour no ill will against me, and likely does not intend to communicate something akin to “you’re right to be sorry.”  I absolutely do mean, however, to criticize the society that tolerates and promotes the use of that construction.  It gives the (in my opinion, largely correct) impression that American society tolerates a passive indifference and even disregard for the person who is thanking or apologizing, or in the very least the act of thanking or apologizing in that context, and I take issue with that.  There’s an excellent discussion by a hip-hop artist on the use of the word “bitch” among other things here that I think is really apropos.  It’s another example, and I encourage you to read it, but that’s a different rant.

So.  I’ll have to amend my earlier position to one that’s way more extreme than the previous iteration.  Right now, when I say that I think it’s rude to say “uh-huh” to acts of thanking or apologizing, I am in fact saying that American society is rude, and I’d go further to say that I don’t think criticism of society should be off the table.  I sincerely hope that, over future discussions, I’m forced to retreat from this position, but for now, I’ll just offer an apology to any Americans who might be offended by this post, to which I imagine you might respond:

Uh-huh.

in defence (with a ‘c’) of ranting

In responce to cajoling from my peers, I’m starting up a blog as a public location where I can hopefully kickstart some productive discussion and thinking about language, its use, and the teaching and learning thereof.  If nothing else, this blog would serve some purpoce by allowing me a cathartic outlet to my often irrational, often disproportionate reactions to the various happenstanses, positive and negative, I experiense.  I can’t promice any cogent thematic elements or a regular schedule of posting–such is not really the spirit of ranting, and vicariously such is not the intended spirit of this blog.  Ranting by definition entails the semi-spontaneous, often emotionally charged, admittedly often even egocentric expression of in-the-moment thoughts. When exactly those moments occur, and whether they will inspire me to seek solase through this blog, I couldn’t say.  But if past behaviour is even a remotely useful predictor, I might post rather often.

By now, hopefully you’ve notised the pattern of the italicised words.  I had originally planned a different entry altogether, but then when I started writing the title, I got caught up in the longstanding litigious business of “Defence v. Defense.”  Let’s clarify a few things on this one, sinse, as with all too many things concerning language, there is a great deal of misinformation out there, and a great many laymen who, despite knowing very little, argue very passionately for one side or the other.

The word’s etymology follows an extremely common pattern for English words.  It traces back to the Latin “defensum,” which meant “something forbidden, defended against.”  From there it entered French, where that connotation is still very strong (e.g. “défense de fumer” for “No smoking”), and eventually made its way into English.  In French it’s still spelled with an ‘s’ (défense), and in both English and French it has historically had quite a large variety of spellings: deffans, deffenz, desfens are all attested amongst the various langues d’oïl, whereas the Brits have used diffens, diffense, diffence, and difence.  That raises the question, though, of why Anglophones seemingly uniquely decided to spell the word with a ‘c’ instead of the ‘s’ that all the other Romanse languages were using.

As with pretty much everything else in English orthography, there’s a rather simple underlying logic that ended up causing a highly complicated and idiosyncratic pattern.  We use -s in English as an affix to mark both plural nouns and third person singular subject-verb agreement (as in one pen, two pens; I pen a novel, he/she pens a novel).  When we add -s to words that end in a nasal like ‘n,’ it typically is pronounced [z].  That’s unlike French or Latin, where the additional -s either isn’t pronounced at all, or when it is, it’s pronounced with the hard [s] sound.  In English, though, defense/diffens/diffense and its ilk end with [ns], not [nz].  At some point, English writers noticed that the overwhelming majority of words that ended in -ce were unambiguously pronounced with the [s] sound, and so for many of the potentially troublesome pairs, we came up with the idea to use -ce or -s in contrast to mark the word’s intended rendering.  See, for example, the following pairs:

pens / pence — ones / once — hens / hence — fens / fence

We also have singular nouns that end in [nz] like “lens,” and singular nouns like “dance” that end with the sound [ns].  There are additionally pairs like lands / lance, and that last one is particularly relevant.

In addition to the noun “defence,” English has the verb “to defend,” which becomes “he/she/it defends,” and now we have our pair: defends/defence.  (That “d” is almost imperceptible, and often we insert a [d] or [t] sound in between an [n] and [s]/[z] naturally as the result of the tongue’s movement between the two positions. When English people listen to the two words, we pay much more attention to the voicing ([s] v. [z]) than we do to the “presense” or “absense” of the ‘d’.)
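To lay the contrast out side by side, here is a tiny, purely illustrative Python sketch.  The transcriptions are rough, dialect-dependent approximations of mine, but they show the pattern: word-final -s after a nasal signals [z], while -ce signals [s].
# Minimal pairs from above: <-s> after a nasal is typically read [z],
# while <-ce> is read [s].  Transcriptions are rough approximations.
PAIRS = [
    ("pens",    "pɛnz",       "pence",   "pɛns"),
    ("ones",    "wʌnz",       "once",    "wʌns"),
    ("hens",    "hɛnz",       "hence",   "hɛns"),
    ("fens",    "fɛnz",       "fence",   "fɛns"),
    ("defends", "dɪˈfɛn(d)z", "defence", "dɪˈfɛns"),
]

for s_word, s_ipa, ce_word, ce_ipa in PAIRS:
    print(f"{s_word:8} [{s_ipa:12}]   vs.   {ce_word:8} [{ce_ipa}]")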

So, problem solved! Right?  Now we have two clear spellings for two different sounds.  Not quite though.  We continued borrowing words from French and Latin, which almost exclusively used the ‘s’ version, like “tense,” “sense,” and “suspense,” and those don’t use the -ce innovation.  We add additional morphology to words, creating things like “defensive,” which not even the OED spells “defencive.”  So the distinction gets really blurred.  We end up making decisions on an almost word-by-word, morpheme-by-morpheme basis, which change over time.  Very famously, the American spelling of the word in question was influenced by Webster’s dictionary, where he came down on the side of -se.

And so it remains today, where Americans usually spell the noun “defense,” and the rest of the English-speaking world sticks with “defence,” excepting instances where both groups are, understandably, confused.  As a proud Canuck, I’m going to stick with the “Canadian” spelling of the word, defence, insofar as there even is such a thing as a unified Canadian spelling system.  (The careful reader, or at least the overly nationalistic reader, probably already noticed spellings like “behaviour” and “italicised”)

If someone starts going on about how their particular choice of “defence” or “defense” is in any way “better,” “more systematic,” or especially, as I was once told, “more academic” than the other, at least now you can know that it’s a bunch of self-righteous hot air.  Neither spelling rule really works out in the end, and it becomes, as with so much else in language, a matter of choice to conform to group norms, whether the group or the norm actually exists or only exists in the mind of the language user.  In my mind, that’s justification enough.  Excuse me for being defencive about it.