how linguists can deal with grammatical “mistakes”

Thank you for the feedback on my previous post about the difference between linguists and grammar snobs!  This time, all of the feedback, positive and negative, was through personal correspondence, and I don’t have permission to make that public, so you’ll have to take my word on this next bit.  In talking about grammar snobbery, one question that came up more than once was (and I’m summarizing crudely) what I’d do if I came across a mistake, and by extension what I think other linguists would do and other people should do.  In other words, if grammar snobbery is wrong, then what’s right?  That’s a fair question.

While the answers (yes, plural) to that question might differ depending on the context and the mistake, I think the means of answering that question can be summarized with a single, overarching statement.  As a good friend of mine often says, “it all comes down to consequences.”

Everything we do has consequences, and our use of language is no different.  What we say, write, or communicate non-verbally can have positive or negative effects on other people, on our relationships with those people, and in some cases, on our relationships with other people we don’t even know.  A simple sentence can rally a city or a whole nation, as in “Tear down this wall,” or “I’m taking my talents to South Beach.”  On a personal level, if I’m consistently able to come up with perfectly worded witty statements on the spot, I might win more arguments, or win over a girl, or win respect from my peers.  On the other hand, if I put my foot in my mouth all the time and say asinine things as a matter of course, then I might become the Governor of Texas.  If I tack on a “just kidding, lol” to the previous sentence’s little paraprosdokian, then it’s likely that fewer people will take issue with my slighting Rick Perry, but other people will take issue with my use of “lol,” and my friends will think I’m writing this while vaguely drunk.

In sum, any language use has consequences, and there are consequences to using forms that snobs deem to be “bad grammar” as well.  Linguists wouldn’t use the term “bad grammar,” but we should be able to understand what those consequences likely will be and respond appropriately.

Before I try to explain what “responding appropriately” means to me, I should first explain why we wouldn’t say “bad grammar.”  In my last post, I tried to clarify why the very phrase “bad grammar” is confusing and almost comical to linguists; it would be like a chemist saying that there are “good” molecules and “bad” molecules. If some chemistry textbook called strychnine a “bad molecule,” you would have to assume that either the author was being facetious or that “bad” was being used figuratively to mean something more like “potentially fatal if ingested by humans.” In other words, the molecule itself isn’t “bad” per se, but its presence might have consequences that we consider negative.  And there you have it.  The way linguists would respond to instances of non-standard language use is in many ways very similar to that.

To reiterate, grammar is the system of context-dependent form-meaning associations in a speech community. In X circumstances, speakers of language Y say A (or B, C, etc.) in order to communicate Z idea. If something is “ungrammatical,” that means either that there is an important mismatch between the intended meaning and the meaning that is understood, or that the phrase wouldn’t be understood consistently or at all.  The key here, though, is that we have to conceive of the phrase “intended meaning” very broadly so that it includes not just the literal meaning of the words, but also the intended social effect.  That’s a bit tricky, so here’s an example.

I work with many international students who have trouble with English noun phrases, especially when talking about abstract nouns.  Sometimes, the presence of a plural-looking marker doesn’t make that much of a difference, as in the following:

  • I like strawberry.
  • I like strawberries.

On some level, there is a difference between liking the flavor of the fruit and liking the fruit itself.  One could imagine some person who likes strawberry-flavored candy but who doesn’t like eating the berries or vice versa.  On the whole, though, if you were talking with a person who said either of those sentences, you probably wouldn’t be confused.  Sometimes, though, there can be a very large difference, as in these two sentences:

  • I like dog.
  • I like dogs.

Suddenly, the difference between liking the thing itself and liking the flavor of it is more consequential.  If you meant to say one and said the other, then there could be a fairly large and important mismatch between what I would understand and what you intended for me to understand.  Now, if you were one of my international students, and you saw me walking my dog on the street and said “I like dog,” I would assume that you were going to pet her and not that you were imagining how my dog would taste, but if we met each other at a potluck and you said “I like dog” while chowing down on some unlabeled casserole, I might think twice before putting some on my own plate.

“You have a dog?  That’s great.  I like dog. By the way, this is good. Want some?”

Typical grammar snobs would probably respond to someone’s uttering “I like dog” by calling it a “mistake.”  Next, they might add “It should be ‘I like dogs’ because…” and then that’s where it gets really tricky.  What could they say after that?

Some of them might say, “In English, we need the plural in that position.”  Well, no we don’t.  No one says “I like ice creams,” or to take John McWhorter’s example, “I like corns.”

So maybe instead they say, “In English, count nouns require the plural in that position.”  You can count dogs: one dog, two dogs.  You can’t count corns.  You can count kernels of corn, or rows of corn, or something else of corn, but we don’t say one corn, two corns.  That would be closer, although there would still be problems.  Certain nouns can be either count or non-count, like “strawberry” in the example above.  “I like democracy,” and “I like democracies” are both possible sentences; it’s just that they mean slightly different things.  And that’s when they’d finally get at what I think is the right answer:

The sentences “I like dog” and “I like dogs” mean different things to native speakers.  Saying “I like dog” is a “mistake” if you intend to say that you like canines as pets because the way that is typically expressed in English is “I like dogs.”  At this point, we’ve largely abandoned any “prescriptivist” argumentation (i.e. “thou shalt do X and thou shalt not do Y”).  We’ve simply described the observable reality of the world and laid out the consequences of each form so that the language user can make a choice.  If one option is more desirable, then the language user should choose the form that corresponds to that option.  We can only say “you should say ‘I like dogs’” by understanding that behind that “should” is a much more important “If what you mean to communicate is ___.”

At this point, you might say, “But surely you can figure it out from context.  Wouldn’t saying ‘I like dog’ to mean that labradors are cute and lovable be just fine by linguists as long as it’s said in a context where the intended meaning is clear?”  Well… in fact, there are some scholars who make a very similar argument, but I personally wouldn’t go that far, no, for two reasons.

A)  The context often doesn’t disambiguate, or at least doesn’t do so in an objectively clear way.  I work with a law student from China.  I help copy-edit his papers, and I remember once running into trouble with one of the subheadings he’d used.  He knew that he had problems with this very same form, so he had even suggested multiple different options, as if to give me a multiple choice question!  I was immensely entertained, but as it turns out, it wasn’t an easy choice:

  1. A Democracy in China
  2. Democracy in China
  3. The Democracy in China
  4. The Democracy of China

Most people, I think, would agree that (3) doesn’t sound like a plausible English phrase.  It’s  unclear what that one would mean, but the other three options are all possible section titles.  Which one he should use really depends on very subtle distinctions like whether or not he is claiming democracy already exists, whether he is referring to democratic principles and structures in society or to a larger government body, whether or not he is being ironic or cute, and so on.  He may or may not be aware that these functions apply to these language structures, and may or may not be intending to use any one or combination of them.  I might be able to disentangle those things by reading the rest of his paper, but I might get that wrong.  I might accidentally attach a meaning to his subheading that he didn’t intend on being there.  He might even have meant to say something completely different, like “Democracies in China,” and as it turns out, after conferencing with him, we decided on “Chinese-style Democracy,” which sounds far removed from any of those previous options unless you add quotation marks as in “The ‘Democracy’ of China.”

The point is that, in many contexts, we can’t simply rely on the audience to figure it out for us.  If we use a form whose meaning is unclear, then our chances of successfully communicating our intent decrease.  Using the clearest, most unambiguous language possible might still not result in a perfect match; indeed, some theorists argue that a perfect match is an impossibility and we can only hope to approximate understanding.  Even if we assume it is possible in principle, listeners might just not get it, some might not be fully paying attention in the first place, or they might not have native-like listening comprehension abilities.  In other words, we almost never have ideal circumstances for communication, but even under ideal circumstances, communication is a probabilistic, messy process, so we’d do well to maximize our chances.

B) “Intended meaning” doesn’t just concern the literal denotation of any one particular set of words.  Let me tell a story to illustrate.  During my first few days living in Japan, I was happy if the food a waitress gave me vaguely corresponded to what I wanted to eat, let alone what I thought I’d ordered.  I had coughed up some garbled string of Japanese-sounding syllables, and victuals were brought to my seat, very likely in part because of my linguistic efforts or at least because they took pity on me and knew I had money.  Still, success!  I was happy and enjoyed my meal until the time came to think about what I had to say to pay the bill.

That sense of satisfaction for completing simple tasks didn’t last long, though.  After all, I didn’t just want physical sustenance.  I also desired to be thought of as a capable, functioning adult.  Often when people start learning a second language, they get frustrated or embarrassed because they feel like they sound like a child (and, in fact, we all do).  We don’t start out in Japanese 101 expounding on the perils of complementary schismogenesis.  We start by learning how to say simple phrases, but the eventual goal for most people is to be able to say what they want to say, how they want to say it.  For me personally, I wanted to be seen as capable of saying what I wanted to say, how I wanted to say it in Japanese, and if that is my intent, then only a very specific set of linguistic forms will do.  (It’s 相補分裂生成の危難, in case you were wondering… although I imagine you weren’t.)

So, going back to the example of “I like dog,” a linguist would have a couple different answers depending on the situation.  If one of my students is writing to a potential host family abroad and wants to make a good first impression when introducing his likes and dislikes, I could tell him that “I like dogs” is better.  If he asks or cares to know why, I could even add that saying “I like dog” means that he likes the flavor of dog meat, and that while his future host family could probably figure out that he doesn’t mean to say that, he will sound less competent in English than if he had used the other form.  If he doesn’t care or isn’t present for me to terrorize, I’d probably just “correct” it without making a big deal of the situation.  After all, it looks like a simple sentence, but it’s actually a tough distinction to learn, with lots of exceptions and subtleties, and so I can’t reasonably get angry that he hasn’t learned it despite however long it is that he’s been trying to do so.

If another student says “I like dog” to me as I’m walking my dog on the sidewalk, I’ll probably just smile, ignore it, and move on in the conversation.  It’s not really an appropriate time to launch into an exposition on English noun phrases, and the student’s desire to communicate is probably more central than the desire to achieve native-like accuracy in that specific moment.  If I chose to comment on the statement, that might not help the student remember the form anyway.  More likely, switching the topic of conversation from my dog to English grammar would just cause embarrassment and frustration, and maybe even engender some small amount of resentment.  I could always bring it up at some later time when I thought it would be more helpful, but nitpicking then and there might give the false impression that I care only about formal clarity over meaningful exchange of ideas.  In short, it would make me look like a grammar snob, and that is a consequence I definitely want to avoid.


on the difference between linguists and grammar snobs

People often believe that I am the type of person to whom it would be unsafe to write anything containing a grammatical mistake, and while that pains me, I get why. I study Applied Linguistics, and as such I am passionate about language. I think about it often, and I talk about it in casual conversation as if that were a normal thing to do. Moreover–besides being the type of person who would try to get away with using a word like “moreover”–like many in my field, I teach English, which lends even more credence to the notion that I am a linguistic control freak. However, I and, more importantly, most applied linguists would be deeply offended to be grouped together with people who cry in horror at split infinitives, missing apostrophes, and dangling participles (or, for that matter, the “serial comma” I just used), and I think the distinction between what I do and what they do is important to understand.  (For an introduction to this topic in general and to the serial comma specifically, this NPR editorial is a good read.)

There are many different names for people who do that sort of thing, such as “Grammar Nazis” or “the grammar police.” As you can see in the picture below, they even have their own merchandise, and some attempt to put a positive spin on things by calling themselves “Grammar nerds” or “Grammar geeks.” In her hilarious and wonderfully written book, June Casagrande calls them “Grammar snobs,” and for the sake of consistency, I’ll use the term “snobs” to refer to them here, but really, we all know who (excuse me: “whom”) I’m talking about (excuse me: “about whom I’m talking”). We’ve all at some point been witness to, victim of, or perhaps even complicit in their tirades against “improper usage” or simply “bad grammar,” and on the surface it seems like their passion for correct form resembles the work of linguists, but that is really far from the case. Linguists and grammar snobs are in many ways diametrically opposed.  I’d like to try to show why.

Grammar Police Mug

Mantra of the snob, not linguist

First, let me say that correction itself doesn’t bother me. I edit my own writing (and self-correct in speech) quite often. Even in informal written contexts, I sometimes delete and rewrite my Facebook comments as fast as I can in a futile attempt to make it look like I had originally written what I eventually decided was better. I believe there is such a thing as a better, clearer, more powerful means of expression, and that, all else equal, we should pursue it. After all, language is a powerful tool, and Spider-Man has taught us that “with great power comes great responsibility.” I do strongly value comprehensibility, force, deftness, and even beauty in language, but that’s not the same thing as conformity to arbitrary, self-contradictory stylistic edicts of the self-proclaimed elite. In short, I’m not against all of the snobs’ “corrections,” but I end up disagreeing with most of them because I strongly disagree with their means of judging language as grammatical or ungrammatical, and what they mean by both of those terms. So call those grammar snobs what you may–nerds, nazis, or nitpickers–just don’t call them linguists. They aren’t.

Grammar Snob Cat

I’m being rather nonchalant in using the word “they,” as if grammar snobs were some unified, homogeneous cult, but I’m comfortable doing so here because, no matter how diverse the individual snobs may be, “their” handiwork tends to follow a very distinct, uniform pattern. The following is what I see as the typical modus operandi of a grammar snob:

  • Decide before reading or listening to something that formal accuracy is more important than successful communication
  • Read or listen to a given language sample, paying special attention to particular forms, often (though not always) at the expense of the message itself
  • Ignoring the content, label any form that differs from their conception of the norm as wrong, when possible using linguistic-looking jargon
  • (Optional) Add some haughty-sounding phrase and assert that it constitutes what the original speaker or writer “should have said”
  • (Optional, and less frequent) Insult the original speaker or writer

I think it’s clear from that overview why I don’t much respect that whole process. The first two in that list are objectionable enough that whatever happens afterward is moot, but that’s actually not the only reason why linguists and grammar snobs differ in their judgments.  My biggest pet peeve with grammar snobs is that, in a surprisingly large number of cases, in the act of trying to “correct” someone else’s “grammar,” snobs commit three separate but related offenses.

  1. They invoke an argument that has nothing to do with linguistics, grammar, or sometimes even language
  2. The argument itself often isn’t true or even internally consistent
  3. The exchange that results distracts from real, underlying issues in language use and deflects people away from otherwise readily available information on language that is genuinely interesting, empowering, and meaningful

That was long and complicated, so let me break it down with a simple example. In a previous post I ranted about the word “funnest.” Use of that word can push grammar snobs into a long diatribe about how funnest isn’t a word because it’s not in a dictionary. Well, the three problems I just mentioned are all on display in this example:

  1. Dictionaries (especially paper ones) aren’t an appropriate source for determining word/non-word status. That’s just not the kind of argumentation you’d use in linguistics at all.
  2. “Funnest” is listed in several prestigious dictionaries. They assume it isn’t in the dictionary because they think it shouldn’t be there, but in fact the opposite is true.
  3. The issue of what constitutes a “real word” is a fascinatingly icky problem, but if we look at real usage and gather data from (often free, often online) sources like corpora, the data show that “funnest” is as real a word as any other. If people knew about these relatively straightforward tools, they could find out all sorts of things about their own language and how it works.
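To make “look at real usage” a bit more concrete, here is a toy sketch of the kind of frequency check a corpus makes possible.  The three-line “corpus” below is invented for illustration; a real investigation would query millions of words in a resource like COCA or the BNC through its own search interface, but the underlying idea is the same: count attested forms rather than consult intuition.

```python
# A minimal sketch of corpus-style word counting, using a tiny invented
# text sample in place of a real corpus such as COCA or the BNC.
import re
from collections import Counter

# Hypothetical stand-in for real corpus data (a real corpus has millions of words).
corpus_text = """
That was the funnest day of the whole trip.
It was fun, sure, but was it the funnest?
Some say funner and funnest are not words at all.
"""

# Tokenize into lowercase word forms and tally them.
tokens = re.findall(r"[a-z']+", corpus_text.lower())
freq = Counter(tokens)

# If a form occurs in real usage, it is attested, whatever the snobs say.
print(freq["funnest"])  # → 3
```

The principle scales up unchanged: if a form shows up in the data with measurable frequency, it is attested usage, and that is the kind of evidence linguists actually argue from.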

Now, “funnest” is one example of this 1-2-3 pattern (bad argument; false anyway; a better argument says the opposite and is insightful), but there are countless others. I’ll describe just one of those “countless others” below, but before that I thought you might be wondering why I’m riled up about this. After all, maybe grammar snobs have nothing better to do with their time. Maybe–indeed, quite probably–they derive a sadistic pleasure from making snide remarks about other people’s language, and who am I to deny other people pleasure? It’s a free internet, and all that.

The problem is, though, language teachers have to deal with the aftermath of their handiwork. The legacy of this conflation of (often misbegotten) “style guidelines” with real “English grammar” (which, properly understood, are two very different things) is such that our students believe not only that they have to learn these “rules,” but that those rules have some intrinsic value. Some of my own students believe that teachers can (and even should) be evaluated based on their mastery of those rules and their ability to foster mastery thereof in their students, and that’s frankly appalling. While belittling someone for a missing apostrophe is trite and objectionable on its own grounds, for me there is the added grievance that these snobs are interfering with good teaching practice. Grammar snobs make my job, my colleagues’ jobs, and the work of my students harder and more complicated, and for that, they have become the subject of my rant.

There are so many examples of fundamentally flawed grammar snob arguments that it’s tough to choose just one. Who/whom, lie/lay, there/they’re/their, sentence-final prepositions, and effect/affect are each worthy of separate rants, but for here, in my thinly veiled attempt to reach out to people who might think grammar snobbery is a good thing, I’ll talk about one that’s less close to the typical grammar snob’s heart. It’s called “impersonal they.”

Grammar snobs will tell you that the following sentence is malformed:

  • If someone comes looking for me, tell them I’ll be back soon.

Instead, some of them will insist, straight-faced, that the sentence should be:

  • If someone comes looking for me, tell him or her (that) I’ll be back soon.
  • (Alternatively) If people come looking for me, tell them I’ll be back soon.

Their argument is typically that “they” refers to plural subjects, and “someone” refers to singular subjects, so the pronouns don’t agree.  They typically add on that “young people” or “people nowadays” say “they” but that traditionally, English strictly maintained that distinction.  Well, we can go down that checklist I proposed earlier, adding a fourth item for the teaching fallout: 1) Non-linguistic? Yes. 2) Untrue anyway? Yes. 3) Obfuscates interesting language-related issues? Yes. 4) Makes my job harder? Yes. Here’s the breakdown:

1) The argument sounds linguistically based, but it really isn’t. The snobs simply assert that “they” refers to plural subjects only–because snobs said so–and not because that’s how the pronoun actually behaves in the language. Linguists don’t just get to call the shots and say how a language should operate. Our job is to figure out how it actually operates and pass on the relevant parts of that knowledge to our students. If the word “they” is used frequently in the context of an unspecified singular third person–as in fact it is–then that’s part of the grammar of English. Grammar is, very simply, the system of form-meaning associations in a speech community. In certain contexts, certain forms mean certain things and have certain communicative value, and others do not. A linguistic argument against impersonal “they” would have to be phrased like “In formal written contexts, use of ‘they’ or its other case forms to refer to a singular referent is stigmatized and may result in unfavorable reception.”  Even that argument, in my opinion, is flawed (there are quite a few examples of its use in articles published in academic journals like TESOL Quarterly and national newspapers, but they’re usually not salient enough to attract ire). Really, though, that’s not their argument anyway. Their argument can be called any number of things–pretentious, pedantic, petty–but it cannot be called “linguistic.”

2) Their argument is just not true. The distinction between singular and plural pronouns in English has never been that strictly delimited. Many people learn the concept of the “royal we” through Shakespeare, and that’s one example of where a plural form is used in place of the singular. In Shakespearean times, people commonly used the pronoun “thou” to refer to singular persons and “you” for more than one person, but “you” was also used to refer to individual persons formally, and it eventually became the standard for all second-person addresses. More to the point, though, people in the 21st century use “we” in impersonal contexts when saying “I” simply sounds too committal. The sentence “We’re experiencing some cold weather up here” could refer to the people of that area, but really, it doesn’t refer to anyone specifically. It’s just impersonal, like when I said “our students” in a paragraph above. One could just as easily use “I” (or “my students”), but saying “we” depersonalizes the statement. All three pronoun distinctions, then–I/we, thou/you, he/she/they–are (or were) not quite so black-and-white as the grammar snobs would have us believe anyway.  Polysemy (having more than one possible meaning) and situational exceptions in the case of singular and plural forms are not even unusual across languages; English’s fuzziness in that respect is very similar to that of French, German, and many other languages. Separately, the snobs’ assumption that they are “preserving” English is also just false. Impersonal “they” was used more than 600 years ago in The Canterbury Tales by Chaucer, and has enjoyed widespread use consistently since then. It is English, and it has been English for quite some time, but even if it were some “new” development, John McWhorter likens the snobs’ practice of trying to preserve the language to trying to stop the tide from coming in by drying the beach with a towel. I find that a powerful image of how absurd what they’re doing really is.

3) If people weren’t scared off by grammar snobs, engaging them in a conversation about pronoun shifts in English might not sound like what it does today: something that should be prohibited in the Geneva Convention. Pronouns, which you’d think would be rock-solid, are in fact quite fluid and chaotic across languages. Japanese has or had literally dozens of personal pronouns, many of which have shifted meaning drastically over time, and all that despite the fact that pronouns are commonly dropped from speech and writing when not absolutely necessary for comprehension. Previously I discussed how the Japanese second-person omae, or other words like kisama, used to be strictly formal and honorific, but nowadays, only a few generations later, they can be insulting and even vulgar. German, in what I can only imagine is the result of years of its speakers consuming more beer than water, has come to a point where the second-person plural nominative (i.e. plural “you”) and the third-person singular feminine dative (“to her”) share the same form, “ihr,” while the third-person singular feminine, the third-person plural, and the formal second person all converged on another form, “sie” (capitalized as “Sie” in the formal case).  If that whole sentence sounded like confusing nonsense, then you’ve accurately understood it. Even in English, “you” started out as an object form only. A millennium ago, Britons would say “I see you,” but crucially not “*You see me.”  That would sound weird to them, just like saying “Me see you” would sound weird to us. Instead, the form in that position was “ye.”  “Ye see me,” which sounds vaguely pirate-like now, was at one point in history the way people talked in English.

Why would that be?  Shouldn’t pronouns be relatively stable?  We use personal pronouns hundreds of times in daily conversation; they’re some of the most frequent words in the English language. Those are really interesting questions. Indeed, part of the work of linguists is finding answers to those questions.  Unfortunately, though, people don’t tend to think about those questions partially because they think discussions of grammar are exclusively for people who want to feel superior to others.

4) Lastly, and most importantly, this has an impact on English teaching, English teachers, and students learning English. Presumably due to the influence of grammar snobs in the language testing community, I have, to my dismay, seen questions on standardized tests of English that specifically target impersonal “they,” who/whom, lie/lay, and other grammar snob problems. What happens when a high-stakes standardized test like the TOEFL uses items that test mastery of these nonsense maxims?  Am I obligated to teach something I not only don’t believe, but in fact strongly believe against, all the while sacrificing classroom time that I could have otherwise dedicated to activities I feel would be truly beneficial?

Unfortunately, the answer to that last question is “yes.”  Psychometricians call this effect “washback,” and as much as ETS tries to use its power for good, the TOEFL has a long and storied history of negative washback in the ESL classroom. High-stakes exams often have dramatic, real-world consequences, and failure to pass them can cost students hundreds or even thousands of dollars and months of their time, so if I can get students to pass by having them memorize a few nonsense arguments to spew out for the exam and promptly forget thereafter, I will and probably even should. Now, I’m not helpless as a teacher. There are things conscientious teachers can do to deal with it, but that’s another post, and the point here is that we shouldn’t have to “deal with it” in the first place. Neither should my students, and neither should anyone.

So please, when someone tells you they’re studying linguistics or applied linguistics, understand that grammar snobbery is not part of their required coursework. (Note: I just used impersonal “they” twice) In fact, linguists often work against grammar snobs, advocating for our students, or simply advocating for logic, and I’d like to think we’re winning the war. Slowly but surely, awareness of the hypocrisy of the grammar police is spreading, as evidenced by educational websites, classes, and even comic skits (though I warn you, that last one isn’t family friendly). On the other hand, one could just as easily cite examples of other self-described grammar experts who continue to misinform and miseducate, but even they have to find ways to explain away their many fallacies instead of simply going unquestioned, and that’s good. The end result of those questions is, I believe, a fuller understanding of language, and that is what the job of linguists is all about.  (Excuse me: “That is all about which the job of linguists is.”)

“Blwyddyn Newydd Dda” and other chances to think about linguistic relativity

For those unfamiliar with Welsh, that’s “Happy New Year,” and to my knowledge it’s pronounced /blʊ͡iðɨ̞n nɛ͡ʊiðːɑː/.  Yeah, good luck with that.

I’ve been remiss about posting for quite some time. I’ve written quite a few drafts, some of which I might finally get out soon™, but I thought at least I could get this note in first.  To me, the end of the year, or really any internationally celebrated holiday, is a great chance to think about linguistic relativity.  That sounds unforgivably nerdy, but it is actually very interesting, I assure you.

Proponents of linguistic relativity posit that the syntactic structure, lexicon, semantics, pragmatics, and other properties of the languages we speak influence the way we think about, and ultimately the way we see and understand, the world around us.  For example, people have suggested that the reason English speakers perceive brown and orange as different colours but dark blue and light blue as different shades of the same colour is that English treats them that way.  It’s true that we have words for different shades of blue like cerulean, azure, and zaffre, but in Italian or Russian, for example, dark blue and light blue have different labels that represent different colour categories. Conversely, in many languages all around the world, orange and brown might have separate words but they are considered two shades of the same category, or even both part of a yet larger category of “red.”  Reading this post, you could ask yourself what colour you think the side border on this webpage is.  In this case, linguistic relativity is the idea that people’s answers to that question–and other, more interesting questions–may differ depending on the properties of the language(s) they speak.
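For the programmatically inclined, the idea of language-specific category boundaries can be sketched as a toy lookup.  Everything here (the labels, the split points, the invented third language) is a deliberately crude stand-in for real colour science, meant only to show how one and the same stimulus can map to different basic colour terms in different languages.

```python
# Toy model: one colour stimulus, different basic colour terms by language.
# The categories and boundaries below are invented simplifications for
# illustration, not real colour science.

def colour_label(language, hue, is_light):
    """Return a basic colour term for a (hue, light/dark) stimulus."""
    if hue == "blue":
        # Russian obligatorily splits the blue region into two basic terms;
        # English covers the whole region with one.
        if language == "russian":
            return "goluboy" if is_light else "siniy"
        return "blue"
    if hue == "orange-brown":
        # English splits this region in two; in our hypothetical "language X"
        # the whole region folds into a larger "red" category.
        if language == "english":
            return "orange" if is_light else "brown"
        return "red"
    return hue  # fall through for hues the toy model doesn't cover

# The same light-blue stimulus gets one label in English, another in Russian.
print(colour_label("english", "blue", True))   # → blue
print(colour_label("russian", "blue", True))   # → goluboy
print(colour_label("russian", "blue", False))  # → siniy
```

The interesting empirical question, of course, is not whether the lookup tables differ (they plainly do) but whether having different tables changes anything else about how speakers perceive or remember the stimuli.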

In the late 1800s and early 1900s, some linguists, most famously Edward Sapir and Benjamin Lee Whorf, went further to suggest that people who grow up learning a language where there is no word or expression for a certain concept might not be able to conceive of that idea at all, or at least until they learn words for it.  In other words, your language determines and delimits your thought processes.  That idea, which has since been called the “strong” version of the Sapir-Whorf Hypothesis, has been pretty resoundingly refuted by experimental data from studies in cognitive psychology, anthropology, and linguistics.  As for the example from above, native speakers of languages that have only three words for colour in the entire language (“light,” “dark,” and “red”) performed very similarly to Italians and English speakers in colour matching and identification tasks administered in the early 1900s, and by the 1980s, cognitive psychologists had accumulated masses of evidence suggesting that colour perception, while in fact differing between individual people, depends on factors that are more biological than linguistic.

However, the “weak” version of the Sapir-Whorf Hypothesis–that our language and our thoughts mutually influence each other within larger, neuro-biological parameters–has had quite a lot of empirical support.  In this example, the weak hypothesis would hold that you might choose different labels for the colour on the sidebar like orange or red (when in fact, of course, it is vermilion, haha), and those differences in labeling might cause you to associate the bar with different objects and events and respond to the colour differently, but the actual colour you perceive, while potentially different from the one I or other people perceive, would not be different from the one you yourself would have perceived had you grown up speaking a different language.

This all speaks to some timeless questions: how do our thoughts determine what we say, and does what we say influence our thoughts in that moment or thereafter?  How are cultural differences represented, manifested, or even created through language?  Certain events, and holidays most prominently, bring those questions to the forefront for me.

As for the New Year, like many major holidays, it has a set expression to celebrate it.  In English, that expression is “Happy New Year,” but in other languages, the exact wording often subtly differs.  For me, that raises the question of whether that wording makes any difference.  Do speakers of other languages have different ideas in their minds when they say the equivalent of “Happy New Year” in their own language?  I thought I’d look at the two languages with which I’m most familiar, French and Japanese. First, French:

Bonne Année – word for word: “good year.”  There’s probably no meaningful difference between “good” and “happy” here.  The “happy” we say in English is, as linguists say, semantically bleached.  We don’t really think about contentedness specifically when we count down to midnight and start singing “Auld Lang Syne.”  Rather, “happy” is used as a placeholder term in that and all sorts of other situations when we wish to congratulate someone, as in “Happy Birthday,” “Happy Valentine’s Day,” or “Happy Anniversary,” and that’s actually very similar to the French “Bonne.”

Also like French, some English expressions obligatorily use synonymous adjectives in combination with other holidays, as in English’s “Merry Christmas” or French’s “Joyeux Noël.”  Is it really because we feel merriness at Christmas and happiness during other holidays that saying “Merry Valentine’s Day” or “Joyeux Année” is impossible?  Not really, no; it’s just a set expression. Further, in Britain and several other English-speaking countries, you can say “Happy Christmas” and not sound like you messed the expression up.  These are arbitrary, fixed chunks of language that can be reduced to [positive term for congratulations]+[name of the event], and you could say that the underlying semantics of both expressions are very similar or even no different in any meaningful sense.
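Reduced that far, the template can even be sketched as a toy function.  To be clear, everything in this snippet–the dictionary, the function name, the whole idea of “composing” a greeting–is my own illustrative invention, not real linguistic software:

```python
# Toy model of the fixed greeting template discussed above:
#   [positive term for congratulations] + [name of the event]
# The data below is illustrative and far from exhaustive.

GREETING_TERMS = {
    "English": "Happy",
    "French": "Bonne",       # or "Joyeux"/"Joyeuse", depending on the event's set phrase
    "Japanese": "omedetou",  # attaches after the event instead of before it
}

def greet(language, event):
    """Compose a set greeting from the shared template.

    Word order differs: English and French put the positive term first,
    while Japanese appends "omedetou" after the event.
    """
    term = GREETING_TERMS[language]
    if language == "Japanese":
        return f"{event} {term}"
    return f"{term} {event}"

print(greet("English", "New Year"))    # Happy New Year
print(greet("French", "Année"))       # Bonne Année
print(greet("Japanese", "tanjoubi"))  # tanjoubi omedetou
```

The word-order flip for Japanese anticipates the omedetou pattern discussed later in the post; the point of the sketch is just how little, beyond that, the template needs to change across the three languages.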

“But wait!” you say. There is, of course, the matter of “New.”  The French expression is, linguistically at least, ambiguous. The “Année” in “Bonne Année” could refer to the current year that just ended or the year to come, but from talking to French-speaking people, and from celebrating one New Year’s in Montreal where the signs said “Bonne Année 2006” on January 1, 2006, I think the latter is much more likely.  Does having the “new” in English really make the newness of it any more salient or important?  Do French people not think about the year to come as new or think of it as somehow less new because they do not say the word “nouvelle” at midnight?  As my former syntax professor would say, “I’m not sure we’d want to say that,” by which he really means “That’d be flat-out wrong.”  So let’s move on.  What about Japanese?

明けましておめでとう – akemashite omedetou – a word for word translation is tough, but I’ll start with the second word. “Omedetou” straightforwardly means “congratulations,” and in this context it has the same function as English’s “Happy” or French’s “Bonne.”  The only difference is that, in Japanese, you can connect the word omedetou directly to the event worthy of congratulations to make a whole phrase.  You can say “tanjoubi omedetou” (birthday congratulations) to mean “Happy Birthday,” or “gosotsugyou omedetou” (graduation congratulations) to celebrate someone’s graduation.  In English and French, the word “Congratulations” is usually said on its own as a complete utterance.  We could in principle, I suppose, say “Congratulations on your birthday,” but most people functioning within normal parameters wouldn’t.  On the other hand, we couldn’t just say “Happy!” and be done with the sentence.  “Bonne” and “Happy” are only used as part of larger expressions almost as if to make up for the ungrammaticality of “Congratulations” (or “Félicitations”) in that position, whereas “Omedetou” can be said on its own or as part of a whole.  In sum, we could say that the word “omedetou” is similar enough to its English and French counterparts that we wouldn’t predict any differences in the underlying thoughts and feelings it represents.

“Akemashite,” on the other hand, is quite a bit different.  It’s the -te (participial) form of the verb akeru, so a preliminary translation of the whole expression would be “Congratulations on (the) [akeru]ing.”  Akeru can mean “to reveal,” as in the following expression:

  • 真実を打ち明ける
  • shinjitsu-o uchi-akeru
  • truth-ACC forceful-AKERU
  • “to reveal the truth,” “to divulge the truth”

Akeru can also mean “to dawn,” as in the following:

  • 夜が明けた
  • yoru-ga ake-ta
  • night-NOM AKERU-past
  • “dawn has come”

As you can see, here’s where it gets tricky.  Yoru means “night,” and it is the subject of that sentence, but we don’t think of the night as “dawning,” because the night is ending, not beginning, when the sun rises.  Rather, just like “the truth” in the first example, the verb means something closer to “opening up to reveal something new.”  For New Year’s, we can think of 2012 as unfolding, dissolving, or even better, giving way to the next year.  The Japanese expression then, is something close to “Congratulations on [the previous year’s] giving way [to the new one].”  Going back to the “New” from the English-French comparison, in this case, one might even say that the Japanese expression is explicitly referring to the year that is coming to a close, except that wouldn’t be entirely correct, either.  In Japanese, you can also say the following:

  • 年が明けた
  • toshi-ga ake-ta
  • year-NOM AKERU-past
  • “a new year has dawned”

So, instead of being limited to the past, it refers to the transition between the period that is ending and the next that is beginning.  In other words, it refers to a specific transition event–in this case, the turning of the calendar–and that’s analogous to the Western versions.  Now, the Japanese expression is perhaps much more illustrative than the French, English, or Welsh versions, but I’d doubt that Japanese people are much more likely to wax all poetic about the year that is closing as a result of those linguistic differences alone.  In fact, there’s linguistic evidence to suggest otherwise.  First and foremost, on January 1st, Japanese people often extend the New Year’s greeting to say 新年明けましておめでとう (shinnen akemashite omedetou), where “shinnen” means the “New Year.”  This, I think, constitutes strong evidence of semantic bleaching, as it seems more likely that the phrase commemorates the event itself in an abstract sense and not that the phrase is literally referring to the new year and the past year together consecutively in some incoherent jumble.  Granted, the grammaticality of the longer phrase is the source of some debate among Japanese people, but its widespread use is undeniable.  Second, the verb akeru written 明ける is also homophonous with other verbs like 開ける which means “to open” or, figuratively, “to begin,” and Japanese children commonly confuse which akeru belongs in the New Year’s celebratory address.  In other words, people know what the expression is, but that doesn’t mean that they’re analyzing it for its deeper meaning, including what time it specifically refers to, every time they utter it, just like we don’t inevitably think of happiness per se.  Again, I’m tempted to reduce the whole utterance to [positive term for congratulations]+[name of the event], and I challenge you to suggest otherwise!

So, that was probably a lot more information than you wanted about collocations for New Year’s celebrations, but hopefully the message you’re taking away from this is similar to the one I have: there are subtle, interesting distinctions between the ways that different peoples talk about the New Year, but the underlying sentiment behind many of those expressions is remarkably similar.

That’s not to say that English, French, and Japanese people have the same associations with the New Year or even with that expression, of course.  Our customs, traditions, and, by extension, our personal experiences are vastly different.  Compare, for example, this:


with this:


The bottom picture features an event called 除夜の鐘, joya no kane, “The New Year’s Eve bell.”  On New Year’s Eve, monks ring the largest bell in the monastery 108 times to signify the 108 earthly desires (bonnō) of Buddhist teaching.  New Year’s as a whole is a more somber, family-oriented, quasi-religious holiday in Japan, something starkly different from the festive, youth-oriented, mostly secular Western tradition.  I’d even go so far as to say that it’s remarkable that our customs can differ so much and that the language is, in many ways, so similar.

Having experienced New Year’s in France, Japan, and various places in the US and Canada, my personal associations with the holiday are diverse, so using one of those perfunctory expressions might just be confusing and esoteric.  Instead, I’ll be more explicit:

A very happy 2013 to you and yours.  I wish you the very best!