how linguists can deal with grammatical “mistakes”

Thank you for the feedback on my previous post about the difference between linguists and grammar snobs!  This time, all of the feedback, positive and negative, was through personal correspondence, and I don’t have permission to make that public, so you’ll have to take my word on this next bit.  In talking about grammar snobbery, one question that came up more than once was (and I’m summarizing crudely) what I’d do if I came across a mistake, and by extension what I think other linguists would do and other people should do.  In other words, if grammar snobbery is wrong, then what’s right?  That’s a fair question.

While the answers (yes, plural) to that question might differ depending on the context and the mistake, I think the means of answering that question can be summarized with a single, overarching statement.  As a good friend of mine often says, “it all comes down to consequences.”

Everything we do has consequences, and our use of language is no different.  What we say, write, or communicate non-verbally can have positive or negative effects on other people, on our relationships with those people, and in some cases, on our relationships with other people we don’t even know.  A simple sentence can rally a city or a whole nation, as in “Tear down this wall,” or “I’m taking my talents to South Beach.”  On a personal level, if I’m consistently able to come up with perfectly worded witty statements on the spot, I might win more arguments, or win over a girl, or win respect from my peers.  On the other hand, if I put my foot in my mouth all the time and say asinine things as a matter of course, then I might become the Governor of Texas.  If I tack on a “just kidding, lol” to the previous sentence’s little paraprosdokian, then it’s likely that fewer people will take issue with my slighting Rick Perry, but other people will take issue with my use of “lol,” and my friends will think I’m writing this while vaguely drunk.

In sum, any language use has consequences, and there are consequences to using forms that snobs deem to be “bad grammar” as well.  Linguists wouldn’t use the term “bad grammar,” but we should be able to understand what those consequences likely will be and respond appropriately.

Before I try to explain what “responding appropriately” means to me, I should first explain why we wouldn’t say “bad grammar.”  In my last post, I tried to clarify why the very phrase “bad grammar” is confusing and almost comical to linguists; it would be like a chemist saying that there are “good” molecules and “bad” molecules. If some chemistry textbook called strychnine a “bad molecule,” you would have to assume that either the author was being facetious or that “bad” was being used figuratively to mean something more like “potentially fatal if ingested by humans.” In other words, the molecule itself isn’t “bad” per se, but its presence might have consequences that we consider negative.  And there you have it.  The way linguists would respond to instances of non-standard language use is in many ways very similar to that.

To reiterate, grammar is the system of context-dependent form-meaning associations in a speech community. In X circumstances, speakers of language Y say A (or B, C, etc.) in order to communicate Z idea. If something is “ungrammatical,” that means either that there is an important mismatch between the intended meaning and the meaning that is understood, or that the phrase wouldn’t be understood consistently or at all.  The key here, though, is that we have to conceive of the phrase “intended meaning” very broadly so that it includes not just the literal meaning of the words, but also the intended social effect.  That’s a bit tricky, so here’s an example.

I work with many international students who have trouble with English noun phrases, especially when talking about nouns in the abstract.  Sometimes, the presence or absence of a plural marker doesn’t make that much of a difference, as in the following:

  • I like strawberry.
  • I like strawberries.

On some level, there is a difference between liking the flavor of the fruit and liking the fruit itself.  One could imagine some person who likes strawberry-flavored candy but who doesn’t like eating the berries or vice versa.  On the whole, though, if you were talking with a person who said either of those sentences, you probably wouldn’t be confused.  Sometimes, though, there can be a very large difference, as in these two sentences:

  • I like dog.
  • I like dogs.

Suddenly, the difference between liking the thing itself and liking the flavor of it is more consequential.  If you meant to say one and said the other, then there could be a fairly large and important mismatch between what I would understand and what you intended for me to understand.  Now, if you were one of my international students, and you saw me walking my dog on the street and said “I like dog,” I would assume that you were going to pet her and not that you were imagining how my dog would taste, but if we met each other at a potluck and you said “I like dog” while chowing down on some unlabeled casserole, I might think twice before putting some on my own plate.

"I like dog. By the way, this is good.  Want some?"

“You have a dog?  That’s great.  I like dog. By the way, this is good. Want some?”

Typical grammar snobs would probably respond to someone’s uttering “I like dog” by calling it a “mistake.”  Next, they might add “It should be ‘I like dogs’ because…” and then that’s where it gets really tricky.  What could they say after that?

Some of them might say, “In English, we need the plural in that position.”  Well, no, we don’t.  No one says “I like ice creams,” or, to take John McWhorter’s example, “I like corns.”

So maybe instead they say, “In English, count nouns require the plural in that position.”  You can count dogs: one dog, two dogs.  You can’t count corns.  You can count kernels of corn, or rows of corn, or something else of corn, but we don’t say one corn, two corns.  That would be closer, although there would still be problems.  Certain nouns can be either count or non-count, like strawberries.  “I like democracy” and “I like democracies” are both possible sentences; it’s just that they mean slightly different things.  And that’s when they’d finally get at what I think is the right answer:

The sentences “I like dog” and “I like dogs” mean different things to native speakers.  Saying “I like dog” is a “mistake” if you intend to say that you like canines as pets because the way that is typically expressed in English is “I like dogs.”  At this point, we’ve largely abandoned any “prescriptivist” argumentation (i.e. “thou shalt do X and thou shalt not do Y”).  We’ve simply described the observable reality of the world, and laid out the consequences of each form so that the language user can make a choice.  If one option is more desirable, then the language user should choose the form that corresponds to that option.  We can only say “you should say ‘I like dogs’” by understanding that behind that “should” is a much more important “If what you mean to communicate is ___.”

At this point, you might say, “But surely you can figure it out from context.  Wouldn’t saying ‘I like dog’ to mean that labradors are cute and lovable be just fine by linguists as long as it’s said in a context where the intended meaning is clear?”  Well… in fact, there are some scholars who make a very similar argument, but I personally wouldn’t go that far, no, for two reasons.

A)  The context often doesn’t disambiguate, or at least doesn’t do so in an objectively clear way.  I work with a law student from China.  I help copy-edit his papers, and I remember once running into trouble with one of the subheadings he’d used.  He knew that he had problems with this very same form, so he had even suggested multiple different options, as if to give me a multiple choice question!  I was immensely entertained, but as it turns out, it wasn’t an easy choice:

  1. A Democracy in China
  2. Democracy in China
  3. The Democracy in China
  4. The Democracy of China

Most people, I think, would agree that (3) doesn’t sound like a plausible English phrase.  It’s unclear what that one would mean, but the other three options are all possible section titles.  Which one he should use really depends on very subtle distinctions like whether or not he is claiming democracy already exists, whether he is referring to democratic principles and structures in society or to a larger government body, whether or not he is being ironic or cute, and so on.  He may or may not be aware that these functions apply to these language structures, and may or may not be intending to use any one or combination of them.  I might be able to disentangle those things by reading the rest of his paper, but I might get that wrong.  I might accidentally attach a meaning to his subheading that he didn’t intend to be there.  He might even have meant to say something completely different, like “Democracies in China,” and as it turns out, after conferencing with him, we decided on “Chinese-style Democracy,” which sounds far removed from any of those previous options unless you add quotation marks as in “The ‘Democracy’ of China.”

The point is that, in many contexts, we can’t simply rely on the audience to figure it out for us.  If we use a form whose meaning is unclear, then our chances of successfully communicating our intent decrease.  Using the clearest, most unambiguous language possible might still not result in a perfect match; indeed, some theorists argue that a perfect match is an impossibility and we can only hope to approximate understanding.  Even if we assume it is possible in principle, listeners might just not get it, some might not be fully paying attention in the first place, or they might not have native-like listening comprehension abilities.  In other words, we almost never have ideal circumstances for communication, but even under ideal circumstances, communication is a probabilistic, messy process, so we’d do well to maximize our chances.

B) “Intended meaning” doesn’t just concern the literal denotation of any one particular set of words.  Let me tell a story to illustrate.  During my first few days living in Japan, I was happy if the food a waitress gave me vaguely corresponded to what I wanted to eat, let alone what I thought I’d ordered.  I had coughed up some garbled string of Japanese-sounding syllables, and victuals were brought to my seat, very likely in part because of my linguistic efforts or at least because they took pity on me and knew I had money.  Still, success!  I was happy and enjoyed my meal until the time came to think about what I had to say to pay the bill.

That sense of satisfaction for completing simple tasks didn’t last long, though.  After all, I didn’t just want physical sustenance.  I also desired to be thought of as a capable, functioning adult.  Often when people start learning a second language, they get frustrated or embarrassed because they feel like they sound like a child (and, in fact, we all do).  We don’t start out in Japanese 101 expounding on the perils of complementary schismogenesis.  We start by learning how to say simple phrases, but the eventual goal for most people is to be able to say what they want to say, how they want to say it.  For me personally, I wanted to be seen as capable of saying what I wanted to say, how I wanted to say it in Japanese, and if that is my intent, then only a very specific set of linguistic forms will do.  (It’s 相補分裂生成の危難, in case you were wondering… although I imagine you weren’t)

So, going back to the example of “I like dog,” a linguist would have a couple different answers depending on the situation.  If one of my students is writing to a potential host family abroad and wants to make a good first impression when introducing his likes and dislikes, I could tell him that “I like dogs” is better.  If he asks or cares to know why, I could even add that saying “I like dog” means that he likes the flavor of dog meat, and that while his future host family could probably figure out that he doesn’t mean to say that, he will sound less competent in English than if he had used the other form.  If he doesn’t care or isn’t present for me to terrorize, I’d probably just “correct” it without making a big deal of the situation.  After all, it looks like a simple sentence, but it’s actually a tough distinction to learn, with lots of exceptions and subtleties, and so I can’t reasonably get angry that he hasn’t learned it, however long he’s been trying.

If another student says “I like dog” to me as I’m walking my dog on the sidewalk, I’ll probably just smile, ignore it, and move on in the conversation.  It’s not really an appropriate time to launch into an exposition on English noun phrases, and the student’s desire to communicate is probably more central than the desire to achieve native-like accuracy in that specific moment.  If I chose to comment on the statement, that might not help the student remember the form anyway.  More likely, switching the topic of conversation from my dog to English grammar would just cause embarrassment and frustration, and maybe even engender some small amount of resentment.  I could always bring it up at some later time when I thought it would be more helpful, but nitpicking then and there might give the false impression that I care only about formal clarity over meaningful exchange of ideas.  In short, it would make me look like a grammar snob, and that is a consequence I definitely want to avoid.

why ‘funner’ should be an accepted word

For most people, the litmus test for whether a word is a “real word” or not is its inclusion in or exclusion from a dictionary, and especially a big, fat, haughty-looking paper dictionary.  Erin McKean eloquently describes how the lexicographers who make those dictionaries disagree with that approach here, but it wouldn’t take long for even complete lexicographical amateurs to start to see the holes in that line of logic.  New words are added to the dictionary every year; were they just figments of the imagination until that time?  More to the point, the act of printing a word in a way immortalizes it, such that it remains in the dictionary long after it ceases to have any meaning at all.  For example, it is both tragic and frankly absurd that the Oxford English Dictionary accepts the words “funniment” and even, I kid you not, “funniosity,” but not funner or funnest.  Are those words, which have fewer examples of use in written or spoken English than the number of letters they contain, somehow better or more real?

But we mustn’t forget, those who deny “funner” often state, that “fun” is only a noun, not an adjective.  We only have comparative and superlative forms for adjectives, not for nouns, so “funner” and “funnest” must be figments of our imagination.  To their credit, the word “fun” certainly is used as a noun quite often.  We can have a lot of fun; we can’t have a lot of enjoyable.  We can “make fun of” someone, just as with other noun constructions like “making sense of” something.  On the other hand, there’s nothing about the status of “fun” as a noun that makes it any less viable as an adjective; there are literally hundreds of noun-adjective homonyms in English (e.g. every colour word in the language: a red firetruck or the deep red associated with it).

Still, some dictionaries like Oxford’s American online dictionary or the American Heritage Dictionary stubbornly pigeon-hole “fun” as a noun, accepting its exceedingly rare use as a verb (meaning something akin to “tease” or “joke with”) while labeling the adjectival form “informal” or “slang.”  That’s a little, well, funny, given that the very same American Heritage Dictionary’s citation for informal use of fun as an adjective is a quote from Margaret Truman, daughter of President Harry Truman, in a public speech in the 1950s.  It’s weirder still given that the word “fun,” even when used as a noun, is not exactly among the snootier choices for that concept in the English lexicon.  How is “I’m having fun” any more formal than “this is a fun party”?  Oxford goes even further, though, stating that “the comparative and superlative forms funner and funnest are sometimes used but should be restricted to very informal contexts.”  Notice: “should be.”  Who the hell do those Oxford braggarts think they are?

While I was looking this up, I found several comments from anonymous online contributors saying that adjectival “fun” was some sort of “new development” and that it was only because it was new that it hadn’t been accepted yet.  Well, frankly, no.  There are plenty of newer words that have been accepted, like “gramophone” and “photograph,” and even newer adjectives like “toasty,” “photographic,” and even words like “fugly.”  More importantly, however, it would be a mistake to say that the adjectival reading is some sort of neologism, and that it’s only in today’s materialist, consumerist culture that uneducated young people (and Margaret Truman) have started using the word improperly.  First off, that argument’s hard to reconcile with uses like “this is a fun little item,” spoken by a professor during an academic lecture recorded in the MICASE corpus.  Indeed, in that corpus, the word “fun” was used almost a third of the time in contexts that only allow for an adjectival reading.  Second, in terms of the word’s etymology, funner and funnest both date back to the 18th century by conservative estimates.  So far as anyone can tell, “fun” probably comes from the Middle English word “fon,” from which we get other words like “fondle” and expressions like being “fond of” something.  Interestingly, that word was a verb, noun, and adjective, and even had the -ly suffix attached to it to make an adverb.  The adjective form is at least as old as the nominal reading.  (If you ever need entertainment, just try adding -ly to random nouns around you and then try to make sense of the result.  You’ll find that words like “bedly,” “pillowly,” and “windowly” don’t quite roll off the tongue.)

Here’s the real kicker, though.  Since 2010, Merriam-Webster has listed the comparative and superlative forms as legitimate words, and it’s not alone.  The Scrabble Dictionary, which, if not exactly the pinnacle of lexicographic achievement, often plays a key role in word/non-word disputes, has included both funner and funnest since 2008.  In sum, a dictionary is a piss-poor means of determining a word’s status as “real” or not, but even under the dictionary rule, funner and funnest are in a grey area.

Why, then, do ill-informed pedants still swagger about denouncing users of funner as “stupid” or “uncultured”? (Real quotes)  Some insist on “correcting” phrases like “the funnest party ever” to the distinctly less natural “the most fun party ever.”  At least that rules out the possibility of a noun interpretation.  You can’t have “the most enjoyment party ever.”  For adjectives, when we make comparative and superlative forms in English, we follow a pretty straightforward mechanism.  If it’s a single-syllable adjective, it gets -er/-est; if it has three or more syllables, then it’s always more/the most X; if it has two syllables, it’s more complicated and depends on the endings and stress position (cf. narrower vs. *politer), but that’s a side point.  Fun has one syllable.  So why would “fun” behave differently than every other single-syllable adjective in the English language?  (That’s actually an overstatement. There are examples like “bored” that act as adjectives, but I think it’s obvious how that’s quite different)
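
If it helps to see that mechanism spelled out, here is a minimal sketch of the heuristic in Python.  It’s a toy under stated assumptions: syllable counts are supplied by hand, only two common spelling adjustments are handled, and the two-syllable cases are deliberately left undecided, since (as noted above) they depend on endings and stress that a simple rule can’t capture.

    VOWELS = set("aeiou")

    def comparative_superlative(adj, syllables):
        """A toy version of the heuristic described above. Syllable
        counts are given by hand; real usage also depends on endings,
        stress, and plain convention."""
        if syllables >= 3:
            # Three or more syllables: always periphrastic ("more"/"most").
            return f"more {adj}", f"most {adj}"
        if syllables == 2:
            # Two syllables: genuinely unpredictable (narrower, *politer),
            # so this sketch refuses to guess.
            return None
        # One syllable: suffix -er/-est, with two common spelling tweaks.
        stem = adj
        if stem.endswith("e"):  # nice -> nicer
            stem = stem[:-1]
        elif (len(stem) >= 3
              and stem[-1] not in VOWELS | set("wxy")
              and stem[-2] in VOWELS
              and stem[-3] not in VOWELS):  # fun -> funner, big -> bigger
            stem += stem[-1]
        return stem + "er", stem + "est"

    for adj, n in [("fun", 1), ("tall", 1), ("narrow", 2), ("beautiful", 3)]:
        print(adj, "->", comparative_superlative(adj, n))

Run it, and “fun” comes out exactly like every other monosyllabic adjective: funner, funnest.  The only thing blocking those forms is convention, not the morphology.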

One might say that “it sounds wrong,” but in my opinion that’s probably attributable to our experience of being told that it’s supposed to sound wrong.  I doubt that there’s a native English speaker alive today who hasn’t at some point, probably when they were quite young, said funner or funnest and felt that it was perfectly natural.  We learn that it’s wrong, but we also learn all sorts of things that are later proven incorrect or, at least, vastly oversimplified.  Most of us also learn (from Shirley Jackson or somewhere else) that blindly, unquestioningly following the status quo is not a recipe for success.

The last bastion of hope for the naysayers is to resort to the argument that “only children say funner.”  At least then we could say that the word is an age-based dialectal marker.  On first glance, that might be appealing, but it turns out that it’s not just children who use the word in informal settings, either.  Bono said “the funnest thing” in his interview with 60 Minutes, and multiple speakers have used funner and/or funnest on NBC’s Meet the Press–hardly the equivalent of schoolyard chats.  In written correspondence, the word “funnest” had a brief period of fairly widespread use as early as the 1820s according to Google Ngrams.  According to COCA, newspapers like The New York Times and USA Today and even academic articles have printed the words dozens of times, but always, of course, quoting someone else saying it, and usually a teenager or young person.  If professionals were to use the word without being facetious or campy, then they would probably be ridiculed for it.  But why?  On what grounds?  So far as I can see, the only reason over-zealous editors continue to stamp out the word is because they’ve had it stamped out of them.

I’m not saying that the words are well-suited to formal academic writing.  They’re clearly not, but first, that’s probably more due to semantics than morphology; it’s very rare that formal writing calls for comparative/superlative, subjective judgments of amusement levels anyway.  Second and more to the point, the word “toasty” (among plenty of others) is even less well-suited to formal written contexts, yet dictionaries don’t attach a derogatory “informal/slang” label to it, and people don’t seem to have a problem with that word’s existence.  Funner and funnest are picked on because they’re frequent examples, but it’s precisely because they are so frequent that we should just get over it already and accept the words for what they are: highly useful communicative tools.

As Erin McKean says, if we embrace our language for the diverse, chaotic wonder it is instead of trying to police it, we’d probably lead happier lives.  It’s no wonder that she’s such a bubbly personality; she gets paid to study how words work, and she sees words for what they are: a means of “windowly” viewing into the infinite variability of the human experience.  Viewed in that light, lexicography sounds like one of the funnest professions I could imagine.

a follow-up on American pragmatics

[Edit:  It took some time for me to figure out that replies weren’t being posted until they were “approved,” which is frustrating. Anyway, I encourage reading the responses.  One is longer than my post, but worth it.]

It’s been a while.  I’ve had a lot to say, but a lot has happened.  Anyway, let’s pick up where I left off.  Last time, I had a rather long rant on why I dislike it when Americans say “uh-huh” in response to “thank you.”  I’m very grateful that it sparked quite a lot of conversation, mostly with people who disagreed with me, haha!  Now, it is very rare that, after taking a rather extreme stance on an issue and having subsequent conversations with people who disagree with me on the subject, I end up at an even more extreme stance than I was previously.  Normally, I’m forced to acknowledge my oversights and adopt a more moderate, nuanced position.  But exactly the opposite has happened in terms of my previous rant on that most annoying of Americanisms: uh-huh.  It is rude, gawsh darn’t, and let me tell you why.

In part, my new stance comes from more data collection.  In the past few weeks, I’ve noticed that Americans often say “yeah” and “uh-huh” not just in response to thank you, but also in response to “sorry.”  I looked around on Google Scholar and my university’s library research site trying to find anything on this, and I haven’t found anything yet (Suszczynska (1999) gets pretty close, though).  I’m making a tentative conclusion that it’s either a new phenomenon, too obscure to get published, or boring to most people.  It might be obscure and/or new, but it is anything but boring.  It gives us a linguistic window into how society works, and how societies might differ, in particular with respect to politeness.

What is “politeness”?  To be more to the point, what is “rudeness”?  Last post, I said that it would be difficult to call a commonly-used phrase “impolite” because, in so many words, its use within a community determines its meaning and value.  The fact that “uh-huh” is socially accepted by Americans as a response to thank you means that, at least within that community, it does not have a negative effect on interpersonal relations and, by extension, we would have to concede that it is not impolite.  That’s not the same thing as being “polite,” because that would imply engendering positive interpersonal relations, and this “uh-huh” seems more neutral.  Now, the American “uh-huh” is noteworthy because the social need for marking deference that many theorists postulated as universal is, at first glance, being ignored.  If we look further, though, we might see that a casual “thank you” in American usage might not imply the same types of social correlates that it does in other English varieties, and that the reason “uh-huh” is acceptable might be that the preceding “thank you” is being used differently.

Or so I thought.

But then how do we explain away “uh-huh” as a response to “sorry”?  Frankly, when I say “sorry” or “excuse me” to a person and get “yep” in reply, I want to retract my apology and punch the person.  It’s not so bad that I’d go around punching people in the face, but I might punch them in the part of the chest that connects to the shoulder, pretending that I meant to hit them in the shoulder, where it hurts less and is considered just a joke.  No joke.

To step back into the abstract for a moment, Bergman and Kasper (1993) and Suszczynska (1999) both give excellent outlines of what constitutes an apology.  In essence, it is an attempt to compensate for or mitigate the perceived negative effects of a prior action for which the speaker takes at least partial responsibility.  When we apologize, we are effectively saying that something for which we are at least partially to blame was wrong, contrary to social etiquette, unintended, or any number of other vaguely negative things.  To reduce it to its extreme, we’re in a very real sense saying “my bad.”

On what planet is affirmation an appropriate response to that!?  In pragmatics research, affirming the transgression would be called an “aggressor,” which is socially a very valuable tool when we disagree with people, but it has zero politeness value.  Now, you could say, “but that’s not what I mean when I say uh-huh,” and you might very well be right, but that doesn’t shield the phrase from criticism.

The way we communicate reflects many things: the way we think about ourselves and our relationships with other people, the way we think about society, and the way society itself is organized.  If I use the word “bro” (and it is a word, not an abbreviation, when used in that context, don’t even get me started) to refer to a friend, that implies a great deal about me, my friend, and it also situates me in a time and place in history where that would occur.  There might come a time when everyone in a particular society starts using the word “bro,” even in formal contexts, and if that time were to come to pass, my use of it would indicate something different about me than it would now, and it would indicate something different about society than it would now.

Another example.  The Japanese word “omae” (from 御前, roughly: “honoring that which is before me”) used to be a formal second-person address term (i.e. “you” when speaking to someone socially above you, and usually way above you) up until the Meiji era.  The word was used almost exclusively by the upper class, so the simple use of that word really did imply quite a lot about both the speaker and addressee.  However, it implied something very different from what it does now, because in modern Japanese, the word is extremely widespread, casual, and even antagonistic when used with people who are not friends (almost like using the word “bitch” to refer to a female friend in modern English, except it’s genderless and used by and for all age groups).  The dramatic changes that the word “omae” has gone through reflect larger social changes in Japan, and it is not controversial to talk about how the word’s journey through Japanese society mirrors changes in that society itself.  It’s no different for English.  “Uh-huh” is an anomaly; very few societies at very few points in time would use such a construction in a similar context.  That fact is emphatically not beyond the scope of analysis, or criticism.

Yes, my punches would be misdirected.  When I think about it, I don’t mean to blame the person who says “uh-huh” in response to the speech acts of thanking or apologizing.  That person, who is likely American, is equally likely to harbour no ill will against me, and likely does not intend to communicate something akin to “you’re right to be sorry.”  I absolutely do mean, however, to criticize the society that tolerates and promotes the use of that construction.  It gives the (in my opinion, largely correct) impression that American society tolerates a passive indifference and even disregard for the person who is thanking or apologizing, or at the very least the act of thanking or apologizing in that context, and I take issue with that.  There’s an excellent discussion by a hip-hop artist on the use of the word “bitch” among other things here that I think is really apropos.  It’s another example, and I encourage you to read it, but that’s a different rant.

So.  I’ll have to amend my earlier position to one that’s way more extreme than the previous iteration.  Right now, when I say that I think it’s rude to say “uh-huh” to acts of thanking or apologizing, I am in fact saying that American society is rude, and I’d go further to say that I don’t think criticism of society should be off the table.  I sincerely hope that, over future discussions, I’m forced to retreat from this position, but for now, I’ll just offer an apology to any Americans who might be offended by this post, to which I imagine you might respond:

Uh-huh.

on expressions of gratitude and politeness (or: an Americanism that bothers me)

As a proud Canadian, I’m happy to boast about my home country, and when travelling, it’s always nice to know that saying I’m from Canada usually elicits a positive reaction.  People seem to have, on the whole, a positive image of the Great White North, at least when they have any image at all, but not all of what people perceive is based in reality. 

One of the positive stereotypes of Canadians that I have trouble backing up is the notion that Canadians are “polite.”  If it were a simple dichotomous choice (“Canadians: polite, yes or no?”), then I’d have an easier time choosing, but what people most often mean by that statement is not that Canadians are or are not polite in absolute terms, but rather that Canadians are allegedly more polite than others, and in particular more polite than their much-maligned neighbours.  Now, I’ve lived in the United States for more than a third of my life, and despite its panoply of flaws, I truly have a deep respect for this country, its history, and its people.  That respect does not preclude me from thinking critically about the country, and even from sharing the gut feeling that Americans are, in fact, less polite on average than the common Canuck.  But what evidence could one even construct to show that?

A linguist would have difficulty arguing that X or Y society is any more “polite” than any other.  I’m not saying that it’s impossible or even untrue, but it would require meeting a very complex set of criteria.  From a language perspective, for a statement to be polite, at least as it’s often conceived in the literature on pragmatics, it would either maximize the positive social value of an utterance or maximally mitigate its negative effects through various linguistic devices.  What language choices are optimal for a given situation depends on the status of the speaker, the interlocutor (or addressee), and the social and linguistic norms of that culture regarding the situation.  Let’s take a look at my personal pet peeve: responses to “Thank you.”

Saying “Thank you” is a rather complex act.  Eisenstein and Bodman (1993) break down the many different ways a person elaborates an expression of gratitude for social purposes.  For example, if someone buys you dinner, you could say, “Thank you.  You’re too kind.”  In linguistics terms, we’d code that as [expression of gratitude] + [complimenting the giver].  We could alternatively express our affection for the giver, or express our own pleasure resulting from the giver’s actions, or any number of different things, each of which would be more or less appropriate in a given situation.  One common theme, though, is that expressions of gratitude often include some level of “deference.”  Deference as a technical term means the appreciation expressed by one person to another.  It is, in one sense, analogous to admitting debt.  Of course, as a speaker I want to maintain dignity and respect, but in expressing gratitude, I have to show deference to another person.  I therefore need to find words that are mutually satisfying–that is, not overly self-effacing, but properly acknowledging the person to whom I am showing gratitude.  That’s a fairly difficult negotiation, given that the means of doing so differ widely across cultures.
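
To make that kind of coding concrete, here is a small illustrative sketch in Python.  The label names and the hand-coded examples are my own assumptions for illustration; they are not Eisenstein and Bodman’s actual coding manual.

    # Toy speech-act coding in the spirit of Eisenstein and Bodman (1993).
    # The label set and the example codings below are illustrative
    # assumptions, not the authors' published scheme.
    coded_thanks = [
        ("Thank you. You're too kind.",
         ["EXPRESSION_OF_GRATITUDE", "COMPLIMENTING_THE_GIVER"]),
        ("Thanks! I had a wonderful time.",
         ["EXPRESSION_OF_GRATITUDE", "EXPRESSING_PLEASURE"]),
        ("Oh, you shouldn't have. Thank you so much.",
         ["EXPRESSING_INDEBTEDNESS", "EXPRESSION_OF_GRATITUDE"]),
    ]

    for utterance, codes in coded_thanks:
        print(f"{utterance!r}  ->  {' + '.join(codes)}")

The point of coding utterances this way is that it lets researchers compare, across situations and cultures, which combinations of moves count as appropriately deferential.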

In Japanese, for example, arigatou, which is the standard, dictionary “Thank you,” literally means “there has been (ari) difficulty (gatou).”  In other words, arigatou acknowledges that the addressee was inconvenienced or otherwise pained by doing something for the speaker instead of for him/herself.  It is also common to say sumimasen (“excuse me” or more literally, “this doesn’t settle”) or even gomennasai (“please forgive me” or more colloquially, “sorry”) as an expression of thank you.  Even in very casual speech, a very widespread expression for gratitude is warui/warukatta, which means “that is/was bad (of me).”

It’s pretty easy to see that the threshold for deference in Japanese is probably a lot higher than it is in English.  It’s not that Japanese people are “more polite” per se, but that the social norm for expressing gratitude in that context usually involves the mandatory expression of a higher degree of deference than would be needed in North American English.  Conversely, in English, saying “I’m sorry” or “That was bad, sorry” to someone who passes you the salt shaker wouldn’t be more “polite” by any stretch.  It’d just be weird.  That’s not the optimal match in our context, and so, if we can conceive of politeness as the optimal language form for creating positive feelings between giver and recipient of the act, then if I begged forgiveness when you gave me the salt, I would fail the politeness test.

The key, though, is the response to this.  Imagine a simple scenario with two Japanese guys at a bar.  Hypothetical Speaker A asks for the salt.  Interlocutor B gives A the salt, and A says “warui” (my bad) to express thanks.  What does B say next?  There are a few choices.  Most often, he would defuse the situation, either by saying “ya ya” (meaning “nah” or “no big”) or maybe by a simple hand wave that dismissed the statement (body language is language, after all).  In a more formal setting, the person responding to thank-you might say iie (“No”), nan/ton-demo nai desu (“It’s nothing”), or a similar negator.  Notice there that B wouldn’t be dismissing the expression of gratitude, but rather the deference in it, reaffirming that both speakers are on equal terms, which helps to reaffirm the positive social relationship between the two bar mates.

Imagine instead what would happen if B had said ee or hai (meaning “yeah” or “yes”).  It doesn’t take a genius to see that that would go over pretty poorly.  He would be acknowledging and even affirming the fact that his friend is in debt to him, implying that he is somehow above his friend, and that would be socially inappropriate.  Perhaps because of that, some of the most common responses to “Thank you,” not just in Japanese but in most of the world’s languages, often use negators like “No problem,” or “Think nothing of it,” and so on.

Now, for some reason, American English speakers seem to think that “Yeah,” “Yep,” and “Uh-huh” are appropriate responses to thank-you.  I have, on very rare occasions, heard this from Canadians, usually in Toronto (in all seriousness, actually, haha!), but it’s far more common to hear “No worries” or “No problem” or some other negator.  When I express my annoyance with this to Americans, I most commonly get denial.  “We don’t say that!  You must have heard wrong.”  Frankly, yes, they do say that.  Luckily, they do this so often that I have innumerable opportunities to catch them in the act, so to speak.  If I point out to American friends when someone says “Yeah” to me in response to thank-you, I usually get one or more of the following:

“It’s no big deal.” / “You’re too picky.”  (I’m not saying it’s the end of the world; I’m saying it’s impolite)

“That’s totally fine/cool/normal/acceptable in America.”  (Well, yeah!  That’s the problem.  That’s exactly the problem, in fact.)

“She said it with a friendly face.”  “It’s all about how she said it, and she was friendly enough.” (Uh…. how exactly?  If I smile while saying something rude, it changes what I said very little.  In terms of what speech act she performed, her facial expression doesn’t sway that one way or the other.)

So, is this definitive cross-linguistic proof that Americans are in fact ruder than Canadians?  As much as I do hate that habit, probably not.  The American custom of responding in the affirmative instead of negating the deference might have another explanation.  In a fairly old but oft-cited study of expressions of thanks, Rubin (1983) revealed that, when we really look at spoken data, American English speakers very rarely just say “Thank you” alone except when the act is very small, and then only just as a “social amenity,” which he called “bald thank-you.”  This “bald thank-you” doesn’t seem to carry any meaning of deference.  In fact, it seems to be almost entirely semantically void.  This special type of thank-you might have lost its connotation of appreciation in America while still retaining it in most of the rest of the English-speaking world.  If that’s the case, then responding to a bald thank-you with “Yeah” wouldn’t be acknowledging the deference so much as completing the social expectation of responding, almost as if the original thanker had said “Hello.”  That’s certainly one possible explanation.  Although, given my own cultural expectations, it’s hard for me to imagine what semantic role “yeah” or “uh-huh” could be fulfilling, if I try to look at the habit on its own terms, it could certainly be less rude than it appears on the surface.

To answer the question of which nation is more polite, it would hypothetically be possible to assemble a list of instances where certain speech acts are realised by Americans in a way that breaks some pragmatic principle while the Canadian version follows the same politeness principles.  If that’s borne out, it would indeed constitute evidence that Canadians are genuinely more polite than their southern neighbours.  As you can see with the above example, though, that’s tough to do.  There may be reasons to suspect that the stereotype has some minor element of truth to it, but there are always complications that make it hard to pin down.  I would, however, strongly recommend to Americans who interact with anyone other than their compatriots that they abandon their uniquely annoying habit of responding to thank-you in the affirmative.  It genuinely, linguistically speaking, makes you sound rude.