how linguists can deal with grammatical “mistakes”

Thank you for the feedback on my previous post about the difference between linguists and grammar snobs!  This time, all of the feedback, positive and negative, was through personal correspondence, and I don’t have permission to make that public, so you’ll have to take my word on this next bit.  In talking about grammar snobbery, one question that came up more than once was (and I’m summarizing crudely) what I’d do if I came across a mistake, and by extension what I think other linguists would do and other people should do.  In other words, if grammar snobbery is wrong, then what’s right?  That’s a fair question.

While the answers (yes, plural) to that question might differ depending on the context and the mistake, I think the means of answering that question can be summarized with a single, overarching statement.  As a good friend of mine often says, “it all comes down to consequences.”

Everything we do has consequences, and our use of language is no different.  What we say, write, or communicate non-verbally can have positive or negative effects on other people, on our relationships with those people, and in some cases, on our relationships with other people we don’t even know.  A simple sentence can rally a city or a whole nation, as in “Tear down this wall,” or “I’m taking my talents to South Beach.”  On a personal level, if I’m consistently able to come up with perfectly worded witty statements on the spot, I might win more arguments, or win over a girl, or win respect from my peers.  On the other hand, if I put my foot in my mouth all the time and say asinine things as a matter of course, then I might become the Governor of Texas.  If I tack on a “just kidding, lol” to the previous sentence’s little paraprosdokian, then it’s likely that fewer people will take issue with my slighting Rick Perry, but other people will take issue with my use of “lol,” and my friends will think I’m writing this while vaguely drunk.

In sum, any language use has consequences, and there are consequences to using forms that snobs deem to be “bad grammar” as well.  Linguists wouldn’t use the term “bad grammar,” but we should be able to understand what those consequences likely will be and respond appropriately.

Before I try to explain what “responding appropriately” means to me, I should first explain why we wouldn’t say “bad grammar.”  In my last post, I tried to clarify why the very phrase “bad grammar” is confusing and almost comical to linguists; it would be like a chemist saying that there are “good” molecules and “bad” molecules. If some chemistry textbook called strychnine a “bad molecule,” you would have to assume that either the author was being facetious or that “bad” was being used figuratively to mean something more like “potentially fatal if ingested by humans.” In other words, the molecule itself isn’t “bad” per se, but its presence might have consequences that we consider negative.  And that, in essence, is how linguists respond to non-standard language use: not by labeling the form itself “bad,” but by weighing the consequences of using it.

To reiterate, grammar is the system of context-dependent form-meaning associations in a speech community. In X circumstances, speakers of language Y say A (or B, C, etc.) in order to communicate Z idea. If something is “ungrammatical,” that means either that there is an important mismatch between the intended meaning and the meaning that is understood, or that the phrase wouldn’t be understood consistently or at all.  The key here, though, is that we have to conceive of the phrase “intended meaning” very broadly so that it includes not just the literal meaning of the words, but also the intended social effect.  That’s a bit tricky, so here’s an example.

I work with many international students who have trouble with English noun phrases, especially when talking about abstract nouns.  Sometimes, the presence of a plural-looking marker doesn’t make that much of a difference, as in the following:

  • I like strawberry.
  • I like strawberries.

On some level, there is a difference between liking the flavor of the fruit and liking the fruit itself.  One could imagine some person who likes strawberry-flavored candy but who doesn’t like eating the berries or vice versa.  On the whole, though, if you were talking with a person who said either of those sentences, you probably wouldn’t be confused.  Sometimes, though, there can be a very large difference, as in these two sentences:

  • I like dog.
  • I like dogs.

Suddenly, the difference between liking the thing itself and liking the flavor of it is more consequential.  If you meant to say one and said the other, then there could be a fairly large and important mismatch between what I would understand and what you intended for me to understand.  Now, if you were one of my international students, and you saw me walking my dog on the street and said “I like dog,” I would assume that you were going to pet her and not that you were imagining how my dog would taste, but if we met each other at a potluck and you said “I like dog” while chowing down on some unlabeled casserole, I might think twice before putting some on my own plate.

"I like dog. By the way, this is good.  Want some?"

“You have a dog?  That’s great.  I like dog. By the way, this is good. Want some?”

Typical grammar snobs would probably respond to someone’s uttering “I like dog” by calling it a “mistake.”  Next, they might add “It should be ‘I like dogs’ because…” and then that’s where it gets really tricky.  What could they say after that?

Some of them might say, “In English, we need the plural in that position.”  Well, no we don’t.  No one says “I like ice creams,” or to take John McWhorter’s example, “I like corns.”

So maybe instead they say, “In English, count nouns require the plural in that position.”  You can count dogs: one dog, two dogs.  You can’t count corns.  You can count kernels of corn, or rows of corn, or something else of corn, but we don’t say one corn, two corns.  That would be closer, although there would still be problems.  Certain nouns can be either count or non-count, like strawberries.  “I like democracy,” and “I like democracies” are both possible sentences; it’s just that they mean slightly different things.  And that’s when they’d finally get at what I think is the right answer:

The sentences “I like dog” and “I like dogs” mean different things to native speakers.  Saying “I like dog” is a “mistake” if you intend to say that you like canines as pets, because the way that is typically expressed in English is “I like dogs.”  At this point, we’ve largely abandoned any “prescriptivist” argumentation (i.e. “thou shalt do X and thou shalt not do Y”).  We’ve simply described the observable reality of the world and laid out the consequences of each form so that the language user can make a choice.  If one option is more desirable, then the language user should choose the form that corresponds to that option.  We can only say “you should say ‘I like dogs’” by understanding that behind that “should” is a much more important “if what you mean to communicate is ___.”

At this point, you might say, “But surely you can figure it out from context.  Wouldn’t saying ‘I like dog’ to mean that labradors are cute and lovable be just fine by linguists as long as it’s said in a context where the intended meaning is clear?”  Well… in fact, there are some scholars who make a very similar argument, but I personally wouldn’t go that far, no, for two reasons.

A)  The context often doesn’t disambiguate, or at least doesn’t do so in an objectively clear way.  I work with a law student from China.  I help copy-edit his papers, and I remember once running into trouble with one of the subheadings he’d used.  He knew that he had problems with this very same form, so he had even suggested several options, as if to give me a multiple-choice question!  I was immensely entertained, but as it turns out, it wasn’t an easy choice:

  1. A Democracy in China
  2. Democracy in China
  3. The Democracy in China
  4. The Democracy of China

Most people, I think, would agree that (3) doesn’t sound like a plausible English phrase.  It’s unclear what that one would mean, but the other three options are all possible section titles.  Which one he should use really depends on very subtle distinctions, like whether or not he is claiming democracy already exists, whether he is referring to democratic principles and structures in society or to a larger government body, whether or not he is being ironic or cute, and so on.  He may or may not be aware that these forms carry those functions, and he may or may not intend any one of them, or some combination.  I might be able to disentangle those things by reading the rest of his paper, but I might get it wrong.  I might accidentally attach a meaning to his subheading that he never intended to be there.  He might even have meant to say something completely different, like “Democracies in China,” and as it turns out, after conferencing with him, we decided on “Chinese-style Democracy,” which sounds far removed from any of those previous options unless you add quotation marks, as in “The ‘Democracy’ of China.”

The point is that, in many contexts, we can’t simply rely on the audience to figure it out for us.  If we use a form whose meaning is unclear, then our chances of successfully communicating our intent decrease.  Using the clearest, most unambiguous language possible might still not result in a perfect match; indeed, some theorists argue that a perfect match is an impossibility and we can only hope to approximate understanding.  Even if we assume it is possible in principle, listeners might just not get it, some might not be fully paying attention in the first place, or they might not have native-like listening comprehension abilities.  In other words, we almost never have ideal circumstances for communication, but even under ideal circumstances, communication is a probabilistic, messy process, so we’d do well to maximize our chances.

B) “Intended meaning” doesn’t just concern the literal denotation of any one particular set of words.  Let me tell a story to illustrate.  During my first few days living in Japan, I was happy if the food a waitress gave me vaguely corresponded to what I wanted to eat, never mind whether it matched what I thought I’d ordered.  I had coughed up some garbled string of Japanese-sounding syllables, and victuals were brought to my seat, very likely in part because of my linguistic efforts or at least because they took pity on me and knew I had money.  Still, success!  I was happy and enjoyed my meal until the time came to think about what I had to say to pay the bill.

That sense of satisfaction at completing simple tasks didn’t last long, though.  After all, I didn’t just want physical sustenance.  I also desired to be thought of as a capable, functioning adult.  Often when people start learning a second language, they get frustrated or embarrassed because they feel like they sound like a child (and, in fact, we all do).  We don’t start out in Japanese 101 expounding on the perils of complementary schismogenesis.  We start by learning how to say simple phrases, but the eventual goal for most people is to be able to say what they want to say, how they want to say it.  For me personally, I wanted to be seen as capable of saying what I wanted to say, how I wanted to say it in Japanese, and if that is my intent, then only a very specific set of linguistic forms will do.  (It’s 相補分裂生成の危難, in case you were wondering… although I imagine you weren’t.)

So, going back to the example of “I like dog,” a linguist would have a couple different answers depending on the situation.  If one of my students is writing to a potential host family abroad and wants to make a good first impression when introducing his likes and dislikes, I could tell him that “I like dogs” is better.  If he asks or cares to know why, I could even add that saying “I like dog” means that he likes the flavor of dog meat, and that while his future host family could probably figure out that he doesn’t mean to say that, he will sound less competent in English than if he had used the other form.  If he doesn’t care or isn’t present for me to terrorize, I’d probably just “correct” it without making a big deal of the situation.  After all, it looks like a simple sentence, but it’s actually a tough distinction to learn, with lots of exceptions and subtleties, and so I can’t reasonably get angry that he hasn’t mastered it yet, no matter how long he’s been trying.

If another student says “I like dog” to me as I’m walking my dog on the sidewalk, I’ll probably just smile, ignore it, and move on in the conversation.  It’s not really an appropriate time to launch into an exposition on English noun phrases, and the student’s desire to communicate is probably more central than the desire to achieve native-like accuracy in that specific moment.  If I chose to comment on the statement, that might not help the student remember the form anyway.  More likely, switching the topic of conversation from my dog to English grammar would just cause embarrassment and frustration, and maybe even engender some small amount of resentment.  I could always bring it up at some later time when I thought it would be more helpful, but nitpicking then and there might give the false impression that I care only about formal clarity over meaningful exchange of ideas.  In short, it would make me look like a grammar snob, and that is a consequence I definitely want to avoid.

on the difference between linguists and grammar snobs

People often believe that I am the type of person to whom it would be unsafe to write anything containing a grammatical mistake, and while that pains me, I get why. I study Applied Linguistics, and as such I am passionate about language. I think about it often, and I talk about it in casual conversation as if that were a normal thing to do. Moreover–besides being the type of person who would try to get away with using a word like “moreover”–like many in my field, I teach English, which lends even more credence to the notion that I am a linguistic control freak. However, I and, more importantly, most applied linguists would be deeply offended to be grouped together with people who cry in horror at split infinitives, missing apostrophes, and dangling participles (or, for that matter, the “serial comma” I just used), and I think the distinction between what I do and what they do is important to understand.  (For an introduction to this topic in general and to the serial comma specifically, this NPR editorial is a good read.)

There are many different names for people who do that sort of thing, such as “Grammar Nazis” or “the grammar police.” As you can see in the picture below, they even have their own merchandise, and some attempt to put a positive spin on things by calling themselves “Grammar nerds” or “Grammar geeks.” In her hilarious and wonderfully written book, June Casagrande calls them “Grammar snobs,” and for the sake of consistency, I’ll use the term “snobs” to refer to them here, but really, we all know who (excuse me: “whom”) I’m talking about (excuse me: “about whom I’m talking”). We’ve all at some point been witness to, victim of, or perhaps even complicit in their tirades against “improper usage” or simply “bad grammar,” and on the surface their passion for correct form seems to resemble the work of linguists, but that is far from the case. Linguists and grammar snobs are in many ways diametrically opposed.  I’d like to try to show why.

Grammar Police mug: mantra of the snob, not the linguist

First, let me say that correction itself doesn’t bother me. I edit my own writing (and self-correct in speech) quite often. Even in informal written contexts, I sometimes delete and rewrite my Facebook comments as fast as I can in a futile attempt to make it look like I had originally written what I eventually decided was better. I believe there is such a thing as a better, clearer, more powerful means of expression, and that, all else being equal, we should pursue it. After all, language is a powerful tool, and Spider-Man has taught us that “with great power comes great responsibility.” I do strongly value comprehensibility, force, deftness, and even beauty in language, but that’s not the same thing as conformity to the arbitrary, self-contradictory stylistic edicts of the self-proclaimed elite. In short, I’m not against all of the snobs’ “corrections,” but I end up disagreeing with most of them because I strongly disagree with their means of judging language as grammatical or ungrammatical, and with what they mean by both of those terms. So call those grammar snobs what you may–nerds, nazis, or nitpickers–just don’t call them linguists. They aren’t.

Grammar Snob Cat

I’m being rather nonchalant in using the word “they,” as if grammar snobs were some unified, homogeneous cult, but I’m comfortable doing so here because, no matter how diverse the individual snobs may be, “their” handiwork tends to follow a very distinct, uniform pattern. The following is what I see as the typical modus operandi of a grammar snob:

  • Decide before reading or listening to something that formal accuracy is more important than successful communication
  • Read or listen to a given language sample, paying special attention to particular forms, often (though not always) at the expense of the message itself
  • Ignoring the content, label any form that differs from their conception of the norm as wrong, when possible using linguistic-looking jargon
  • (Optional) Add some haughty-sounding phrase and assert that it constitutes what the original speaker or writer “should have said”
  • (Optional, and less frequent) Insult the original speaker or writer

I think it’s clear from that overview why I don’t much respect the whole process. The first two items on that list are objectionable enough that whatever happens afterward is moot, but that’s not the only reason linguists and grammar snobs differ in their judgments.  My biggest pet peeve is that, in a surprisingly large number of cases, in the act of trying to “correct” someone else’s “grammar,” snobs commit three separate but related offenses.

  1. They invoke an argument that has nothing to do with linguistics, grammar, or sometimes even language
  2. The argument itself often isn’t true or even internally consistent
  3. The exchange that results distracts from real, underlying issues in language use and deflects people away from otherwise readily available information on language that is genuinely interesting, empowering, and meaningful

That was long and complicated, so let me break it down with a simple example. In a previous post I ranted about the word “funnest.” Use of that word can push grammar snobs into a long diatribe about how “funnest” isn’t a word because it’s not in a dictionary. Well, all three problems I just mentioned show up in this example:

  1. Dictionaries (especially paper ones) aren’t an appropriate means of determining word/non-word status. That’s just not the kind of argumentation you’d use in linguistics at all.
  2. “Funnest” is in fact listed in several prestigious dictionaries. The snobs assume it isn’t there because they think it shouldn’t be there, but the opposite is true.
  3. The issue of what constitutes a “real word” is a fascinatingly icky problem, but if we look at real usage and gather data from (often free, often online) sources like corpora, the data show that “funnest” is as real a word as any other (see the sketch just after this list). If people knew about these relatively straightforward tools, they could find out all sorts of things about their own language and how it works.

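To make that last point concrete, here is a minimal sketch of the kind of corpus lookup I have in mind, assuming Python and the freely available NLTK toolkit with its bundled Brown corpus (that toolkit and corpus are simply my choice for illustration, not anything prescribed). The Brown corpus is small and dates from the 1960s, so a colloquial form like “funnest” may well come up empty there; web-scale resources such as COCA or the Google Books Ngram Viewer are where it actually turns up. The point is only that checking real usage takes a few lines, not an appeal to authority.

```python
# A quick frequency check against a real corpus using NLTK (assumed setup:
# pip install nltk; the Brown corpus is downloaded on first use).
import nltk
from nltk.corpus import brown

nltk.download("brown", quiet=True)  # fetch the corpus data if missing

# Build a frequency table of lower-cased word forms in the corpus.
freq = nltk.FreqDist(w.lower() for w in brown.words())

for form in ["fun", "funner", "funnest"]:
    print(f"{form}: {freq[form]} occurrence(s) in the Brown corpus")
```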
Now, “funnest” is one example of this 1-2-3 pattern (bad argument; false anyway; a better argument says the opposite and is insightful), but there are countless others. I’ll describe just one of those “countless others” below, but before I do, you might be wondering why I’m so riled up about this. After all, maybe grammar snobs have nothing better to do with their time. Maybe–indeed, quite probably–they derive a sadistic pleasure from making snide remarks about other people’s language, and who am I to deny other people pleasure? It’s a free internet, and all that.

The problem is, though, language teachers have to deal with the aftermath of their handiwork. The legacy of this conflation of (often misbegotten) “style guidelines” with real “English grammar” (which, properly understood, are two very different things) is such that our students believe not only that they have to learn these “rules,” but that those rules have some intrinsic value. Some of my own students believe that teachers can (and even should) be evaluated based on their mastery of those rules and their ability to foster mastery thereof in their students, and that’s frankly appalling. While belittling someone for a missing apostrophe is trite and objectionable on its own grounds, for me there is the added grievance that these snobs are interfering with good teaching practice. Grammar snobs make my job, my colleagues’ jobs, and the work of my students harder and more complicated, and for that, they have become the subject of my rant.

There are so many examples of fundamentally flawed grammar snob arguments that it’s tough to choose just one. Who/whom, lie/lay, there/they’re/their, sentence-final prepositions, and effect/affect are each worthy of separate rants, but here, in my thinly veiled attempt to reach out to people who might think grammar snobbery is a good thing, I’ll talk about one that’s less close to the typical grammar snob’s heart. It’s called “impersonal they” (you may also have seen it called “singular they”).

Grammar snobs will tell you that the following sentence is malformed:

  • If someone comes looking for me, tell them I’ll be back soon.

Instead, some of them will insist, straight-faced, that the sentence should be:

  • If someone comes looking for me, tell him or her (that) I’ll be back soon.
  • (Alternatively) If people come looking for me, tell them I’ll be back soon.

Their argument is typically that “they” refers to plural subjects and “someone” refers to singular subjects, so the pronoun doesn’t agree with its antecedent.  They typically add that “young people” or “people nowadays” say “they,” but that traditionally, English strictly maintained the distinction.  Well, we can go down the checklist I proposed earlier, plus the teaching complaint from a few paragraphs back: 1) Non-linguistic? Yes. 2) Untrue anyway? Yes. 3) Obfuscates interesting language-related issues? Yes. 4) Makes my job harder? Yes. Here’s the breakdown:

1) The argument sounds linguistically based, but it really isn’t. The snobs simply assert that “they” refers to plural subjects only–because snobs said so–and not because that’s how the pronoun actually behaves in the language. Linguists don’t just get to call the shots and say how a language should operate. Our job is to figure out how it actually operates and pass on the relevant parts of that knowledge to our students. If the word “they” is used frequently in the context of an unspecified singular third person–as in fact it is–then that’s part of the grammar of English. Grammar is, very simply, the system of form-meaning associations in a speech community. In certain contexts, certain forms mean certain things and have certain communicative value, and others do not. A linguistic argument against impersonal “they” would have to be phrased like “In formal written contexts, use of ‘they’ or its other case forms to refer to a singular referent is stigmatized and may result in unfavorable reception.”  Even that argument, in my opinion, is flawed (there are quite a few examples of its use in articles published in academic journals like TESOL Quarterly and national newspapers, but they’re usually not salient enough to attract ire). Really, though, that’s not their argument anyway. Their argument can be called any number of things–pretentious, pedantic, petty–but it cannot be called “linguistic.”

2) Their argument is just not true. The distinction between singular and plural pronouns in English has never been that strictly delimited. Many people learn the concept of the “royal we” through Shakespeare, and that’s one example of where a plural form is used in place of the singular. In Shakespearean times, people commonly used the pronoun “thou” to refer to singular persons and “you” for more than one person, but “you” was also used to refer to individual persons formally, and it eventually became the standard for all second-person addresses.

More to the point, though, people in the 21st century use “we” in impersonal contexts when saying “I” simply sounds too committal. The sentence “We’re experiencing some cold weather up here” could refer to the people of that area, but really, it doesn’t refer to anyone specifically. It’s just impersonal, like when I said “our students” in a paragraph above. One could just as easily use “I” (or “my students”), but saying “we” depersonalizes the statement. All three pronoun distinctions, then–I/we, thou/you, he/she/they–are (or were) not quite so black-and-white as the grammar snobs would have us believe anyway.  Polysemy (having more than one possible meaning) and situational exceptions in the case of singular and plural forms are not even unusual across languages; English’s fuzziness in that respect is very similar to French, German, and many others.

Separately, the snobs’ assumption that they are “preserving” English is also just false. Impersonal “they” was used more than 600 years ago in The Canterbury Tales by Chaucer, and has enjoyed widespread use consistently since then. It is English, and it has been English for quite some time, but even if it were some “new” development, John McWhorter likens the snobs’ practice of trying to preserve the language to trying to stop the tide from coming in by drying the beach with a towel. I find that a powerful image of how absurd what they’re doing really is.

3) If people weren’t scared off by grammar snobs, engaging them in a conversation about pronoun shifts in English might not sound like what it does today: something that should be prohibited in the Geneva Convention. Pronouns, which you’d think would be rock-solid, are in fact quite fluid and chaotic across languages. Japanese has or had literally dozens of personal pronouns, many of which have shifted meaning drastically over time, and all that despite the fact that pronouns are commonly dropped from speech and writing when not absolutely necessary for comprehension. Previously I discussed how the Japanese second-person omae, or other words like kisama, used to be formal and honorific, but nowadays, generations later, they can be insulting and even vulgar. German, in what I can only imagine is the result of years of its speakers consuming more beer than water, has come to a point where “ihr” serves both as the second-person plural nominative (“you all”) and as the third-person singular feminine dative (“to her”), while “sie” does triple duty as the third-person singular feminine, the third-person plural, and (capitalized as “Sie”) the polite second person.  If that whole sentence sounded like confusing nonsense, then you’ve accurately understood it.

Even in English, “you” started out as an object form only, covering the accusative and dative cases. Centuries ago, Britons would say “I see you,” but crucially not “*You see me.”  That would sound weird to them, just like saying “Me see you” would sound weird to us. Instead, the form in that position was “ye.”  “Ye see me,” which sounds vaguely pirate-like now, was at one point in history the way people talked in English.

Why would that be?  Shouldn’t pronouns be relatively stable?  We use personal pronouns hundreds of times in daily conversation; they’re some of the most frequent words in the English language. Those are really interesting questions. Indeed, part of the work of linguists is finding answers to those questions.  Unfortunately, though, people don’t tend to think about those questions, partly because they think discussions of grammar are exclusively for people who want to feel superior to others.

4) Lastly, and most importantly, this has an impact on English teaching, English teachers, and students learning English. Presumably due to the influence of grammar snobs in the language testing community, I have, to my dismay, seen questions on standardized tests of English that specifically target impersonal “they,” who/whom, lie/lay, and other grammar snob problems. What happens when a high-stakes standardized test like the TOEFL uses items that test mastery of these nonsense maxims?  Am I obligated to teach something I not only don’t believe, but in fact strongly believe against, all the while sacrificing classroom time that I could have otherwise dedicated to activities I feel would be truly beneficial?

Unfortunately, the answer to that last question is “yes.”  Psychometricians call this effect “washback,” and as much as ETS tries to use its power for good, the TOEFL has a long and storied history of negative washback in the ESL classroom. High-stakes exams often have dramatic, real-world consequences, and failure to pass them can cost students hundreds or even thousands of dollars and months of their time, so if I can get students to pass by having them memorize a few nonsense arguments to spew out for the exam and promptly forget thereafter, I will and probably even should. Now, I’m not helpless as a teacher. There are things conscientious teachers can do to deal with it, but that’s another post, and the point here is that we shouldn’t have to “deal with it” in the first place. Neither should my students, and neither should anyone.

So please, when someone tells you they’re studying linguistics or applied linguistics, understand that grammar snobbery is not part of their required coursework. (Note: I just used impersonal “they” twice.) In fact, linguists often work against grammar snobs, advocating for our students, or simply advocating for logic, and I’d like to think we’re winning the war. Slowly but surely, awareness of the hypocrisy of the grammar police is spreading, as evidenced by educational websites, classes, and even comic skits (though I warn you, that last one isn’t family-friendly). On the other hand, one could just as easily cite examples of other self-described grammar experts who continue to misinform and miseducate, but even they have to find ways to explain away their many fallacies instead of simply going unquestioned, and that’s good. The end result of those questions is, I believe, a fuller understanding of language, and that is what the job of linguists is all about.  (Excuse me: “That is all about which the job of linguists is.”)