Chapter 7: Down Boy? No — Give the People a Break
By Alan F. Kay, PhD
© 2004, (fair use with attribution and copy to author)
July 22, 2004
The poll results that the media show us in newspapers, magazines, TV and radio are supplied by big-time commercial pollsters. With their six-figure incomes, college-plus education, and frequent association with high-level officials, pundits and moguls, they tend to be elitists. They avoid the public-interest polling techniques we examined in earlier chapters. It seems reasonable to them to confirm their predilections and to use question types that put people in their place, unmasking those who may be spouting off without any analysis that backs up their response choices.
Spin-spotters will enjoy seeing in this chapter the many ways this is done.
On-the-air pundits know better than to put down “the people.” Disparaging the public publicly is the kiss of death for a public personality. So, like other elitists frequently in the media spotlight, pollsters often come across on TV as low-key, honest and pleasant, even homey and modest. But pollsters do tend to believe that many people will give you an opinion on almost any subject you throw out to them; that their answers are knee-jerk responses, not based on thought and knowledge; and that “ordinary” people have attitudes, not reasoned opinions. Therefore, when required to ask questions on issues with any depth and complexity — and especially questions on a highly controversial issue where a hotly contested, key vote is likely to be in the headlines when their latest findings are released — what do commercial pollsters do to protect themselves?
One simple thing is to offer an additional response choice. At the tail-end of questions, they add:
“ — or don’t you know enough to say?”
This is low-key and seems fair, but it can have a big effect on responses. Why and how does this option put spin on polls, undermining the validity of commercial polling?
The name used for this “don’t-you-know-enough-to-say” choice is “Down, boy” — a name that suggests what is going on. With complex issues, particularly those unfamiliar to the public, saying in effect, “Down, boy” can change responses enormously. Frequently, 75% of the public will choose it, as we will see in the next section. “Down, boy” narrows, rather than broadens, the wide range of substantive policy choices with which polling in the public interest is concerned, but it satisfies the elitist mentality.
An alternative to the “Down, boy” response choice, one that produces a similar chilling effect, comes from asking a preliminary question, called, fair enough, the “Down, boy” question. An example: if A is the policy to be tested and X is the deep, complex issue for which A is supposed to be a remedy, the “Down, boy” question is
(1) “How closely do you follow issue X: very closely, somewhat closely, or hardly at all?”
followed by
(2) “Do you favor or oppose policy A?”
Many of those who do not choose “very closely,” typically a majority, will be embarrassed enough to beg off answering the follow-up policy question (2). The DKs (“Don’t Know” responses) to question (2) will be far more numerous than if the “Down, boy” question (1) had been omitted.
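For readers who like to see the mechanics spelled out, here is a minimal sketch, in Python, of a split-sample comparison of the two designs. The response probabilities are illustrative assumptions, not figures from any ATI or commercial survey; the point is only to show how a preliminary “Down, boy” question can inflate the Don’t Know rate on the policy question that follows.

    # Hypothetical split-sample simulation of the "Down, boy" effect.
    # All probabilities are illustrative assumptions, not survey data.
    import random

    random.seed(0)
    N = 1000  # simulated respondents per split

    def ask_policy(dk_probability):
        """Return 'favor', 'oppose', or 'DK' for one simulated respondent."""
        if random.random() < dk_probability:
            return "DK"
        return random.choice(["favor", "oppose"])

    # Split A: the policy question asked directly (assume a modest 10% DK rate).
    split_a = [ask_policy(0.10) for _ in range(N)]

    # Split B: preceded by "How closely do you follow issue X?" Assume only 25%
    # say "very closely"; the rest, having just admitted inattention, beg off
    # the policy question far more often (assumed 60% DK).
    split_b = []
    for _ in range(N):
        follows_closely = random.random() < 0.25
        split_b.append(ask_policy(0.10 if follows_closely else 0.60))

    for label, split in (("Direct question", split_a), ("After 'Down, boy'", split_b)):
        print(f"{label}: {split.count('DK') / N:.0%} Don't Know")

Under these assumed numbers the direct version comes out near 10% Don’t Know and the screener-first version near 50%; the exact figures matter far less than the direction and size of the shift.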
Two prominent pollsters, Norbert Schwarz and Howard Schuman, researched this subject a few years ago and published an article in a respected learned journal of survey research:
“Political Knowledge, Attribution, and Inferred Interest in Politics: The Operation of Buffer Items.”
They showed that when respondents flunk a knowledge question just before being asked whether they know enough about something to have an opinion, the percentage who opt for “OK. So, I’m a dummy” increases significantly.
Still another way for elitist pollsters to say “Down, boy,” one that works for them in certain questions, comes up in the next chapter. These pollsters allow only responses built on the phrase “People like me believe (or some other verb) . . .” The phrase carries an implication, or cue, that affects responses; the idea is simple, but the consequences can be enormous.
Many people are a little embarrassed that they do not follow issues closely, or they are not presumptuous enough to think they have something important to say about them. In that they differ from the pundits on talk shows, who appear to have enormous knowledge of policies and issues, and from those “ordinary” people nervy enough to call into TV and radio shows to question the guests or make a comment. In survey research, respondents are not getting paid. Why should they extend themselves? It is reasonable and modest for them to accept the “Down, boy” cue and opt out or respond “Don’t Know,” and many do.
Much can be learned from asking the questions without possible cues like “Down, boy” that people interpret as meaning the pollsters do not want a substantive answer. We will see how policy questions with and without “Down, boy” work out in Chapter 8.
Respondents Are Amazing Real People
One of the favorite ways elites have of putting down the public is to present knowledge questions, like “How long is the term of a U.S. senator?” or “What is the capital of Honduras?” Probably not one in 100 knows the capital of Honduras. I had to look it up myself (not so easy if you don’t know how to spell it: Hunduras, Hondurus). Help, spell check, Google! OK, here it is: Tegucigalpa.
Having spent years wandering through the offices of official Washington, I do know the length of a senator’s term of office, but most people could go all their lives, have a successful career, even vote regularly, and never need to know that particular fact. On knowledge questions that are important to politicians and political scientists, or on any professional jargon for that matter, the public does poorly.
One is tempted to say “abysmally.” In fact, that was the very term I used at one presentation, and I lost half my audience, apparently a group of true populists. I don’t use it anymore, because it is not fair. To understand why I say that, let’s look at knowledge questions from the point of view of the typical respondent. But before we do, let’s get to know respondents a little, in a way that sheds light on how real people feel about being interviewed. This insight came from my own frequent monitoring of survey interviews on a listen-only telephone receiver.
On one occasion, a man with a strong, confident voice gave sensible answers to a long survey on policy preferences until the last few questions, when he was asked the year of his birth. He had not noticed, or had perhaps ignored, the interviewer’s opening qualifying spiel asking for “the youngest household member over 18.” It turned out he was only 15. The interview had to be discarded.
In another survey, about global issues, one of the respondents was very negative about the United Nations. The process of the interview itself, rooted in the oral social contract the respondent agrees to at the beginning, paid off: both respondent and interviewer continued slogging away with the Q&A. My silent reaction was, “Uh, oh. We’ve got a real UN hater here.” Had I found myself talking with him in person, I would have expected little but a continuing negative reaction and would already have changed the subject or walked away. Further along in the interview came a few questions on the UN’s role in the protection of women. Surprise. On this one issue, out of the blue, the respondent was remarkably pro-UN. What a story there must have been behind that. But the more important conclusion is that only the interview process and a well-designed survey together could have uncovered that obscure and seemingly odd fact.
Another story serves two purposes. A woman answers the phone, and I hear the interviewer, in her most professional tone, read from the CATI (computer-assisted telephone interviewing) screen:
“Hello, my name is (caller name). I’m calling for ATI, a non-profit foundation. I would like to ask you a few questions concerning the problems facing our nation, state and local communities. We are NOT selling anything, and I will NOT ask you for a donation. Since this is a scientific survey, we need a balance of men and women. May I speak to the youngest man, 18 years or older, who is at home right now?”
Then I hear the voice of the woman at home turn away from the phone, quite loud and dripping with sarcasm: “Honey, your sweetie pie is on the line.”
One more: a woman answers a similar opening by an interviewer with:
“Well, it’s about time. You people have called everybody. Now, it’s my turn. What d’ya want to know?”
The two purposes? Oh, (1) as we design questions meeting a host of criteria, it reminds us that the people being interviewed, the respondents, are real people with enormously diverse personalities, knowledge and interests, and (2) you’ve just read a typical interviewer’s introduction, which is an offer made to get a respondent to commit to being interviewed. It implies that the survey sponsor seriously wants to know what the people think and that there will be no tricks, no embarrassments, and no questions that a typical respondent could not reasonably be expected to answer. When the respondent agrees to the interview, a verbal contract is created to the effect that the respondent will answer to the best of his or her ability and the interviewer will live up to the introductory offer.
It is true that the two parties have little to lose if the contract is broken. Certainly, the respondent can hang up and forget about it. But since the sponsor will then lose a few bucks and get no interview, the sponsor has an incentive to keep its side of the bargain, and the respondent who accepts the offer has reason to be optimistic. So, the questions start. Let’s assume for a while that the questions are easy and reasonable from the respondent’s point of view. Then the first knowledge question breaks the contract. Why? Let the respondent explain.
Real People Reactions to Knowledge Questions
Here is the respondent’s case, bluntly stated.
“Well, sir/ma’am, why are you asking me a question whose answer you could better find out from a research library? I don’t have to take any damn test from you. I could ask you some questions you couldn’t answer, too. This is like the pop quizzes some smart-ass teachers loved to spring on us. Or those job aptitude tests we sometimes got that had little to do with what the job actually required. I bet you didn’t like them any more than anybody else. I only remember what is important to me in my life, and that is plenty. Yeah, I might be able to answer some of these questions, but why should I? And by the way, this is not personal. I know you are just a hardworking guy/gal like the rest of us. But tell this to your sponsor.”
I am not implying that any of those thoughts actually go through the mind of a respondent at that moment, but people have a sixth sense for when someone has pulled a bait and switch on them. When they realize that is just what has happened, the attitudes of many change. Some may start giving garbage answers. The attention of others may slack off. It isn’t fun anymore. The public is diverse, of course, and for others a knowledge question may be welcome.
Some may actually enjoy taking the knowledge test: the ex-teacher’s pets who have not had a chance to show off how good they are at tests since their last school year, which was, for the average respondent, 12 years earlier. But even they are at a disadvantage, something that the elites ignore when poll results uncover the “ignorance of the public.” Interrupted from home activities with their minds on other things, respondents, living as we all do in the “attention-deficit society,” cannot deal well with a pop quiz on political knowledge. Experts, when their minds are on other things, may not do much better.
It is a cheap shot for anyone to use knowledge question findings to put down the public as foolish or dumb. But politicians sometimes find it useful to try to deflect any heat arising from poll results that could put them at odds with the public by implying that poll results are generally misleading (not true) and that polls make people look foolish. The news media can use that same rationale to justify dismissing results the editors and gatekeepers dislike or find dubious.
Reactions to Breaking-News Stories
A closely related cheap shot occurs when reporters and pundits do not know how to evaluate a breaking news story about which few facts are yet available. So, within a few hours, they do a quickie poll of the public with some question like, “Did the Navy do the right thing shooting down the Iranian airliner yesterday?” This is amusing. The media hotshots do not know how to play the story themselves, so they decide to throw the ball to the public. This kind of question, whether treated as a knowledge question or as an evaluation question, should be ruled off-limits in public-interest polling. The public has no more aptitude for evaluating fleeting incidents than the news media.
A hard knowledge question, say on a quiz show, is one that few people know the answer to, like “What is the capital of Honduras?” That produces over 90% “Don’t Knows.” A hard question in public-interest polling is one where the respondent has difficulty choosing a response from those offered — sometimes because the question is confusing or unintelligible, sometimes because it takes a little thought to choose the answer, and sometimes because the choices offered are inadequate. Fortunately, in ATI surveys, all of these are relatively rare. If “What is the capital of Honduras?” were asked in an ATI survey, where there are no right or wrong answers, it would by the ATI definition be a relatively easy question, because for most people the right answer is clearly “Don’t Know.” Knowledge questions should be asked rarely in surveys.
If a survey asks a lot of knowledge questions, it is a sure sign that the results will be made public in order to put down the public and to minimize the impact of public opinion on what the leaders are planning to do with or without public support. Does this really happen? A prominent example arose in January 1996, when Washington Post survey results appeared for five consecutive days in long articles, led by front-page headlines, on the issue of public trust in the government. Earlier surveys, including several by ATI, had shown that mistrust of official Washington was at an all-time high for a simple reason: the government was not doing what most people, with good reason, wanted. Clearly, the Post editors believed that the public’s mistrust was unfair to all the hardworking bureaucrats, officials, journalists, lobbyists, lawyers and politicians who depended on the Post as their paper.
The Post articles presented the results to make the case that the mistrust was really no worse than it had always been, that the public had grown more mistrustful of all institutions, that people mistrusted each other more and, of course, that the public did not know enough to have valid opinions on the difficult issues being dealt with by the government. Old knowledge questions that had been asked over the years were asked again, and although the answers did not show that the public’s knowledge was any less than it used to be (it wasn’t), the Washington Post wrapped it all up with data that implied, “You (the public) don’t trust us. Well, we don’t trust you much either.” Of course, such words, putting down the public explicitly, never appeared in print.
The Washington Post effort to defuse the antagonism between the inner circle of the Beltway and the rest of the country collapsed before the end of the year, when the Clinton-era scandals broke over the sale to big-money contributors of overnights in the Lincoln bedroom, breakfasts and coffees in the White House, and equivalent shenanigans in the House and Senate. The justified perception of corruption is rampant everywhere now, but it was not then. The Republicans blasted what President Bill Clinton did in 1995, but by 2001 the same behavior had become, when conducted quietly, “accepted,” if not “acceptable.”
When a reporter asked what the difference was between Clinton’s fund-raising and the fund-raising that President George W. Bush was engaging in with fat cats in the White House, citing many specifics like “overnights in the Lincoln bedroom,” Ari Fleischer, Bush’s spokesperson in 2001, dismissed the reporter with a simple, “We haven’t had overnights in the Lincoln bedroom.” Fleischer did not deny any of the other similarities between the two parties in the selling of White House access; the two parties often act like two football teams with the same owner.