Tag Archives: Social Science

Now You See the Bias Inherent in the System

When I was a graduate student, one of the big fields of study was the temperature of the cosmic microwave background. The studies were converging on a value of 2.7 degrees with increasing precision. In fact, they were converging a little too well, according to one scientist I worked with.

If you measure something like the temperature of the cosmos, you will never get precisely the right answer. There is always some uncertainty (2.7, give or take a tenth of a degree) and sometimes a bias (a systematic offset that makes you report 2.9 when the truth is 2.7). So the results should span a range of values consistent with what we know about the limitations of the method and the technology. This scientist claimed that the range was too small. As he said, “You get the answer. And if it’s not the answer you wanted, you smack your grad student and tell him to do it right next time.”

It’s not that people were faking the data or tilting their analysis. It’s that knowing the answer in advance can create subtle confirmation biases. Any scientific analysis is going to have a bias: an analytical or instrumentation effect that throws off the answer. A huge amount of work is invested in ferreting out and correcting for these biases. But there is a danger when scientists think they know the answer in advance. If their result is off from the consensus, they will pore through the data looking for some effect that biased it. But if it is close, they won’t look as carefully.
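The “do it right next time” dynamic is easy to simulate. In this sketch (all numbers invented), every team measures honestly, but a biased team quietly re-runs its analysis whenever the result strays from the consensus value. The published scatter ends up far smaller than the honest measurement error, which is exactly the “converging a little too well” symptom.

```python
import random

random.seed(1)
TRUE_VALUE = 2.7   # the quantity being measured
SIGMA = 0.1        # honest measurement scatter
CONSENSUS = 2.7    # the answer everyone "knows" in advance

def honest(n):
    """Each team publishes its first result, scatter and all."""
    return [random.gauss(TRUE_VALUE, SIGMA) for _ in range(n)]

def biased(n, tolerance=0.05):
    """Teams redo the analysis until the result lands near the consensus."""
    results = []
    for _ in range(n):
        x = random.gauss(TRUE_VALUE, SIGMA)
        while abs(x - CONSENSUS) > tolerance:  # "do it right next time"
            x = random.gauss(TRUE_VALUE, SIGMA)
        results.append(x)
    return results

def spread(xs):
    """Standard deviation of the published results."""
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

h = spread(honest(1000))
b = spread(biased(1000))
print(f"honest scatter: {h:.3f}, biased scatter: {b:.3f}")
```

Note that nobody in this toy model fakes anything; selective re-checking alone shrinks the error bars below what the instrument can actually deliver.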

Megan McArdle flags two separate instances of this in the social sciences. The first is the long-standing claim that conservatives are authoritarian while liberals are not:

Jonathan Haidt, one of my favorite social scientists, studies morality by presenting people with scenarios and asking whether what happened was wrong. Conservatives and liberals give strikingly different answers, with extreme liberals claiming to place virtually no value at all on things like group loyalty or sexual purity.

In the ultra-liberal enclave I grew up in, the liberals were at least as fiercely tribal as any small-town Republican, though to be sure, the targets were different. Many of them knew no more about the nuts and bolts of evolution and other hot-button issues than your average creationist; they believed it on authority. And when it threatened to conflict with some sacred value, such as their beliefs about gender differences, many found evolutionary principles as easy to ignore as those creationists did. It is clearly true that liberals profess a moral code that excludes concerns about loyalty, honor, purity and obedience — but over the millennia, man has professed many ideals that are mostly honored in the breach.

[Jeremy] Frimer is a researcher at the University of Winnipeg, and he decided to investigate. What he found is that liberals are actually very comfortable with authority and obedience — as long as the authorities are liberals (“should you obey an environmentalist?”). And that conservatives then became much less willing to go along with “the man in charge.”

Frimer argues that conservatives tend to support authority because they think authority is conservative; liberals tend to oppose it for the same reason. Liberal or conservative, it seems, we’re all still human under the skin.

Exactly. The deference to authority for conservatives and liberals depends on who is wielding said authority. If it’s a cop or a religious figure, conservatives tend to trust them and liberals are skeptical. If it’s a scientist or a professor, liberals tend to trust them and conservatives are rebellious.

Let me give an example. Liberals love to cite the claim that 97% of climate scientists agree that global warming is real. In fact, this week they are running “97 Hours of Consensus,” posting 97 quotes from scientists about global warming. But what is this but an appeal to authority? I don’t care if 100% of scientists agree on global warming: they still might be wrong. If there is something wrong with the temperature data (I don’t think there is), then they are all wrong.

The thing is, that appeal to authority does get at something useful. You should accept that global warming is very likely real, but not because 97% of scientists agree. A “consensus” is no more interesting for global warming than it is for germ theory. It’s the data supporting global warming that is convincing. And when scientists fall back on the data, not their authority, I become more convinced.

If I told liberals that we should ignore Ferguson because 97% of cops thought the shooting was justified, they wouldn’t say, “Oh, well that settles it.” If I said that 97% of priests agreed that God exists, they wouldn’t say, “Oh, well that settles it.” Hell, this applies even to things that aren’t terribly controversial. Liberals are more than happy to ignore the “consensus” on the unemployment effects of minimum wage hikes or the safety of GMO crops.

I’m drifting from the point. The point is that the studies showing that conservatives are more “authoritarian” were biased. They asked only about certain authority figures, not all of them. And since this was what the mostly liberal social scientists expected, they didn’t question it. McArdle gets into this in her second article, which takes on the claim, based on two small studies, that conservative views come from “low-effort thought.”

In both studies, we’re talking about differences between groups of 18 to 19 students, and again, no mention of whether the issue might be disinhibition — “I’m too busy to give my professor the ‘right’ answer, rather than the one I actually believe” — rather than “low-effort thought.”

I am reluctant to make sweeping generalizations about a very large group of people based on a single study. But I am very reluctant indeed when it turns out those generalizations are based on 85 drunk people and 75 psychology students.

I do not have a scientific study to back me up, but I hope that you’ll permit me a small observation anyway: We are all of us fond of low-effort thought. Just look at what people share on Facebook and Twitter. We like studies and facts that confirm what we already believe, especially when what we believe is that we are nicer, smarter and more rational than other people. We especially like to hear that when we are engaged in some sort of bruising contest with those wicked troglodytes — say, for political and cultural control of the country we both inhabit. When we are presented with what seems to be evidence for these propositions, we don’t tend to investigate it too closely. The temptation is common to all political persuasions, and it requires a constant mustering of will to resist it.

One of these studies found that drunk students were more likely to express conservative views than sober ones and concluded that this was because it is easier to think conservatively when alcohol is inhibiting your thought process. The bias there is simply staggering. They didn’t test the students before they started drinking (heavy drinkers might skew conservative). They didn’t consider social disinhibition, an effect I have mentioned before in connection with studies claiming that hungry or “stupid” men like bigger breasts. This was a study designed with its conclusion in mind.
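A quick null simulation shows just how noisy groups of 18 or 19 students are (the score scale and spread here are invented for illustration): draw two groups from the same population, so the true difference is zero, and see how often a sizable gap appears anyway.

```python
import random

random.seed(0)

def group_gap(n=19):
    """Gap in mean 'conservatism score' between two groups drawn from the
    SAME population on a 1-7 scale, so the true difference is zero."""
    a = [random.gauss(4.0, 1.5) for _ in range(n)]
    b = [random.gauss(4.0, 1.5) for _ in range(n)]
    return abs(sum(a) / n - sum(b) / n)

gaps = [group_gap() for _ in range(10_000)]
big = sum(g > 0.5 for g in gaps) / len(gaps)
print(f"spurious gaps above half a point: {big:.0%}")
```

With these assumptions, roughly 30% of null comparisons produce a gap of half a point or more by chance alone, which is why a single small study should move nobody.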

All sciences are in danger of confirmation bias. My advisor was very good about sidestepping it. When we got the answer we expected, he would say, “something is wrong here” and make us go over the data again. But the social sciences seem more subject to confirmation bias for various reasons: the answers in the social sciences are more nebulous, the biases are more subtle, the “observer effect” is more real and, frankly, some social scientists lack the statistical acumen to parse data properly (see the hurricane study discussed earlier this year). But I also think there is an increased danger because of the immediacy of the issues. No one has a personal stake in the time-resolved behavior of an active galactic nucleus. But people have very personal stakes in politics, economics and sexism.

Megan also touches on what I’ve dubbed the Scientific Peter Principle: a study garnering enormous amounts of attention is likely erroneous. The reason is that when you do something wrong in a study, it will usually manifest as a false result, not a null result. Null results are usually the product of doing your research right, not of doing it wrong. Take the sexist hurricane study from earlier this year. Had the scientists done their research correctly (limiting their data to post-1978, say, or applying a K-S test), they would have found no connection between the femininity of hurricane names and their deadliness. As a result, we would never have heard about it. In fact, other scientists may already have done that analysis and either not bothered to publish it or published it quietly.

But because they did their analysis wrong (assigning an index to the names, sub-sampling the data only in ways that supported the hypothesis), they got a result. And because they had a surprising result, they got publicity.
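The effect of hunting through subsamples is easy to demonstrate. This sketch uses simulated data, not the real hurricane dataset, and a crude z statistic rather than a K-S test (the point is the hunting, not the particular test): both name groups are drawn from the same distribution, yet trimming the data many different ways can make the gap look much bigger than the honest full-sample comparison.

```python
import random
import statistics

random.seed(42)

# Null data: "deaths" for feminine- and masculine-named storms drawn from
# the SAME distribution, so any apparent difference is pure noise.
feminine = [random.expovariate(1 / 20) for _ in range(45)]
masculine = [random.expovariate(1 / 20) for _ in range(45)]

def z_stat(a, b):
    """Rough two-sample z statistic for the difference in means."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

full = abs(z_stat(feminine, masculine))

# The hunt: trim the sample many different ways, keep the best-looking cut.
best = 0.0
for lo in range(0, 15):
    for hi in range(25, 46):
        best = max(best, abs(z_stat(feminine[lo:hi], masculine[lo:hi])))

print(f"full sample |z| = {full:.2f}, best trimmed subsample |z| = {best:.2f}")
```

The best-looking cut is guaranteed to be at least as “significant” as the honest full-sample result, and usually more so; report only the cut and you have manufactured a finding from noise.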

This happens quite a bit. The CDC got lots of headlines when they exaggerated the number of obesity deaths by a factor of 14. Scottish researchers got attention when they erroneously claimed that smoking bans were saving lives. The EPA got headlines when they deliberately biased their analysis to claim that second-hand smoke was killing thousands.

Cognitive bias, in combination with the Scientific Peter Principle, is incredibly dangerous.

Halloween Linkorama

Three stories today:

  • Bill James once said that, when politics is functioning well, elections should have razor-thin margins. The reason is that the parties will align themselves to best exploit divisions in the electorate. If one party is getting only 40% of the vote, it will quickly re-align to win more votes. The other party will respond, and they will reach a natural equilibrium near 50%. I think that is the missing key to understanding why so many governments are divided. The Information Age has not only given political parties more information with which to align themselves with the electorate, it has made the electorate more responsive. The South was utterly loyal to the Democrats for 120 years. Nowadays, that kind of political loyalty is fading.
  • I love this piece about how an accepted piece of sociology turned out to be complete gobbledygook.
  • Speaking of gobbledygook, here is a review of the article about men ogling women. It sounds like the authors misquoted their own study.
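The Bill James equilibrium argument in the first item can be sketched with a toy model (everything here is invented for illustration): voters sit uniformly on a one-dimensional ideological line, each votes for the nearer party, and after every election the losing party shifts partway toward the winner. Vote shares head for 50/50 regardless of where the parties start.

```python
def vote_share(a, b):
    """Vote share of the party at position a, with voters uniform on [0, 1]."""
    if a == b:
        return 0.5
    cut = (a + b) / 2            # voters below the midpoint back the lower party
    return cut if a < b else 1.0 - cut

a, b = 0.2, 0.9                  # initial platforms, deliberately lopsided
for election in range(50):
    if vote_share(a, b) < 0.5:   # party A lost: move 20% of the way toward B
        a += 0.2 * (b - a)
    else:                        # party B lost: move toward A
        b += 0.2 * (a - b)

print(f"final platforms: {a:.3f}, {b:.3f}; A's vote share: {vote_share(a, b):.3f}")
```

This is just median-voter logic: the platforms also converge toward each other, which is the realignment the item describes.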
Mathematical Questions: Guns Yet Again

    I’m not going to call this mathematical malpractice because I don’t think it’s been reviewed or published yet. But the way the study is quoted in the press makes me highly dubious of its conclusions:

    There are approximately 7,500 child hospitalizations and 500 in-hospital deaths each year due to injuries sustained from guns. In an abstract presented Oct. 27 at the American Academy of Pediatrics (AAP) National Conference and Exhibition in Orlando, researchers also identified a link between the percentage of homes with guns and the prevalence of child gunshot injuries.

    In “United States Gunshot Violence—Disturbing Trends,” researchers reviewed statistics from the Kids’ Inpatient Database (KID) from 1997, 2000, 2003, 2006 and 2009 (for a total of 36 million pediatric hospital admissions), and estimated state household gun ownership using the most recent Behavioral Risk Factor Surveillance System data (2004).

    The study found that approximately 7,500 children are admitted to the hospital for the treatment of injuries sustained from guns each year, and more than 500 children die during hospital admission from these injuries. Between 1997 and 2009, hospitalizations from gunshot wounds increased from 4,270 to 7,730, and in-hospital deaths from 317 to 503.

    Several things that raise alarm bells:

  • The study is of five very specific years rather than of all twelve years.
  • The study uses the KID database for hospitalizations. Looking over the details of this database shows that these five years are the only years it covers (it is apparently compiled every three years). However, the number of participating hospitals has increased over time. This could induce a variety of biases, not all of them obvious. The numbers could also be biased by an increasing tendency to hospitalize gunshot victims, or because KID, after 1997, uses a larger age range, adding 18- and 19-year-olds, who are more likely to be engaged in criminal activity. I’d have to see the paper to know how they accounted for these. I suspect the biases are very strong compared to the overall signal, and the reliability of the result depends critically on how they are handled.
  • The press release does not specify an age range but KID tracks patients up to age 18 (in 1997) and age 20 (from 2000 on). This isn’t exactly what people think of with “kids” and this age range has been used before to inflate the number of kids who are victims of gun violence.
  • The trend of massively increasing violence is the complete opposite of what every study of criminal violence shows. Murder is down, assault is down, gun violence is down down down according to crime stats, FBI stats and victim surveys.
    I’m not here to slander anyone’s work. They may already have addressed the points I raise above. My read is that this is an abstract, not a refereed paper. So all I can do is point out the obvious pitfalls that might be causing this abstract to contradict everything else we know about violent crime. The gun control side, as I have documented on this blog, has a history of twisting the stats, sometimes unintentionally. And the media have a tendency to exaggerate the results of tentative early studies when it suits their narrative.
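The participating-hospital worry can be made concrete with a toy calculation. The 1997 and 2009 admission totals come from the abstract quoted above; the intermediate values and all of the hospital counts are invented for illustration (the real KID applies survey weights to correct for exactly this kind of growth, and the question is how well):

```python
# Raw counts can rise steeply even when the per-hospital rate is nearly flat,
# if the number of reporting hospitals grows over the same period.
years      = [1997, 2000, 2003, 2006, 2009]
admissions = [4270, 5200, 6100, 6900, 7730]  # endpoints real; middle values invented
hospitals  = [2500, 3100, 3700, 4200, 4500]  # invented reporting-hospital counts

rates = [a / h for a, h in zip(admissions, hospitals)]
for y, a, r in zip(years, admissions, rates):
    print(f"{y}: {a} admissions, {r:.2f} per reporting hospital")
```

In this toy series the raw count rises 81% while the per-hospital rate barely moves; without knowing how the authors weighted for participation, the headline trend is uninterpretable.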

    (The gun control side also has a history of outright fraud but I am very dubious that any fraud is occurring here because it would be too easy to check. They are using a public database, not proprietary data.)

    There’s also the Scientific Peter Principle to consider here. If you hear of a study with startling results, it is most likely to be erroneous. That goes doubly for unrefereed abstracts presented at conferences. The reason is that errors and biases almost always give you unexpected results. For veteran scientists, that’s often how you spot biases, but even the best scientists can be fooled. That this study indicates a massive increase in violence at a time when every other study indicates that violence is falling (as is gun ownership) causes me to be concerned that something is wrong.
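That selection effect can be put into toy numbers (all rates invented): suppose 10% of studies carry a serious error, sound studies rarely produce surprising results, and flawed ones often do. Conditioning on “surprising” then concentrates the flawed studies:

```python
import random

random.seed(7)

N = 100_000
FLAWED_RATE = 0.10        # invented: 10% of studies have a serious error
P_SURPRISE_SOUND = 0.05   # sound studies rarely contradict everything else
P_SURPRISE_FLAWED = 0.50  # errors usually show up as unexpected results

surprising = flawed_and_surprising = 0
for _ in range(N):
    flawed = random.random() < FLAWED_RATE
    p = P_SURPRISE_FLAWED if flawed else P_SURPRISE_SOUND
    if random.random() < p:
        surprising += 1
        flawed_and_surprising += flawed

ratio = flawed_and_surprising / surprising
print(f"flawed fraction among surprising results: {ratio:.0%}")
```

With these made-up rates, over half of the surprising results are flawed even though only a tenth of all studies are: the Scientific Peter Principle in miniature (it is just Bayes’ theorem).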

    Two things I would place small bets on. Within a few months, this study will be shown to be flawed in some way and its conclusions toned down. And for the next ten years, the initial abstract will be quoted by gun control advocates as proof of their position (e.g., the selective quoting of Mother Jones).

Tuesday Linkorama

  • All right, here’s the thing about the “study” claiming that Congressional speaking patterns have gotten simpler. Notice that from ’96 to ’06, the speaking grade level was higher, and especially high among Republicans. How come we didn’t read all these articles about what intelligent speakers the Republicans were? It didn’t fit the narrative, that’s why.
  • I love me some new web browsers, but calling it Axis? Is it being tested in Poland and China?
  • Looks like the mainstream media has discovered Chagas disease. I remember my first visit to Campanas, when they tried to scare the new guy with stories about vinchucas.
  • How a story goes viral. Personally, I find the story amusing and cute.
  • A fascinating breakdown of where your airfare goes.
Friday Linkorama

  • Fun with data. The thing is, some social scientist would probably publish this seriously.
  • Your inspiration for the week. Most people are so good.
  • Cool medieval art. I’m so glad I get to enjoy cool medieval stuff (art, literature, professors with armor) and none of the bad stuff (famine, disease, war and death).
  • Because it’s Friday: cute cats.
  • Some questions don’t need to be answered.
Mathematical Malpractice Watch: 10 Billion

    Seriously? You think we can project population growth a century in advance? Really? You have Nigeria quintupling in population over the next century. That’s predicting the reproductive habits of people whose great-grandparents have yet to be born.
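The arithmetic behind that skepticism is simple compounding: a half-point error in the assumed annual growth rate, sustained for a century, changes the projected population by a factor of two or more. The rates below are illustrative, not anyone’s actual projection.

```python
# Century multipliers for a few assumed annual growth rates.
growth = {}
for rate in (0.010, 0.015, 0.020):
    growth[rate] = (1 + rate) ** 100
    print(f"{rate:.1%}/yr for 100 years -> population x{growth[rate]:.2f}")
```

Going from 1.0% to 2.0% annual growth multiplies the century-end figure by about 2.7, so projections out to 2100 are hostage to tiny assumptions about rates that cannot be known.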

    What a load of crap.

    Social scientists simply never learn. Every single population projection we have seen for the last fifty years has been too high. Why should we trust them now?