Posts Tagged ‘Mathematical Malpractice’

Mathematical Malpractice Watch: Torturing the Data

Thursday, August 28th, 2014

There’s been a kerfuffle recently about a supposed CDC whistleblower who has revealed malfeasance in the primary CDC study that refuted the connection between vaccines and autism. Let’s put aside that the now-retracted Lancet study the anti-vaxxers tout as the smoking gun was a complete fraud. Let’s put aside that other studies have reached the same conclusion. Let’s just address the allegations at hand, which include a supposed cover-up. These allegations are in a published paper (now under further review) and a truly revolting video from Andrew Wakefield — the disgraced author of the fraudulent Lancet study that set off this mess — that compares this “cover-up” to the Tuskegee experiments.

According to the whistleblower, his analysis shows that while most children do not have an increased risk of autism (which, incidentally, discredits Wakefield’s study), black males vaccinated before 36 months show a 240% increased risk (not 340%, as has been claimed). You can catch the latest from Orac. Here’s the most important part:

So is Hooker’s result valid? Was there really a 3.36-fold increased risk for autism in African-American males who received MMR vaccination before the age of 36 months in this dataset? Who knows? Hooker analyzed a dataset collected to be analyzed by a case-control method using a cohort design. Then he did multiple subset analyses, which, of course, are prone to false positives. As we also say, if you slice and dice the evidence more and more finely, eventually you will find apparent correlations that might or might not be real.

In other words, what he did was slice and dice the sample to see if one of those slices would show a correlation. But by pure chance, one of those slices could show a correlation even if there wasn’t one. As this cartoon best illustrates, if you run twenty tests on something that has no real correlation, statistics dictate that at least one of them will likely show a spurious correlation at the 95% confidence level. This is one of the reasons many scientists, especially geneticists, are turning to Bayesian analysis, which can account for this.
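The arithmetic behind that cartoon is easy to check. Here is a minimal sketch in plain Python, using the fact that under the null hypothesis (no real effect anywhere) each test’s p-value is uniformly distributed:

```python
import random

# Under the null hypothesis, each test's p-value is uniform on [0, 1], so a
# single test is a false positive at the 95% confidence level with
# probability 0.05. Run 20 such tests and see how often at least one "hits".
random.seed(1)

def at_least_one_false_positive(n_tests=20, alpha=0.05):
    """Simulate n_tests independent null tests; True if any is 'significant'."""
    return any(random.random() < alpha for _ in range(n_tests))

trials = 100_000
hits = sum(at_least_one_false_positive() for _ in range(trials))
print(f"simulated chance of a spurious 'finding': {hits / trials:.2f}")
print(f"analytic chance, 1 - 0.95**20:           {1 - 0.95**20:.2f}")  # ~0.64
```

So roughly two times out of three, a twenty-way slice-and-dice of pure noise hands you at least one “significant” result.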

If you did a study of just a few African-American boys and found a connection between vaccination and autism, it would be the sort of preliminary shaky result you would use to justify looking at a larger sample … such as the full CDC study that the crackpot’s own analysis shows refutes such a connection. To take a large comprehensive study, narrow it down to a small sample and then claim that the results of this small sample override those of the large one is ridiculous. It’s the opposite of how epidemiology works (and there is no suggestion that there is something about African-American males that makes them more susceptible to vaccine-induced autism).

This sort of ridiculous cherry-picking happens a lot, mostly in political contexts. Education reformers will pore over test results until they find that fifth graders slightly improved their reading scores and claim their reform is working. When the scores revert the next year, they ignore it. Drug warriors will pore over drug stats and claim that a small drop in heroin use among people in their 20s indicates that the War on Drugs is finally working. When it reverts to normal, they ignore it.

You can’t pick and choose little bits of data to support your theory. You have to be able to account for all of it. And you have to be aware of how often spurious results pop up even in the most objective and well-designed studies, especially when you parse the data finer and finer.

But the anti-vaxxers don’t care about that. What they care about is proving that evil vaccines and Big Pharma are poisoning us. And however they have to torture the data to get there, that’s what they’ll do.

Mathematical Malpractice Watch: Hurricanes

Monday, June 2nd, 2014

There’s a new paper out claiming that hurricanes with female names tend to be deadlier than ones with male names, based on hurricane data going back to 1950. The authors attribute this to gender bias: the idea that people don’t take hurricanes with female names seriously.

No, this is not The Onion.

I immediately suspected a bias. For one thing, even with their database, we’re talking about 92 events, many of which killed zero people. More important, all hurricanes had female names until 1979. What else was true before 1979? We had a lot less advance warning of hurricanes. In fact, if you look up the deadliest hurricanes in history, they all come either from before we named storms or from the era when hurricanes all had female names. In other words, the authors may just be measuring the decline in hurricane deadliness.

Now it’s possible that the authors used some sophisticated model that also accounts for hurricane strength. If so, that might mitigate my criticism. But I’m dubious. I downloaded their spreadsheet, which is available from the journal website. Here is what I found:

Hurricanes before 1979 averaged 27 people killed.

Hurricanes since 1979 averaged 16 people killed.

Hurricanes since 1979 with male names averaged … 16 people killed.

Hurricanes since 1979 with female names averaged … 16 people killed.

Maybe I’m missing something. How did this get past a referee?
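The split-and-average check described above takes only a few lines of code. The records below are invented to echo the averages quoted; the real check would load the deaths and name-gender columns from the paper’s spreadsheet:

```python
# Toy records for illustration only; not the actual archive data.
storms = [
    {"year": 1957, "gender": "F", "deaths": 35},
    {"year": 1969, "gender": "F", "deaths": 19},
    {"year": 1983, "gender": "M", "deaths": 10},
    {"year": 1992, "gender": "M", "deaths": 22},
    {"year": 1995, "gender": "F", "deaths": 7},
    {"year": 2004, "gender": "F", "deaths": 25},
]

def mean_deaths(rows):
    """Average death toll over a list of storm records."""
    return sum(r["deaths"] for r in rows) / len(rows) if rows else 0.0

pre = [s for s in storms if s["year"] < 1979]
post = [s for s in storms if s["year"] >= 1979]
print("pre-1979 mean: ", mean_deaths(pre))    # 27.0
print("post-1979 mean:", mean_deaths(post))   # 16.0
for g in ("M", "F"):
    sub = [s for s in post if s["gender"] == g]
    print(f"post-1979 {g} mean:", mean_deaths(sub))  # 16.0 for each
```

If a two-minute script like this can split the sample at 1979 and by name gender, a referee could have too.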

Update: Ed Yong raises similar points here. The authors say that cutting the sample at 1979 made the numbers too small, so they instead used an index of how feminine or masculine the names were. I find that dubious when a plain and simple average will give you an answer. Moreover, they offer this qualification in the comments:

What’s more, looking only at severe hurricanes that hit in 1979 and afterwards (those above $1.65B median damage), 16 male-named hurricanes each caused 23 deaths on average whereas 14 female-named hurricanes each caused 29 deaths on average. This is looking at male/female as a simple binary category in the years since the names started alternating. So even in that shorter time window since 1979, severe female-named storms killed more people than did severe male-named storms.

You be the judge. I averaged 54 post-1978 storms totaling 1,200 deaths and got identical numbers for male- and female-named storms. They narrowed it to 30 storms totaling 800 deaths and claimed a bias based on 84 excess deaths. That comes across as stretching to make a point.

Update: My friend Peter Yoachim ran a K-S test on the data and got a p-value of 0.97: the male- and female-named hurricanes are completely consistent with being drawn from the same distribution. This is a standard test of the null hypothesis and apparently wasn’t done at all. Ridiculous.
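For what it’s worth, the two-sample K-S test is simple enough to sketch from scratch. Everything below is illustrative: the death tolls are made up, the asymptotic p-value formula is rough for samples this small, and ties are not handled specially.

```python
import math

def ks_2samp(a, b):
    """Two-sample Kolmogorov-Smirnov test: the D statistic (largest gap
    between the two empirical CDFs) and an asymptotic p-value for the null
    hypothesis that both samples come from the same distribution."""
    a, b = sorted(a), sorted(b)
    n, m = len(a), len(b)
    i = j = 0
    d = 0.0
    # Walk both sorted samples, tracking the largest CDF gap.
    while i < n and j < m:
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / n - j / m))
    # Kolmogorov asymptotic distribution: Q(lam) = 2 * sum (-1)^(k-1) e^(-2 k^2 lam^2)
    lam = d * math.sqrt(n * m / (n + m))
    p = 2 * sum((-1) ** (k - 1) * math.exp(-2 * k * k * lam * lam)
                for k in range(1, 101))
    return d, max(0.0, min(1.0, p))

male = [5, 10, 15, 21, 62]      # made-up death tolls, for illustration only
female = [3, 9, 17, 25, 52]
d, p = ks_2samp(male, female)
print(f"D = {d:.2f}, p = {p:.2f}")  # a large p is consistent with one distribution
```

A p-value this large says the two samples give you no reason at all to reject the null hypothesis.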

The Return of Linkorama

Saturday, February 22nd, 2014

Linkoramas are getting rarer these days, mostly because I tweet most articles. But I will still occasionally post something more long-form.

To wit:

  • A fascinating article about how Vermeer used a camera obscura to enable his paintings. Yet another example about how people were pretty damn clever in the supposedly unenlightened past.
  • This is a couple of months late, but someone posted Truman Capote’s Christmas story. The recent death of Philip Seymour Hoffman reminded me of this little gem.
  • This is the second and by far the largest study yet to show that routine mammography is basically a gigantic waste of money, being just as likely to precipitate unnecessary treatment as to discover a tumor that a breast exam wouldn’t. Do you think our “evidence-based” government will embrace this? No way. They already mandated mammogram coverage when the first study showed it to be a waste.
  • I don’t even know if this counts as mathematical malpractice. There’s no math at all. It’s just “Marijuana! RUN!”. Simply appalling reporting by the MSM.
  • This on the other hand, does count as mathematical malpractice. The gun control advocates are hyping a Missouri study that shows a rise in murder rate after a change in the gun control laws. However, in doing so they are ignoring data from 17 other states, data on all other forms of violent crime and data from Missouri that showed a steep rise in the murder rate before the laws were changed. They are picking a tiny slice of data to make a huge claim. Disgraceful. And completely expected from the gun-grabbers.
  • I love color photos from history. Just love them.
  • This is old but worth reposting: one of the biggest feminist texts out there is loaded with garbage data, easily checked facts that are completely wrong. This was a big reason I distanced myself from third-wave feminism in college: it had been taken over by crackpots who would believe any statistic as long as it was bad. In college, we were told that one in three women are raped (they aren’t), that abuse is the leading cause of admission to ERs (it isn’t), that violence erupts every Super Bowl (it doesn’t). I even had one radical tell me — with no apparent self-awareness — that murder was the second leading cause of death among women (it’s not even close). As I seem to say about everything: reality is bad enough; we don’t need to invent stuff.
    Mathematical Malpractice Watch: A Trilogy of Error

    Wednesday, February 12th, 2014

    Three rather ugly instances of mathematical malpractice have caught my attention in the last month. Let’s check them out.

    The Death of Facebook or How to Have Fun With Out of Sample Data

    Last month, Princeton researchers came out with the rather spectacular claim that the social network Facebook would be basically dead within a few years. The quick version is that they fit an epidemiological model to the rise and fall of MySpace. They then used that same model, varying the parameters, to fit Google trends on searches for Facebook. They concluded that Facebook would lose 80% of its customers by 2017.

    This was obviously nonsense, as detailed here and here. It suffered from many flaws, notably assuming that the rise and fall of MySpace was necessarily a model for all social networks and the dubious method of using Google searches, instead of publicly available traffic data, as their metric.

    But there was a deeper flaw. The authors fit a model of a sharp rise and fall. They then proclaimed that this model works because Facebook’s Google data follow the first half of that trend and a little bit of the second. But while the decline in Facebook Google searches is consistent with their model, it is also consistent with hundreds of others. It would be perfectly consistent with a model that predicts a sharp rise and then a leveling off as the social network saturates. Their data are consistent with, but cannot discriminate among, just about any model.

    The critical part of the data — the predicted sharp fall in Facebook traffic — is out of sample (meaning it hasn’t happened yet). But based on a tiny sliver of data, they have drawn a gigantic conclusion. It’s Mark Twain and the length of the Mississippi River all over again.

    We see this a lot in science, unfortunately. Global warming models often predict very sharp rises in temperature — out of sample. Models of the stock market predict crashes or runs — out of sample. Sports twerps put together models that predict Derek Jeter will get 4000 hits — out of sample.

    Anyone who does data fitting for a living knows this danger. The other day, I fit a light curve to a variable star. Because of an odd interaction of Fourier parameters, the model predicted a huge rise in brightness in the middle of the decay phase, where there were no data to constrain it. So it fit a small uptick in the decay phase as though it were the beginning of a massive re-brightening.

    The more complicated the model, the more danger there is of drawing massive conclusions from tiny amounts of data or small trends. If the model is anything other than a straight line, be very very wary at out-of-sample predictions, especially when they are predicting order-of-magnitude changes.
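A toy illustration of the danger (not the actual light-curve model): fit a maximally flexible polynomial to data that have leveled off, then ask it what happens just outside the sample.

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the unique degree-(n-1) polynomial through the points (xs, ys)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# In-sample: a quantity that rises and then plateaus, with a little noise.
xs = [0, 1, 2, 3, 4, 5]
ys = [1.0, 1.9, 2.6, 3.0, 3.05, 3.02]

# The flexible model matches every in-sample point exactly and looks
# reasonable between them...
print(lagrange_interpolate(xs, ys, 3.5))   # stays near the plateau
# ...but extrapolates wildly just outside the data.
print(lagrange_interpolate(xs, ys, 8))     # far above the plateau
```

The model is “perfect” in sample; the out-of-sample prediction is pure invention by the fitting function.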

    A Rape Epidemic or How to Reframe Data:

    The CDC recently released a study claiming that 1.3 million women were raped and 12.6 million more were subjected to sexual violence in 2010. This is six or more times the estimate from the Bureau of Justice Statistics’ extremely rigorous National Crime Victimization Survey (NCVS). Christina Hoff Sommers has a breakdown of why the number is so massive:

    It found them by defining sexual violence in impossibly elastic ways and then letting the surveyors, rather than subjects, determine what counted as an assault. Consider: In a telephone survey with a 30 percent response rate, interviewers did not ask participants whether they had been raped. Instead of such straightforward questions, the CDC researchers described a series of sexual encounters and then they determined whether the responses indicated sexual violation. A sample of 9,086 women was asked, for example, “When you were drunk, high, drugged, or passed out and unable to consent, how many people ever had vaginal sex with you?” A majority of the 1.3 million women (61.5 percent) the CDC projected as rape victims in 2010 experienced this sort of “alcohol or drug facilitated penetration.”

    What does that mean? If a woman was unconscious or severely incapacitated, everyone would call it rape. But what about sex while inebriated? Few people would say that intoxicated sex alone constitutes rape — indeed, a nontrivial percentage of all customary sexual intercourse, including marital intercourse, probably falls under that definition (and is therefore criminal according to the CDC).

    Other survey questions were equally ambiguous. Participants were asked if they had ever had sex because someone pressured them by “telling you lies, making promises about the future they knew were untrue?” All affirmative answers were counted as “sexual violence.” Anyone who consented to sex because a suitor wore her or him down by “repeatedly asking” or “showing they were unhappy” was similarly classified as a victim of violence. The CDC effectively set a stage where each step of physical intimacy required a notarized testament of sober consent.

    In short, they did what is called “reframing”. They took someone’s experiences, threw away that person’s definition of them and substituted their own definition.

    This isn’t the first time this has happened with rape stats, nor the first time Sommers has uncovered this sort of reframing. Here is an account of how researchers decided that women who didn’t think they had been raped had, in fact, been raped, so that they could claim a victimization rate of one in four.

    Scientists have to classify things all the time based on a variety of criteria. The universe is a messy continuum; to understand it, we have to sort things into boxes. I classify stars for a living based on certain characteristics. The problem with doing that here is that women are not inanimate objects. Nor are they lab animals. They can have opinions of their own about what happened to them.

    I understand that some victims may reframe their experiences to try to lessen the trauma of what happened to them. I understand that a woman can be raped but convince herself it was a misunderstanding or that it was somehow her fault. But to reframe a priori any woman’s experience is to treat women like lab rats, not human beings capable of making judgements of their own.

    But it also illustrates a mathematical malpractice problem: changing definitions. This is how 10,000 underage prostitutes in the United States becomes 200,000 girls “at risk”. This is how small changes in drug use stats become an “epidemic”. If you dig deep into the studies, you will find the truth. But the banner headline — the one the media talk about — is hopelessly and deliberately muddled.

    Sometimes you have to change definitions. The FBI changed its definition of rape in its crime statistics a few years ago and saw a significant increase in its estimates. But it’s one thing to hone; it’s another to completely redefine.

    (The CDC, as my friend Kevin Wilson pointed out, mostly does outstanding work. But they have a tendency to jump with both feet into moral panics. In this case, it’s the current debate about rape culture. Ten years ago, it was obesity. They put out a deeply flawed study that overestimated obesity deaths by a factor of 14. They quickly admitted their screwup but … guess which number has been quoted for the last decade on obesity policy?)

    You might ask why I’m on about this. Surely any number of rapes is too many. The reason I wanted to talk about this, apart from my hatred of bogus studies, is that data influences policy. If you claim that 1.3 million women are being raped every year, that’s going to result in a set of policy decisions that are likely to be very damaging and do very little to address the real problem.

    If you want a stat that means something, try this one: the incidence of sexual violence has fallen 85% over the last 30 years. That is from the NCVS data, so even if the survey over- or under-estimates the amount of sexual violence, the differential is meaningful. That trend tells you something useful: whatever we are doing to fight rape culture, it is working. Greater awareness, pushing back against blaming the victim, changes to federal and state laws, changes to the emphasis of attorneys general’s offices and the rise of internet pornography have all been cited as contributors to this trend.

    That’s why it’s important to push back against bogus stats on rape. Because they conceal the most important stat: the one that is the most useful guide for future policy and points the way toward ending rape culture.

    The Pending Crash or How to Play with Scales:

    Yesterday morning, I saw a chart claiming that recent stock market trends are an eerie parallel of the run-up to the 1929 crash. I was immediately suspicious because, even if the data were accurate, we see this sort of crap all the time. There are a million people who have made a million bucks on Wall Street claiming to pattern-match trends in the stock market. They make huge predictions, just like the Facebook study above. And those predictions are always wrong. Because, again, the real leverage lies in the out-of-sample data.

    This graph is even worse than that, though. As Quartz points out, the graph makers used two different y-axes. On one, the 1928–29 rise of the stock market was a near doubling. On the other, the 2013–14 rise was an increase of about 25%. When you scale them appropriately, the similarity vanishes. Or, alternatively, the pending “crash” would be just an erasure of that 25% gain.
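The honest comparison is to put both series on the same footing, for example percent change from each period’s starting level. The index levels below are rough illustrative values, not the actual chart data:

```python
# Rescale each series to percent change from its own starting value.
dow_1928_29 = [200, 240, 300, 380]          # roughly a near-doubling
dow_2013_14 = [13000, 13800, 15000, 16300]  # roughly a ~25% rise

def pct_change(series):
    """Percent change of each point relative to the first point."""
    return [100 * (x / series[0] - 1) for x in series]

print(pct_change(dow_1928_29))   # ends near +90%
print(pct_change(dow_2013_14))   # ends near +25%
```

On a common percent scale the two curves no longer look remotely alike, which is the whole point of the Quartz critique.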

    I’ve seen this quite a bit and it’s beginning to annoy me. Zoomed-in graphs of narrow ranges of the y-axis are used to draw dramatic conclusions about … whatever you want. This week, it’s the stock market. Next week, it’s global warming skeptics looking at little spikes on a 10-year temperature plot instead of big trends on a 150-year one. The week after, it will be inequality data. Here is one from Piketty and Saez, which tracks wealth gains for the rich against everyone else. Their conclusion might be accurate but the plot is useless because it is scaled to intervals of $5 million. So even if the bottom 90% were doing better, even if their income was doubling, it wouldn’t show up on the graph.

    Halloween Linkorama

    Sunday, November 3rd, 2013

    Three stories today:

  • Bill James once said that, when politics is functioning well, elections should have razor-thin margins. The reason is that the parties will align themselves to best exploit divisions in the electorate. If one party is only getting 40% of the vote, it will quickly re-align to get higher vote totals. The other party will respond and they will reach a natural equilibrium near 50%. I think that is the missing key to understanding why so many governments are divided. The Information Age has not only given political parties more information with which to align themselves with the electorate, it has made the electorate more responsive. The South was utterly loyal to the Democrats for 120 years. Nowadays, that kind of political loyalty is fading.
  • I love this piece about how an accepted piece of sociology turned out to be complete gobbledygook.
  • Speaking of gobbledygook, here is a review of the article about men ogling women. It sounds like the authors misquoted their own study.
    Mathematical Malpractice: Food Stamps

    Sunday, October 6th, 2013

    I’m sorry, but I’m going to have to call out my favorite website again.

    One of the things that drives budget hawks nuts is baseline spending. In baseline spending, government program X is projected to grow in the future and any slice of that growth that is removed by budget-cutters is called a “cut” even though it really isn’t.

    Let’s say you have a government program that pays people to think about how wonderful our government is. Call it the Positive Thinking Initiative and fund it at $1 billion. Future spending for PTI will be projected to grow a few percent a year for cost of living, a few percent for increased utilization, etc., so that, in FY2014, it’s a $1.2 billion program. And by FY2023, it’s a $6 billion program.

    Congress will then “cut” the funding a little bit so that, by FY2023 it’s “only” a $4 billion program. They’ll then claim a few billion in spending cuts and go off for tea and medals.
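The arithmetic of a baseline “cut” is worth making concrete. The growth rates here are hypothetical, chosen to land near the $6 billion and $4 billion figures above:

```python
def project(base, growth_rate, years):
    """Baseline projection: compound the current budget forward each year."""
    return [base * (1 + growth_rate) ** t for t in range(years + 1)]

base = 1.0                             # $1B program today
baseline = project(base, 0.20, 10)     # hypothetical ~20%/yr baseline growth
enacted = project(base, 0.15, 10)      # "cut" to ~15%/yr growth

print(f"baseline FY+10: ${baseline[-1]:.1f}B")   # ~$6.2B
print(f"enacted  FY+10: ${enacted[-1]:.1f}B")    # ~$4.0B
print(f"claimed 'cut':  ${baseline[-1] - enacted[-1]:.1f}B, "
      f"yet actual spending still grew ${enacted[-1] - base:.1f}B")
```

Spending quadruples, and Washington books a multi-billion-dollar “cut”.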

    This drives budget hawks nuts because it changes the language. It makes spending increases into spending “cuts” and makes actual spending cuts (or just level spending) into “savage brutal cuts”. This is one of the reasons the sequester drew as much opposition as it did. The sequester actually did cut spending for programs, but everyone was so used to the distorted language of Washington that they couldn’t distinguish a real cut from a faux cut.

    So I can understand where Ira Stoll is coming from when he claims that the cuts to the food stamp program aren’t actually cuts. The problem is that he’s not comparing apples to apples:

    The non-partisan Congressional Budget Office estimates that the House bill would spend $725 billion on food stamps over the years 2014 to 2023. The Department of Agriculture’s web site offers a summary of spending on the program that reports spending totaling $461.7 billion over the years 2003 to 2012, a period that included a dramatic economic downturn.

    This is a great example of how and why it is so difficult to cut government spending, and how warped the debate over spending has become. The Republicans want to increase food stamp spending 57 percent. The Democrats had previously planned to increase it by 65 percent (to $764 billion over 10 years instead of the $725 billion in the Republican bill), so they depict the Republicans as “meanspirited class warriors” seeking “deep cuts.”

    Stoll acknowledges the economic downturn but ignores that the time period he’s talking about includes five years of non-downturn time. Food stamp spending tracks unemployment; the economy is the biggest reason food stamp spending has exploded in recent years. So this isn’t really a spending “hike” so much as the CBO estimating that unemployment will be a bigger problem in the next decade than it was in the last one.

    Here is the CBO’s report. Pay particular attention to Figure 2, which clearly shows that food stamp spending will decline every year for the next decade (a little more sharply in inflation-adjusted terms). It will be a very long time before it is back to pre-recession levels, but it is, in fact, declining, even in nominal dollars. This isn’t a baseline trick; this is an actual decline.

    Spending (mostly for benefits and administrative costs) on SNAP in 2022 will be about $73 billion, CBO projects. In inflation-adjusted dollars, spending in 2022 is projected to be about 23 percent less than it was in 2011 but still about 60 percent higher than it was in 2007.

    In fact, long-term projections of food stamp spending are very problematic since they depend heavily on the state of the economy. If the economy is better than the CBO anticipates, food stamp spending could be down to pre-recession levels by the end of the decade.

    So with a program like food stamps, you really can’t play with decade-long projections the way Stoll does. That’s mathematical malpractice: comparing two completely different sets of budgets. The CBO does decade-long projections because it is obligated to. But the only thing you can really judge is year-to-year spending.

    Food stamp spending in FY2012 was $78 billion. FY2014 spending, under the Republican bill, will be lower than that (how much lower is difficult to pin down).

    That’s a cut, not an increase. Even by Washington standards.

    Mathematical Malpractice Watch: Cherry-Picking

    Sunday, September 15th, 2013

    Probably one of the most frustrating mathematical practices is the tendency of politicos to cherry-pick data: taking only the data points that are favorable to their point of view and ignoring all the others. I’ve talked about this before, but two stories circling the drain of the blogosphere illustrate this practice perfectly.

    The first is on the subject of global warming. Global warming skeptics have recently been crowing about two pieces of data that supposedly contradict the theory of global warming: a slow-down in temperature rise over the last decade and a “60% recovery” in Arctic sea ice.

    The Guardian, with two really nice animated GIFs, shows clearly why these claims are lacking. Sea ice levels vary from year to year. The long-term trend, however, has been a dramatic fall, with current sea ice levels being a third of what they were a few decades ago (and that’s just area: in terms of volume it’s much worse, with sea ice levels being a fifth of what they were). The 60% uptick is mainly because ice levels were so absurdly low last year that the natural year-to-year variation is equal to almost half the total area of ice. In other words, the year-to-year variation in sea ice has not changed — the baseline has shrunk so dramatically that the variations look big in comparison. This could easily be — and likely will be — matched by a 60% decline. Of course, that decline will be ignored by the very people hyping the “recovery”.
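The baseline effect is pure arithmetic. With a constant absolute year-to-year swing (the figures below are illustrative, not actual sea ice measurements), the percent change balloons as the baseline shrinks:

```python
swing = 1.2   # constant year-to-year swing, in million km^2 (illustrative)
# The same absolute swing, expressed as a percentage of ever-smaller baselines:
for baseline in (8.0, 6.0, 4.0, 2.0):
    pct = 100 * swing / baseline
    print(f"baseline {baseline:.0f} -> swing looks like {pct:.0f}%")
```

Nothing about the variability changed; only the denominator did. That is how an ordinary bounce becomes a headline-grabbing “60% recovery”.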

    Temperature does the same thing. If you look at the second gif, you’ll see the steady rise in temperature over the last 40 years. But, like sea ice levels, planetary temperatures vary from year to year. The rise is not perfect. But each time it levels or even falls a little, the skeptics ignore forty years worth of data.

    (That having been said, temperatures have been rising much slower for the last decade than they were for the previous three. A number of climate scientists now think we have overestimated climate sensitivity).

    But lest you think this sort of thing is only confined to the Right …

    Many people are tweeting and linking this article, which claims that Louie Gohmert spouted 12 lies about Obamacare in two minutes. Some of the things Gohmert said were not true. But others were, and still others cannot really be assessed at this stage. To take on the lies one by one:

    Was Obamacare passed against the will of the people?

    Nope. It was passed by a president who won the largest landslide in two decades and a Democratic House and Senate with huge majorities. It was passed with more support than the Bush tax cuts and Medicare Part D, both of which were entirely unfunded. And the law had a mostly favorable perception in 2010 before Republicans spent hundreds of millions of dollars spreading misinformation about it.

    The first bits of that are true but somewhat irrelevant: the Iraq War had massive support at first, but became very unpopular. The second is cherry-picked. Here is the Kaiser Foundation’s tracking poll on Obamacare (panel 6). Obamacare barely crested 50% support for a brief period, well within the noise. Since then, it has had higher unfavorables. If anything, those unfavorables have actually fallen slightly, not risen in response to “Republican lies”.

    Supporters of the law have devised a catch-22 on the PPACA: if support falls, it’s because of Republican money; if it rises it’s because people are learning to love the law. But the idea that there could be opposition to it? Perish the thought!

    Is Obamacare still against the will of American people?

    Actually, most Americans want it implemented. Only 6 percent said they wanted to defund or delay it in a recent poll.

    That is extremely deceptive. Here is the poll. Only 6% want to delay or defund the law because 30% want it completely repealed. Another 31% think it needs to be improved. Only 33% think the law should be allowed to take effect or be expanded.

    (That 6% should really jump out at you since it’s completely at variance with any political reality. The second I saw it, I knew it was garbage. Maybe they should have focus-group-tested it first to come up with some piece of bullshit that was at least believable.)

    Of the remaining questions, many are judgement calls on things that have yet to happen. National Memo asserts that Obamacare does not take away your decisions about health care, does not put the government between you and your doctor and will not keep seniors from getting the services they need. All of these are judgement calls about things that have yet to happen. There are numerous people — people who are not batshit crazy like Gohmert — who think that Obamacare and especially the IPAB will eventually create government interference in healthcare. Gohmert might be wrong about this. But to call it a lie when someone makes a prediction about what will happen is absurd. Let’s imagine this playing out in 2002:

    We rate Senator Liberal’s claim that we will be in Iraq for a decade and it will cost 5000 lives and $800 billion to be a lie. The Bush Administration has claimed that US troops will be on the ground for only a few years and expect less than a thousand casualties and about $2 billion per month. In fact, some experts predict it will pay for itself.

    See what I did there?

    Obamacare is a big law with a lot of moving parts. There are claims about how it is going to work but we won’t really know for a long time. Maybe the government won’t interfere with your health care. But that’s a big maybe to bet trillions of dollars on.

    The article correctly notes that the government will not have access to medical records. But then it asserts that any information will be safe. This point was overtaken by events this week when an Obamacare site leaked 2,400 Social Security numbers.

    See what I mean about “fact-checking” things that have yet to happen?

    Then there’s this:

    Under Obamacare, will young people be saddled with the cost of everybody else?

    No. Thanks to the coverage for students, tax credits, Medicaid expansion and the fact that most young people don’t earn that much, most young people won’t be paying anything or very much for health care. And nearly everyone in their twenties will see premiums far less than people in their 40s and 50s. If you’re young, out of school and earning more than 400 percent of the poverty level, you may be paying a bit more, but for better insurance.

    This is incorrect. Many young people are being coerced into buying insurance that they wouldn’t have bought before. As Avik Roy has pointed out, cheap high-deductible plans have been effectively outlawed. Many colleges and universities are seeing astronomical rises in health insurance premiums, including my own. The explosion of invasive wellness programs, like UVA’s, has been explicitly tied to the PPACA. Gohmert is absolutely right on this one.

    The entire point of Obamacare was to get healthy people to buy insurance so that sick people could get more affordable insurance. That is how this whole thing works. It’s too late to back away from that reality now.

    Does Obamacare prevent the free exercise of your religious beliefs?

    No. But it does stop you from forcing your beliefs on others. Employers that provide insurance have to offer policies that provide birth control to women. Religious organizations have been exempted from paying for this coverage but no one will ever be required to take birth control if their religion restricts it — they just can’t keep people from having access to this crucial, cost-saving medication for free.

    This is a matter of philosophy. Many liberals think that if an employer will not provide birth control coverage to his employees, he is “forcing” his religious views upon them (these liberals being under the impression that free birth control pills are a right). I, like many libertarians and conservatives (and independents), see it differently: that forcing someone to pay for something with which they have a moral qualm is violating their religious freedom. The Courts have yet to decide on this.

    I am reluctant to call something a “lie” when it’s a difference of opinion. Our government has made numerous allowances for religious beliefs in the past, including exemptions from vaccinations, the draft, taxes and anti-discrimination laws. We are still having a debate over how this applies to healthcare. Sorry, National Memo, that debate isn’t over yet.

    So let’s review. Of Gohmert’s 12 “lies”, the breakdown is like so:

    Lies: 4
    Debatable or TBD: 5
    Correct: 3
    Redundant: 1

    (You’ll note that’s 13 “lies”; apparently National Memo can’t count).

    So only 4 out of 13 are lies. Hey, even Ty Cobb only hit .366.

    Mathematical Malpractice: Focus Tested Numbers

    Tuesday, September 3rd, 2013

    One of the things I keep encountering in news, culture and politics are numbers that appear to be pulled out of thin air. Concrete numbers, based on actual data, are dangerous enough in the wrong hands. But when data get scarce, this doesn’t seem to intimidate advocates and some social scientists. They will simply commission a “study” that produces, in essence, any number they want.

    What is striking is that the numbers seem to be selected with the diligent care and skill that the methods lack.

    The first time I became aware of this was with Bill Clinton. According to his critics — and I can’t find a link on this so it’s possibly apocryphal — when Bill Clinton initiated competency tests for Arkansas teachers, a massive fraction failed. He knew the union would blow their stack if the true numbers were released so he had focus groups convened to figure out what percentage of failures was expected, then had the test curved so that the results met the expectation.

    As I said, I can’t find a reference for that. I seem to remember hearing it from Limbaugh, so it may be a garbled version (I can find lawsuits about racial discrimination in the testing, so it’s possibly a mangled version of that). But the story stuck with me to the point where I remember it twenty years later. And the reason it stuck is because:

  • It sounds like the sort of thing politicians and political activists would do.
  • It would be amazingly easy to do.
  • Our media are so lazy that you could probably get away with it.
    Since then, I’ve seen other numbers which I call “focus tested numbers” even though they may not have been run by focus groups. But they strike me as numbers derived by someone coming up with the number first and devising the methodology second. The first part is the critical one. Whatever the issue is, you have to come up with a number that is plausible and alarming without being ridiculous. Then you figure out the methods to get the number.

    Let’s just take an example. The first time I became aware of the work of Maggie McNeill was her thorough debunking of the claim that 200,000 underage girls are trafficked for sex in the United States. You should read that article, which comes to an estimate of about 15,000 total underage prostitutes (most of whom are 16 or 17) and only a few hundred to a few thousand who are trafficked in any meaningful sense of that word. That does not make the problem less important, but it does make it less panic-inducing.

    But the 200,000 number jumped out at me. Here’s my very first comment on Maggie’s blog and her response:

    Me: Does anyone know where the 100,000 estimate comes from? What research it’s based on?

    It’s so close to 1% [of total underage girls] that I suspect it may be as simple as that. We saw a similar thing in the 1980′s when Mitch Snyder claimed (and the media mindlessly repeated) that three million Americans were homeless (5-10 times the estimates from people who’d done their homework). It turned out the entire basis of that claim was that three million was 1% of the population.

    This is typical of the media. The most hysterical claim gets the most attention. If ten researchers estimates there are maybe 20,000 underage prostitutes and one big-mouth estimates there are 300,000, guess who gets a guest spot on CNN?

    —–

    Maggie: Honestly, I think 100,000 is just a good large number which sounds impressive and is too large for most people to really comprehend as a whole. The 300,000 figure appears to be a modification of a figure from a government report which claimed that something like 287,000 minors were “at risk” from “sexual exploitation” (though neither term was clearly defined and no study was produced to justify the wild-ass guess). It’s like that game “gossip” we played as children; 287,000 becomes 300,000, “at risk” becomes “currently involved” and “sexual exploitation” becomes “sex trafficking”. :-(

    The study claimed that 100-300,000 girls were “at risk” of exploitation but defined “at risk” so loosely that simply living near a border put someone at risk. With such methods, the authors could basically claim any number they wanted. After reading that analysis and picking my jaw up off of the floor, I wondered why anyone would do it that way.

    And then it struck me: because the method wasn’t the point; the result was. Even the result wasn’t the point; the issue they wanted to advocate was. The care was not in the method: it was in the number. If they had said that there were a couple of thousand underage children in danger, people would have said, “Oh, OK. That sounds like something we can deal with using existing policies and smarter policing.” Or even worse, they might have said, “Well, why don’t we legalize sex work for adults and concentrate on saving these children?” If they had claimed a million children were in danger, people would have laughed. But claim 100-300,000? That’s enough to alarm people into action without making them laugh. It’s in the sweet spot between the “Oh, is that all?” number of a couple thousand and the “Oh, that’s bullshit” number of a million.

    Another great example was the number SOPA supporters bruited about to support their vile legislation. Julian Sanchez details the mathematical malpractice here. At first, they claimed that $250 billion was lost to piracy every year. That number — based on complete garbage — was so ridiculous they had to revise it down to $58 billion. Again, notice how well-picked that number is. At $250 billion, people laughed. If they had gone with a more realistic estimate — a few billion, most likely — no one would have supported such draconian legislation. But $58 billion? That’s enough to alarm people, not enough to make them laugh and — most importantly — not enough to make the media do their damn job and check it out.

    I encountered it again today. The EU is proposing to put speed limiters on cars. Their claim is this will cut traffic deaths by a third. Now, we actually do have some data on this. When the national speed limit was introduced in America, traffic fatalities initially fell about 20%, but then slowly returned to normal. They began falling again, bumped up a bit when Congress loosened the law, then leveled out in the 90′s and early 00′s after Congress completely repealed the national speed limit. The fatality rate has plunged over the last few years and is currently 40% below the 1970′s peak — without a speed limit.

    That’s just raw numbers, of course. In real terms — per million vehicle miles driven — fatalities have plunged almost 75% over the last forty years, with no visible effect from the speed limit law. Of course, more cars contain single drivers than ever before. But even on a per capita basis, car fatalities are half of what they once were.
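    The per-mile arithmetic is easy to sanity-check. Here is a rough sketch; the death and mileage totals below are my own approximate NHTSA-style figures, supplied for illustration, not numbers from this post:

```python
# Fatalities per 100 million vehicle miles traveled (VMT), then vs. now.
# These totals are approximations I'm supplying, not data from the post.
deaths_1970, vmt_1970 = 52_627, 1_109_724  # VMT in millions of miles
deaths_2012, vmt_2012 = 33_561, 2_968_815

def rate_per_100m_miles(deaths, vmt_millions):
    """Fatality rate per 100 million vehicle miles traveled."""
    return deaths / (vmt_millions / 100.0)

r_then = rate_per_100m_miles(deaths_1970, vmt_1970)
r_now = rate_per_100m_miles(deaths_2012, vmt_2012)
print(f"1970: {r_then:.2f}, 2012: {r_now:.2f}, decline: {1 - r_now / r_then:.0%}")
```

With numbers in this ballpark, the per-mile rate falls by roughly three-quarters even though raw deaths fall much less, which is the distinction the paragraph above is drawing.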

    That’s real, measurable progress. Unfortunately for the speed limiters, it’s the result of improved technology and better enforcement of drunk driving laws.

    So the claim that deaths from road accidents will plunge by a third because of speed limits is simply not supported by data in the United States. They might plunge as technology, better roads and laws against drunk driving spread to Eastern Europe. And I’m sure one of the reasons they are pushing for speed limits is that they can claim credit for that inevitable improvement. But a one-third decline is just not realistic.

    No, I suspect that this is a focus tested number. If they claimed fatalities would plunge by half, people would laugh. If they claimed 1-2%, no one would care. But one-third? That’s in the sweet spot.

    August Linkorama

    Thursday, August 8th, 2013

    Time to clear out a few things I don’t have time to write lengthy posts about.

  • I’m tickled that Netflix garnered Emmy nominations. Notice that none of the nominated dramas are from the major networks. Their reign of terror is ending.
  • This look at Stand Your Ground laws goes state by state to see if murder rates went up. I find it far more convincing than the confusing principal component analysis being cited. Also, check out this analysis of the complicated relationship these laws have with race.
  • Speaking of guns, we have yet another case of Mathematical Malpractice. Business Insider claims California’s gun laws have dramatically dropped the rate of gun violence. But their lead graphic shows California’s rate of gun violence has fallen … about as much as the rest of the country’s.

    Mother Jones Doesn’t Know Data

    Wednesday, July 31st, 2013

    You know, you could probably carve out a career in responding to Mother Jones’ twisting and distorting of gun death data. Today brings another wonderful example. Hopping on the rather hysterical claim that gun deaths are close to exceeding traffic deaths, they look at it on a state-by-state basis and conclude that “It’s little surprise that many of these states—including Alaska, Arizona, Colorado, Indiana, Utah, and Virginia—are notorious for lax gun laws.”

    Look at the map. Then look at this one, which shows the Brady Campaign’s scorecard for state gun laws. The states where gun deaths exceed traffic deaths are Alaska (Brady score 0), Washington (48), Oregon (38), California (81!!), Nevada (5), Utah (0), Arizona (0), Colorado (15), Missouri (4), Illinois (35), Louisiana (2), Michigan (25), Ohio (7) and Virginia (12). Of the 14 states, half have Brady scores of 12 or higher and California has the most restrictive gun laws in the nation.

    Going by rate of gun ownership, the states are Alaska (3rd highest gun ownership rate in the nation), Washington (33), Oregon (28), California (44), Nevada (38), Utah (16), Arizona (32), Colorado (36), Missouri (15), Illinois (43), Louisiana (13), Michigan (27), Ohio (37) and Virginia (35). In other words, the states where gun deaths exceed traffic deaths are just as likely to have a low gun ownership rate as a high one.
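    For what it’s worth, here’s a quick tally of the Brady scores listed above, treating a score of 12 or higher as the (admittedly arbitrary) cutoff for “non-trivial gun laws”:

```python
# Brady Campaign scores for the 14 states where gun deaths exceed traffic
# deaths, as listed in the post (higher score = stricter gun laws).
brady = {
    "Alaska": 0, "Washington": 48, "Oregon": 38, "California": 81,
    "Nevada": 5, "Utah": 0, "Arizona": 0, "Colorado": 15,
    "Missouri": 4, "Illinois": 35, "Louisiana": 2, "Michigan": 25,
    "Ohio": 7, "Virginia": 12,
}
strict = [state for state, score in brady.items() if score >= 12]
print(len(strict), "of", len(brady))  # 7 of 14 states clear the bar
```

Exactly half the states clear even that modest bar, which is the opposite of what a “lax gun laws” story predicts.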

    Oops.

    Moreover, the entire “guns are killing more than cars” meme is garbage to begin with. Gun deaths, as I have said in every single post on this subject, have fallen over the last twenty years. The thing is that traffic deaths have fallen even faster. The gun grabbers might have had a point back in 1991, when we had a spike in gun deaths that caused them to almost exceed traffic deaths. But they don’t now because both rates are down, way down. Traffic fatalities, in particular, plunged dramatically in the mid-00′s.

    A real analysis of the data would look at both trends to see if better drunk driving laws or seatbelt laws or whatever are also playing a role here. But Mother Jones isn’t interested in that (for the moment). What they are interested in is stoking panic about guns.

    (Notice also that MJ illustrates their graph with a picture of an assault rifle, even though assault rifles are responsible for a tiny fraction of gun deaths.)

    Mathematical Malpractice Watch: Et Tu, Reason?

    Sunday, June 30th, 2013

    Oh, no, not you, Best Magazine on the Planet:

    The growth of federal regulations over the past six decades has cut U.S. economic growth by an average of 2 percentage points per year, according to a new study in the Journal of Economic Growth. As a result, the average American household receives about $277,000 less annually than it would have gotten in the absence of six decades of accumulated regulations—a median household income of $330,000 instead of the $53,000 we get now.

    You know, I hate it when people play games with numbers and I won’t put up with it from my side. I agree with Reason’s general point that we are over-regulated and badly regulated and that it is hurting our economy. Even the most conservative estimates indicate that bad regulation is sucking hundreds of billions out of the economy — and that’s accounting for the positive effects of regulation.

    But the claim that we would be four times richer if it weren’t for regulation is garbage. As Bailey notes in the article, growth in the US economy over the last half century has averaged about 3.2 percent. Without regulation, according to this study, it would have been 5.2 percent, which is far higher than the US has ever sustained over any extended period of time, even before the progressive era. And because that wild over-estimate compounds exponentially, it results in an economy that would be four times what we have now; four times what any large country would have now. The hypothetical US would be as wealthy, relative to the real US, as the real US is to Serbia. Does anyone really think that without regulation we would be producing four times the goods and services we do now?

    Even if we assume that we could produce an ideally regulated society, regulation is not the only limit on the economy. Other factors — birth rate, immigration, war, business cycles, education, technological progress, social unrest and the economic success of other countries — play a role. A perfectly regulated society would most likely move from a position where its growth was limited by regulation to a position where its growth was limited by other factors (assuming this is not already the case).

    The paper is very long and complicated so I can’t dissect where their economic model goes wrong. But I will point out that no country in history, including the United States, has ever had half a century of 5% economic growth. Even countries with far less regulation and far more economic freedom than we have do not show the kind of explosive growth they project. In the absence of any real-life example showing that regulatory restraint can produce this kind of growth, we can’t accept numbers that are so ridiculous.

    Other studies, as Reason notes, estimate the impact of regulation as being something like 10-20% of our economy. That would require that regulation knock down our economic growth by 0.3% per year, which seems much more reasonable.
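    The compounding arithmetic behind both claims is worth spelling out. A rough sketch using the growth rates above (the 60-year span is my assumption; the exact multiplier depends on the baseline years chosen):

```python
# A 2-percentage-point growth gap, compounded over six decades.
years = 60
actual = 1.032 ** years        # ~3.2% average growth, as cited above
hypothetical = 1.052 ** years  # the study's regulation-free counterfactual
print(f"hypothetical economy is {hypothetical / actual:.1f}x the actual one")

# Conversely, if regulation's total cost is 10-20% of GDP, the implied
# annual drag is tiny: for a 20% total cost over the same span,
drag = 1 - (1 / 1.2) ** (1 / years)
print(f"implied annual drag: {drag:.2%}")  # ~0.3 percentage points per year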

    (H/T: Maggie McNeill, although she might not like where I went with this one.)

    Late May Linkorama

    Tuesday, May 28th, 2013
  • A brief bit of mathematical malpractice, although not a deliberate one. The usually smart Sarah Kliff cites a study of an ER that showed employees spent nearly 5,000 minutes on Facebook. Of course, over 68 computers and 15 days, that works out to about five minutes per day per computer, which … really isn’t that much.
  • What’s interesting about the Netflix purge is that many of the studios are pulling movies to start their own streaming services. This is idiotic. I’m pretty tech savvy and I have no desire to have 74 apps on my iPad, one for each studio. If I want to watch a movie, I’m going to Netflix or Amazon or iTunes, not a studio app (that I have to pay another subscription fee for). In fact, many days my streaming is defined by opening up the Netflix app and seeing what intrigues me.
  • We got into this on Twitter. The NYT ran an article about how little nutrition our food has. Of course, they defined “nutritional content” as the amount of pigment, which has dubious nutritional value (aside from anti-oxidant value; so, no nutritional value). As Kevin Wilson said, according to the graph, the value of blue corn is that it is blue and not yellow.
  • While we’re on the subject of nutrition, it turns out that low sodium intake may not only not be beneficial, it may even be harmful. I’m slowly learning that almost everything we think we know about nutrition is shaky at best.
  • Ultra-conserved words. I am fascinated by language.
  • Wine tasting is bullshit.
  • How the peaceful loving people-friendly Soviet Union tried to militarize space.
  • The most remote places in each state.
  • Porn is not the problem. You are. More on how “sex addiction” is a made up disorder.
  • Meet the coins that could rewrite history. Every time we learn more about the past, we find out that our ancestors were smarter and more adventurous than we thought they were. And some people think they needed aliens to build the pyramids.

    Mother Jones Again. Actually Texas State

    Wednesday, May 22nd, 2013

    Mother Jones, not content with having run one of the more bogus studies on mass shootings (for which they boast about winning an award from Ithaca College), is crowing again about a new study out of Texas State. They claim that the study shows that mass shootings are rising, that available guns are the reason and that civilians never stop shootings.

    It’s too bad they didn’t read the paper very carefully. Because it supports none of those conclusions.

  • The Texas State study covers only 84 incidents. Their “trend” is that about half of these incidents happened in the last two years of the study. That is, again, an awfully small number to be drawing conclusions from.
  • The data are based on Lexis/Nexis searches. That is not nearly as thorough as James Alan Fox’s use of FBI crime stats and may measure media coverage more than actual events. They seem to have been reasonably thorough but they confirm their data from … other compilations.
  • Their analysis only covers the years 2000-2010. This conveniently leaves out 2011 (which had few incidents) and the entirety of the 80′s and 90′s, when crime rates were nearly twice what they are now. The word for this is “cherry picking”. Consider what their narrow year range means. If the next decade has fewer incidents, the “trend” becomes a spike. Had you done a similar study covering the years 1990-2000, using MJ’s graph, you would have concluded that mass shootings were rising then. But that would have been followed by five years with very few active shooter events. Look at Mother Jones’ graph again. You can see that mass shootings fell dramatically in the early 2000′s, then spiked up again. That looks like noise around a flat trend over a 30-year baseline. But when you analyze it the way the Blair study does, it looks like a trend. You know what this reminds me of? The bad version of global warming skepticism. Global warming “skeptics” will often show temperature graphs that start in 1998 (an unusually warm year) and run to the present to claim that there is no global warming. But if you look at the data for the last century, the long-term trend becomes readily apparent. As James Alan Fox has shown, the long-term trend is flat. What Mother Jones has done is jump on a study that really wasn’t intended to look at long-term trends and claim it confirms long-term trends.
  • Mother Jones says: “The unprecedented spike in these shootings came during the same four-year period, from 2009-12, that saw a wave of nearly 100 state laws making it easier to obtain, carry, and conceal firearms.” They ignore that the wave of gun law liberalization began in the 90′s, before the time span of this study.
  • MJ also notes that only three of the 84 attacks were stopped by the victims using guns. Ignored in their smugness is that a) that’s three times what Mother Jones earlier claimed over a much longer time baseline; b) the number of incidents stopped by the victims was actually 16, only three of which involved guns; and c) at least a third of the incidents happened in schools, where guns are forbidden.

    So, yeah. They’re still playing with tiny numbers and tiny ranges of data to draw unsupportable conclusions. To be fair, the authors of the study are a bit more circumspect in their analysis, which is focused on training for law enforcement in dealing with active shooter situations. But Mother Jones never feels under any compulsion to question their conclusions.
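    The cherry-picked-window effect is easy to demonstrate. A toy simulation with flat-on-average yearly counts (these are invented numbers for illustration, not the Texas State data):

```python
# Synthetic, flat-on-average incident counts per "year" (invented numbers,
# NOT the Texas State data) with a noise spike in the final two years.
counts = [4, 6, 3, 5, 7, 4, 2, 5, 6, 3,   # "1981-1990"
          5, 4, 6, 8, 3, 4, 5, 2, 6, 5,   # "1991-2000"
          3, 2, 4, 3, 5, 4, 6, 5, 9, 8]   # "2001-2010": spike at the end

def slope(ys):
    """Least-squares slope of ys against year index 0..n-1."""
    n = len(ys)
    mean_x, mean_y = (n - 1) / 2, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

print(f"30-year trend: {slope(counts):+.3f} incidents/year")           # ~flat
print(f"last-decade trend: {slope(counts[-10:]):+.3f} incidents/year")  # steep
```

Over the full 30-year baseline the fitted trend is essentially zero, but restrict the fit to the final decade, ending on a noise spike, and you get a steep apparent rise from the very same data.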

    (H/T: Christopher Mason)

    Update: You might wonder why I’m on about this subject. The reason is that I think almost any analysis of mass shootings is deliberately misleading. Over the last twenty years, gun homicides have declined 40% (PDF) and gun violence by 70%. This is the real data. This is what we should be paying attention to. By diverting our attention to these horrific mass killings, Mother Jones and their ilk are focusing on about one one-thousandth of the problem of gun violence because that’s the only way they can make it seem that we are in imminent danger.

    The thing is, Mother Jones does acknowledge the decline in violence in other contexts, such as claiming that the crackdown on lead has been responsible for the decline in violence. So when it suits them, they’ll freely acknowledge that violent crime has plunged. But when it comes to gun control, they pick a tiny sliver of gun violence to try to pretend that it’s not. And the tell, as I noted before, is that in their gun-control articles, they do not acknowledge the overall decline of violence.

    Using a fact when it suits your purposes and ignoring it when it doesn’t is pretty much the definition of hackery.

    Mother Jones Hacks Again

    Monday, February 18th, 2013

    A few weeks ago Mother Jones, having not learned the lesson of their absurd article claiming mass shootings are on the rise, published a list of 10 Myths about guns and gun control from Dave Gilson. And I’m going to debunk their debunking again because the article represents what I believe is one of the worst sins in the field of Mathematical Malpractice: cherry-picking. As I went through this, it became obvious that MJ was not interested in the facts, really. What was motivating them was the argument. And so they picked any study — no matter how small, how biased or how old — to support their point. They frequently ignore obvious objections and biases. And they sometimes ignore larger more detailed studies in favor of the smaller ones if it will support their contention.

    We see this a lot in the punditocracy, unfortunately. As Bill James said, most people use studies the way a drunk uses a lamppost — for support, not illumination. In any sufficiently advanced but difficult field of study, you will find multiple studies examining an issue. Let’s say it’s a supposed connection between watching Glee and having a heart attack. If there is, in reality, no connection between the two, you might find eight studies that show no connection, one that shows an anti-correlation and one that shows a correlation. This is fine. This is science. There are always outlier studies even if all the researchers are completely ethical and honest. The outliers fall away when your interest is the question and you look at all the evidence. But the outliers dominate the discussion from those who have an agenda.
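    Those outlier studies are the multiple-comparisons problem in slow motion, the same thing that bites subgroup analyses. A quick simulation makes the point (this is my own toy setup, standard normal data and a plain two-sample z-test, not any actual study’s method):

```python
import math
import random

def z_test_p(sample_a, sample_b):
    """Two-sided p-value for equal means, assuming known unit variance."""
    n = len(sample_a)
    z = (sum(sample_a) / n - sum(sample_b) / n) / math.sqrt(2.0 / n)
    return math.erfc(abs(z) / math.sqrt(2.0))  # P(|Z| > z) for standard normal

def spurious_hits(n_subgroups=20, n=100, alpha=0.05, rng=random):
    """Test n_subgroups slices of data with NO real effect; count 'significant' ones."""
    hits = 0
    for _ in range(n_subgroups):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(0.0, 1.0) for _ in range(n)]
        if z_test_p(a, b) < alpha:
            hits += 1
    return hits

random.seed(1)
# Slice a null dataset twenty ways and, on average, one slice "lights up".
print(spurious_hits())
```

Run twenty tests at the 95% level on data with no effect at all and you expect about one spurious “finding” — which is exactly what a motivated researcher, or a drunk looking for a lamppost, will report.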

    This happens a lot in the gun debate. On both sides, really. But Mother Jones’ article is a particularly putrid example of this because that’s basically all it does: collect the cherry-picked nonsensical studies that support their anti-gun agenda. It’s quite remarkable actually; almost a clinic in how not to do research.

    But here’s the one thing that really tips you off. There is one myth that Mother Jones does not debunk. It’s a myth that’s really independent of what you think of gun ownership … unless you’ve already staked part of your reputation and agenda on the myth that gun violence is increasing. In fact, all forms of violent crime have been falling for twenty years. This is, in my mind, the single most important fact in debates over crime and violence and the single most important myth to debunk.

    MJ does not address this myth. They don’t even talk about it. That is a huge tell.

    (more…)