Archive for the ‘Science and Edumacation’ Category

Ehrlich-Simon Round Two

Saturday, October 3rd, 2015

In 1980, doom-monger and perennially wrong person Paul Ehrlich made a wager with Julian Simon. Simon bet that the price of any commodities Ehrlich cared to nominate would go down over the next ten years due to increasing productivity. Ehrlich bet that they would go up because of the global pandemics and starvation that were supposed to happen but never did. At the end of the ten years, Ehrlich had to write Simon a check for $576.07.

Since the day the wager was concluded, Ehrlich and his fellow doom-sayers have been trying to weasel out of the result. You can see this on the Wikipedia page on the wager:

The prices of all five metals increased between 1950 and 1975, but Ehrlich believes three of the five went down during the 1980s because of the price of oil doubling in 1979, and because of a world-wide recession in the early 1980s.

Ehrlich might have a point if the entire motivation for the bet weren’t his prediction of a global catastrophe by the end of the 80′s.

Ehrlich would likely have won if the bet had been for a different ten-year period. Asset manager Jeremy Grantham wrote that if the Simon–Ehrlich wager had been for a longer period (from 1980 to 2011), then Simon would have lost on four of the five metals. He also noted that if the wager had been expanded to “all of the most important commodities,” instead of just five metals, over that longer period of 1980 to 2011, then Simon would have lost “by a lot.”

This doesn't mean much. In 2011, we were in the middle of a precious metal crunch that sent prices skyrocketing. Since then, prices have tumbled. All five metals are down since 2011 and all but chromium are down around 50%. I can't find good historical records, but copper, at least, is cheaper than it was in 1989, in inflation-adjusted terms. I'm also not clear what Grantham means by "all of the most important commodities". Oil is about the same as it was in 1980 in inflation-adjusted terms. Wheat is cheaper.

Moreover, even if commodity prices were higher, it's still because Ehrlich was wrong. Ehrlich was predicting prices would go higher because we would run out of things. But when prices did go higher, they did so because of increased global demand. They went higher because of the post-Cold War economic growth and development that has, so far, lifted two billion people out of poverty. This is growth that Ehrlich insisted (and still insists) was impossible.

(To be honest, I'm not sure why the Grantham quote is even on Wikipedia. Grantham is an investor, not an economist, and this quote was an off-hand remark in a quarterly newsletter about sound investment strategies, not a detailed analysis. The quote is so obscure and oblique to the larger issues that its inclusion strikes me as a desperate effort to defend Ehrlich's honor.)

What really amuses me about the Ehrlich-Simon bet, however, is that Ehrlich proposed a second wager. The second wager was not like the first. It was far more specific, far narrower and far more tailored to things that were not really in dispute. Simon rejected the second wager for the following reasons:

Let me characterize their offer as follows. I predict, and this is for real, that the average performances in the next Olympics will be better than those in the last Olympics. On average, the performances have gotten better, Olympics to Olympics, for a variety of reasons. What Ehrlich and others says is that they don’t want to bet on athletic performances, they want to bet on the conditions of the track, or the weather, or the officials, or any other such indirect measure.

Exactly. The two-and-a-half decades since the second wager have been the best in human history. We have had fewer wars than in any previous period. Fewer murders. Less disease. Less starvation. More wealth. Ehrlich's second wager didn't care about any of that. It was about details like global temperature and arable land and wild fish catches, not the big picture of human prosperity. Below, I will go through the specifics of the second wager and you will see, over and over again, how narrow Ehrlich's points were. In many cases, he would have "won" on issues where he really lost — correctly predicting trivial details of a trend that contradicted everything he was saying.

In fact, even if Simon had taken Ehrlich up on his useless, hideously biased bet, Ehrlich might have lost the bet anyway, depending on the criteria they agreed on. Let's go through the bet Ehrlich proposed, point by point:

  • The three years 2002–2004 will on average be warmer than 1992–1994. This would have been a win for Ehrlich, exaggerated by the Mt. Pinatubo cooling in 1993. So, 1-0 Ehrlich.
  • There will be more carbon dioxide in the atmosphere in 2004 than in 1994. Ehrlich wins again, 2-0. Neither of these was really in dispute, however.
  • There will be more nitrous oxide in the atmosphere in 2004 than 1994. Ehrlich wins again, 3-0. Again, this wasn’t really in dispute. Note, however, that this is basically three bets about the same thing: global warming. It would be as if Simon counted Tungsten three times in the original bet.
  • The concentration of ozone in the lower atmosphere (the troposphere) will be greater than in 1994. This took a lot of digging and I am happy to be corrected if I’m reading the literature wrong. But from what I can tell, tropospheric ozone has actually decreased or leveled off since 1994. At the very least, this pollutant has gotten under much firmer control. So, 3-1 Ehrlich.
  • Emissions of the air pollutant sulfur dioxide in Asia will be significantly greater in 2004 than in 1994. This is what I mean about how narrowly tailored the second wager was. Global SO2 emissions are down. In North America and Europe, they fell off a cliff. Only in Asia did they go up. In fact, it's really only in China. If it weren't for China, global SO2 emissions would have been down by 20% over the span of the bet. It's hard to call this a win for Ehrlich. 3-2.
  • There will be less fertile cropland per person in 2004 than in 1994. and There will be less agricultural soil per person in 2004 than 1994. Ehrlich really wanted to stack the deck in his favor, didn't he? We'll give him both of these as wins, but they really aren't. There's a reason we're using less land for growing crops and it's not because of environmental catastrophe. It's because …
  • There will be on average less rice and wheat grown per person in 2002–2004 than in 1992–1994. Boom. Not only is there not less rice and wheat per person being grown, there’s a hell of a lot more being grown. Thanks to improved farming methods and genetic engineering, today’s crops are hardier and more productive than ever. The reason Ehrlich “won” on the last two is because we don’t need as much land to feed people. So we’re at 5-3, but it’s a very cheap five for Ehrlich. So far, the only thing he was right on was that global warming would get worse.
  • In developing nations there will be less firewood available per person in 2004 than in 1994. I’m going to play Ehrlich’s game here. The key word here is “available”. From what I can tell, per capita firewood consumption is down in developing countries. But that’s not because of a crushing environmental crisis. It’s because fossil fuel consumption is way up. There’s plenty of firewood available. There’s a lot less need for it. This is once again Ehrlich focusing on trivia — how much firewood is available — rather than the big picture. Lots of firewood being burned is a bad thing. It means forests being gutted. It means indoor pollution. It means poverty. Fossil fuels aren’t exactly a panacea, obviously. But they’re way better than firewood. The people of the developing world are going to need natural gas if they’re ever going to get to solar power. 5-4.
  • The remaining area of virgin tropical moist forests will be significantly smaller in 2004 than in 1994. Once again, a very narrow criterion. I can’t find a source that is that specific. Tropical rainforest destruction has leveled off recently, thanks to prosperity. But I have little doubt Ehrlich would have won this. 6-4.
  • The oceanic fishery harvest per person will continue its downward trend and thus in 2004 will be smaller than in 1994. Again, note how narrow the criterion is. The amount of fish per person is actually up. But that's because of a huge surge in farm fishing. Ehrlich only counts wild fish, whose total catch has not decreased but is lower on a per capita basis. How do we score that? I wouldn't give him SO2, so I'll give him wild fish. 7-4.
  • There will be fewer plant and animal species still extant in 2004 than in 1994. Sure. 8-4.
  • More people will die of AIDS in 2004 than in 1994. Ehrlich would have won this one. But it's a narrow win. AIDS deaths peaked in 2005. Since then, they have fallen by a third. 9-4.
  • Between 1994 and 2004, sperm cell counts of human males will continue to decline and reproductive disorders will continue to increase. Nope. People aren’t even sure the original research is correct. At the very least, the picture is mixed. Reproductive disorders are probably increasing but that’s because of people waiting longer to have children because of … wait for it … increasing prosperity. 9-5.
  • The gap in wealth between the richest 10% of humanity and the poorest 10% will be greater in 2004 than in 1994. Nope. There's concern about wealth inequality in developed countries. But, on a global scale, wealth inequality has been in a decades-long decline. 9-6.
    So … with Ehrlich counting global warming three times … with Ehrlich tailoring the questions incredibly narrowly … with Ehrlich betting twice on global farmland … Ehrlich wins the bet. But it's a hollow victory. Ehrlich wins on nine of the bets, but only two or three of the larger issues. Had Simon taken him up on it and tweaked a few questions — going to global SO2 emissions or global fishing catches or one measure for global warming — Ehrlich would have lost and lost badly.

    That’s the point here. If you put aside the details, the second wager amounted to Ehrlich betting that the disaster that failed to happen in the 80′s would happen in the 90′s. He was wrong. Again. And if you “extended the bet” the way his defenders do for the initial bet, he was even wronger.

    And yet … he’s still a hero to the environmental movement. It just goes to show you that there’s no path to fame easier than being wrong with authority.

    More on Vaccination

    Friday, May 15th, 2015


    One issue that I am fairly militant about is vaccination. Vaccines are arguably the greatest invention in human history. Vaccines made smallpox, a disease that slaughtered billions, extinct. Polio, which used to maim and kill millions, is on the brink of extinction. A couple of weeks ago, rubella was eliminated from the Americas:

    After 15 years of a widespread vaccination campaign with the MMR (measles-mumps-rubella) vaccine, the Pan American Health Organization and the World Health Organization announced yesterday that rubella no longer circulates in the Americas. The only way a person could catch it is if they are visiting another country or if it is imported into a North, Central or South American country.

    Rubella, also known as German measles, was previously among a pregnant woman’s greatest fears. Although it’s generally a mild disease in children and young adults, the virus wreaks the most damage when a pregnant woman catches it because the virus can cross the placenta to the fetus, increasing the risk for congenital rubella syndrome.

    Congenital rubella syndrome can cause miscarriage or stillbirth, but even the infants who survive are likely to have birth defects, heart problems, blindness, deafness, brain damage, bone and growth problems, intellectual disability or damage to the liver and spleen.

    Rubella used to cause tens of thousands of miscarriages and birth defects every year. Now it too could be pushed to extinction.

    Of course, many deadly diseases are now coming back thanks to people refusing to vaccinate their kids. There is an effort to blame this on "anti-government" sentiment. But while that plays a role, the bigger role is played by liberal parents who think vaccines cause autism (you'll notice we're getting outbreaks in California, not Alabama). As I've noted before, the original research that showed a link between vaccines and autism is now known to have been a fraud. Recently, we got even more proof:

    On the heels of a measles outbreak in California fueled by vaccination fears that scientists call unfounded, another large study has shown no link between the measles-mumps-rubella vaccine and autism.

    The study examined insurance claims for 96,000 U.S. children born between 2001 and 2007, and found that those who received MMR vaccine didn’t develop autism at a higher rate than unvaccinated children, according to results published Tuesday by the Journal of the American Medical Association, or JAMA. Even children who had older siblings with autism—a group considered at high risk for the disorder—didn’t have increased odds of developing autism after receiving the vaccine, compared with unvaccinated children with autistic older siblings.

    96,000 kids — literally 8,000 times the size of the sample Wakefield had. No study has ever reproduced Wakefield's results. That's because no other study has been a complete fraud.

    There’s something else, though. This issue became somewhat personal for me recently. My son Ben came down with a bad cough, a high fever and vomiting. He was eventually admitted to the hospital for a couple of days with pneumonia, mainly to get rehydrated. He’s fine now and playing in the next room as I write this. But it was scary.

    I mention this because one of the first questions the nurses and doctors asked us was, “Has he been vaccinated?”

    My father, the surgeon, likes to say that medicine is as much art as science. You can know the textbooks by heart. But the early symptoms of serious diseases and not-so-serious ones are often similar. An inflamed appendix can look like benign belly pain. Pneumonia can look like a cold. "Flu-like symptoms" can be the early phase of anything from a bad cold to ebola. But doctors mostly get it right because experience with sick people has honed their instincts. They might not be able to tell you why they know it's not just a cold, but they can tell you that it isn't (with Ben, the doctor's instinct told him it wasn't croup and he ordered a chest X-ray that spotted the pneumonia).

    Most doctors today have never seen measles. Or mumps. Or rubella. Or polio. Or anything else we routinely vaccinate for. Thus, they haven’t built up the experience to recognize these conditions. Orac, the writer of the Respectful Insolence blog, told me of a sick child who had Hib. It was only recognized because an older doctor had seen it before.

    When I told the doctors Ben had been vaccinated, their faces filled with relief. Because it meant that they didn’t have to think about a vast and unfamiliar terrain of diseases that are mostly eradicated. It wasn’t impossible that he would have a disease he was vaccinated against — vaccines aren’t 100%. But it was far less likely. They could narrow their focus on a much smaller array of possibilities.

    Medicine is difficult. The human body doesn’t work like it does in a textbook. You don’t punch symptoms into a computer and come up with a diagnosis. Doctors and nurses are often struggling to figure out what’s wrong with a patient let alone how to treat it. Don’t cloud the waters even further by making them have to worry about diseases they’ve never seen before.

    Vaccinate. Take part in the greatest triumph in human history. Not just to finally rid ourselves of these hideous diseases but to make life much easier when someone does get sick.

    Movie Review: Interstellar

    Saturday, May 9th, 2015

    So far, I have seen five of last year’s Best Picture nominees — Birdman, Boyhood, The Grand Budapest Hotel, The Imitation Game and Whiplash. I’ve also seen a few other 2014 films — Gone Girl, Guardians of the Galaxy and The Edge of Tomorrow — that rank well on IMDB. I’ll have a post at some point about all of them when I look at 2014 in film. But right now, they would all be running behind Interstellar, which I watched last night.

    I try very hard to mute my hopes for movies but I had been anticipating Interstellar since the first teaser came out. I'm glad to report that it's yet another triumph for Nolan. The film is simply excellent. The visuals are spectacular and clear, the characters are well developed, and the minimalist score is one of Zimmer's best so far. The ending and the resolution of the plot could be argued with, but it's unusual for me to watch a three-hour movie in one sitting unless it's Lord of the Rings. I definitely recommend it, especially to those who are fans of 2001 or Tree of Life.

    That’s not the reason I’m writing about it though.

    One of the remarkable things about Interstellar is that it works very hard to get the science right. There are a few missteps, usually for dramatic reasons. For example, the blight affecting Earth works far faster than it would in real life. The spacecraft seem to have enormous amounts of fuel for planetary landings. The astronauts don’t use probes and unmanned landers to investigate planets before landing. And, as I mentioned, the resolution of the plot ventures well into the realm of science fiction and pretty much into fantasy.

    But, most of the film is beautifully accurate. The plan to save Earth (and the backup plan) is a realistic approach. Trips through the stellar systems take months or years. Spacecraft have to rotate to create gravity (including a wonderful O’Neill Cylinder). Space is silent — an aesthetic I notice is catching on in sci-fi films as directors figure out how eerie silence is. General and special relativity play huge roles in the plot. Astrophysicist Kip Thorne insisted on being as scientifically accurate as possible and it shows.

    And the result is a better film. The emotional thrust of Cooper's character arc is entirely built on the cruel tricks relativity plays on him. The resolution of Dr. Mann's arc is built entirely on rock solid physics, including the daring stunt Coop uses to save the day. The incredible sequences near the black hole could be taken right out of a physics textbook, including a decision that recalls The Cold Equations.

    We’re seeing this idea trickle into more and more of science fiction. Battlestar Galactica had muted sounds in space. Moon has reasonably accurate scientific ideas. Her had a sound approach to AI. Serenity has a silent combat scene in space, as did, for a moment, Star Trek. Gravity has some serious issues with orbital dynamics, but much of the rest was rock solid.

    I'm hoping this will continue, especially if the rumors of a Forever War movie are true. A science fiction movie doesn't need accurate science to be good. In fact, it can throw science out the window and be great (e.g., Star Wars). But I hope that Interstellar blazes a path for more science fiction movies that are grounded, however shakily at times, in real science. This could breathe new life into a genre that's been growing staler with every passing year.

    I don't say this as an astrophysicist (one available for consultation for any aspiring filmmakers). I say this as a movie buff. I say this as someone who loves good movies and thinks great movies can be made that show science in all its beautiful, glorious and heart-stopping accuracy.

    Post Scriptum: Many of my fellow astronomers disagree with me on Interstellar, both on the quality of the film and its scientific accuracy. You can check out fellow UVa alum Phil Plait here, although note that in saying it got the science wrong, he actually got the science wrong. Pro Tip: if you’re going to say Kip Thorne got the science wrong, be sure to do your homework.

    How Many Women?

    Saturday, November 1st, 2014

    Campus sexual violence continues to be a topic of discussion, as it should be. I have a post going up on the other site about the kangaroo court system that calls itself campus justice.

    But in the course of this discussion, a bunch of statistical BS has emerged. This centers on just how common sexual violence is on college campuses, with estimates ranging from the one-in-five stat that has been touted, in various forms, since the 1980′s, to a 0.2 percent rate touted in a recent op-ed.

    Let’s tackle that last one first.

    According to the FBI “[t]he rate of forcible rapes in 2012 was estimated at 52.9 per 100,000 female inhabitants.”

    Assuming that all American women are uniformly at risk, this means the average American woman has a 0.0529 percent chance of being raped each year, or a 99.9471 percent chance of not being raped each year. That means the probability the average American woman is never raped over a 50-year period is 97.4 percent (0.999471 raised to the power 50). Over 4 years of college, it is 99.8 percent.

    Thus the probability that an American woman is raped in her lifetime is 2.6 percent and in college 0.2 percent — 5 to 100 times less than the estimates broadcast by the media and public officials.
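
    The arithmetic in that op-ed is easy to reproduce; here is a minimal sketch of the compounding calculation it describes, using only the numbers quoted above:

```python
# Reproduce the op-ed's compounding calculation from the FBI's 2012 rate.
annual_rate = 52.9 / 100_000      # forcible rapes per woman per year
p_safe_year = 1 - annual_rate     # chance of not being victimized in a given year

lifetime_risk = 1 - p_safe_year ** 50   # over 50 years
college_risk = 1 - p_safe_year ** 4     # over 4 years of college

print(f"50-year risk: {lifetime_risk:.1%}")  # 2.6%
print(f"4-year risk:  {college_risk:.1%}")   # 0.2%
```

    The arithmetic checks out; as explained below, the problem is the inputs.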

    This estimate is way too low. It is based on taking one number and applying high school math to it. It misses the mark because it uses the wrong numbers and some poor assumptions.

    First of all, the FBI's stats cover documented forcible rapes only; they do not account for under-reporting and do not include sexual assault. The better comparison is the National Crime Victimization Survey, which estimates about 300,000 rapes or sexual assaults in 2013 for an incidence rate of 1.1 per thousand. But even that number needs some correction because about 2/3 of sexual violence is visited upon women between the ages of 12 and 30 and about a third falls on college-age women. The NCVS rate indicates about a 10% lifetime risk or about a 3% college-age risk for American women. This is lower than the 1-in-5 stat but much higher than 1-in-500.
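
    To see how much the inputs matter, here is a rough sketch of the same compounding approach applied to the NCVS numbers. The population figures are round-number assumptions of mine for illustration, not numbers from the post:

```python
# Re-run the compounding approach with NCVS-style inputs.
# The population sizes below are illustrative assumptions, not data.
incidents = 300_000   # NCVS estimate of rapes/sexual assaults in 2013

def cumulative_risk(share_of_incidents, population, years):
    annual_risk = incidents * share_of_incidents / population
    return 1 - (1 - annual_risk) ** years

# Roughly 2/3 of incidents fall on women aged ~12-30 (assumed population 40 million),
# and roughly 1/3 on college-age women (assumed population 15 million).
print(f"ages 12-30, over 19 years: {cumulative_risk(2/3, 40e6, 19):.0%}")   # ~9%
print(f"college-age, over 5 years: {cumulative_risk(1/3, 15e6, 5):.0%}")    # ~3%
```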

    (The NCVS survey shows a jump in sexual violence in the 2000′s. That's not because sexual violence surged; it's because they changed their methodology, which increased their estimates by about 20%.)

    So what about 1-in-5? I’ve talked about this before, but it’s worth going over again: the one-in-five stat is almost certainly a wild overestimate:

    The statistic comes from a 2007 Campus Sexual Assault study conducted by the National Institute of Justice, a division of the Justice Department. The researchers made clear that the study consisted of students from just two universities, but some politicians ignored that for their talking point, choosing instead to apply the small sample across all U.S. college campuses.

    The CSA study was actually an online survey that took 15 minutes to complete, and the 5,446 undergraduate women who participated were provided a $10 Amazon gift card. Men participated too, but their answers weren’t included in the one-in-five statistic.

    If 5,446 sounds like a high number, it’s not — the researchers acknowledged that it was actually a low response rate.

    But a lot of those responses have to do with how the questions were worded. For example, the CSA study asked women whether they had sexual contact with someone while they were “unable to provide consent or stop what was happening because you were passed out, drugged, drunk, incapacitated or asleep?”

    The survey also asked the same question “about events that you think (but are not certain) happened.”

    That’s open to a lot of interpretation, as exemplified by a 2010 survey conducted by the U.S. Centers for Disease Control and Prevention, which found similar results.

    I've talked about the CDC study before and its deep flaws. Schow points out that the victimization rate they are claiming is far higher than the estimates from the National Crime Victimization Survey (NCVS), the FBI and the Rape, Abuse and Incest National Network (RAINN). All three of those agencies use much more rigorous data collection methods. NCVS does interviews and asks the question straight up: have you been raped or sexually assaulted? I would trust the research methods of these agencies, which have been doing this for decades, over a web survey of two colleges.

    Another survey recently emerged from MIT which claimed 1-in-6 women are sexually assaulted. But not only does this suffer from the same flaws as the CSA study (a web survey with voluntary participation), it doesn't even show what it's claimed to show:

    When it comes to experiences of sexual assault since starting at MIT:

  • 1 in 20 female undergraduates, 1 in 100 female graduate students, and zero male students reported being the victim of forced sexual penetration
  • 3 percent of female undergraduates, 1 percent of male undergraduates, and 1 percent of female grad students reported being forced to perform oral sex
  • 15 percent of female undergraduates, 4 percent of male undergraduates, 4 percent of female graduate students, and 1 percent of male graduate students reported having experienced “unwanted sexual touching or kissing”
  • All of these experiences are lumped together under the school’s definition of sexual assault.

    When students were asked to define their own experiences, 10 percent of female undergraduates, 2 percent of male undergraduates, three percent of female graduate students, and 1 percent of male graduate students said they had been sexually assaulted since coming to MIT. One percent of female graduate students, one percent of male undergraduates, and 5 percent of female undergraduates said they had been raped.

    Note that even with a biased study, the result is 1-in-10, not 1-in-5 or 1-in-6.

    OK, so web surveys are a bad way to do this. What is a good way? Mark Perry points out that the one-in-five stat is inconsistent with another number claimed by advocates of new policies: a reporting rate of 12%. If you assume a reporting rate near that and use the actual number of reported assaults on major campuses, you get a rate of around 3%.


    Further research is consistent with this rate. For example, here, we see that UT Austin has 21 reported incidents of sexual violence. That’s one in a thousand enrolled women. Texas A&M reported nine, one in three thousand women. Houston reported 11, one in 2000 women. If we are to believe the 1-in-5 stat, that’s a reporting rate of half a percent. A reporting rate of 10%, which is what most people accept, would mean … a 3-5% risk for five years of enrollment.
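
    The back-of-the-envelope math here is worth making explicit; a minimal sketch using the UT Austin figures above and an assumed 10% reporting rate (the enrollment number is my rough assumption, chosen to match the "one in a thousand" figure):

```python
# Back out an implied prevalence from reported incidents and a reporting rate.
reported_per_year = 21      # reported incidents of sexual violence at UT Austin
enrolled_women = 21_000     # assumed, to match "one in a thousand enrolled women"
reporting_rate = 0.10       # assumed reporting rate
years_enrolled = 5

annual_risk = (reported_per_year / reporting_rate) / enrolled_women
risk = 1 - (1 - annual_risk) ** years_enrolled
print(f"{risk:.0%} over five years of enrollment")   # about 5%
```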

    So … Mark Perry finds 3%. Texas schools show 3-5%. NCVS and RAINN stats indicate 2-5%. Basically, any time we use actual numbers based on objective surveys, we find the number of women who are in danger of sexual violence during their time on campus is 1-in-20, not 1-in-5.

    One other reason to disbelieve the 1-in-5 stat. Sexual violence in our society is down — way down. According to the Bureau of Justice Statistics, rape has fallen from 2.5 per thousand to 0.5 per thousand, an 80% decline. The FBI's data show a decline from 40 to about 25 per hundred thousand, a 40% decline (they don't account for reporting rate, which is likely to have risen). RAINN estimates that the rate has fallen 50% in just the last twenty years. That means 10 million fewer sexual assaults.

    Yet, for some reason, sexual assault rates on campus have not fallen, at least according to the favored research. They were claiming 1-in-5 in the 80′s and they are claiming 1-in-5 now. The sexual violence rate on campus might fall a little more slowly than the overall society because campus populations aren’t aging the way the general population is and sexual violence victims are mostly under 30. But it defies belief that the huge dramatic drops in violence and sexual violence everywhere in the world would somehow not be reflected on college campuses.

    Interestingly, the decline in sexual violence does appear if you polish the wax fruit a bit. The seminal Koss study of the 1980′s claimed that one-in-four women were assaulted or raped on college campuses. As Christina Hoff Sommers and Maggie McNeill pointed out, the actual rate was something like 8%. A current rate of 3-5% would indicate that sexual violence on campus has dropped in proportion to that of sexual violence in the broader society.

    It goes without saying, of course, that 3-5% of women experiencing sexual violence during their time at college is 3-5% too many. As institutions of enlightenment (supposedly), our college campuses should be safer than the rest of society. I support efforts to clamp down on campus sexual violence, although not in the form that it is currently taking, which I will address on the other site.

    But the 1-in-5 stat isn’t reality. It’s a poll-test number. It’s a number picked to be large enough to be scary but not so large as to be unbelievable. It is being used to advance an agenda that I believe will not really address the problem of sexual violence.

    Numbers mean things. As I've argued before, if one in five women on college campuses are being sexually assaulted, this suggests a much more radical course of action than one-in-twenty. It would suggest that we should shut down every college in the country since they are the most dangerous places for women in the entire United States. But 1-in-20 suggests that an overhaul of campus judiciary systems, better support for victims and expulsion of serial predators would do a lot to help.

    In other words, let’s keep on with the policies that have dropped sexual violence 50-80% in the last few decades.

    A Fishy Story

    Thursday, October 30th, 2014

    Clearing out some old posts.

    A while ago, I encountered a story on Amy Alkon’s site about a man fooled into fathering a child:

    Here’s how it happened, according to Houston Press. Joe Pressil began dating his girlfriend, Anetria, in 2005. They broke up in 2007 and, three months later, she told him she was pregnant with his child. Pressil was confused, since the couple had used birth control, but a paternity test proved that he was indeed the father. So Pressil let Anetria and the boys stay at his home and he agreed to pay child support.
    Fast forward to February of this year, when 36-year-old Pressil found a receipt – from a Houston sperm bank called Omni-Med Laboratories – for “cryopreservation of a sperm sample” (Pressil was listed as the patient although he had never been there). He called Omni-Med, which passed him along to its affiliated clinic Advanced Fertility. The clinic told Pressil that his “wife” had come into the clinic with his semen and they performed IVF with it, which is how Anetria got pregnant.

    The big question, of course, is how exactly did Anetria obtain Pressil's sperm without him knowing about it? Simple. She apparently saved their used condoms. Gag. (Anetria denies these claims.)

    “I couldn’t believe it could be done. I was very, very devastated. I couldn’t believe that this fertility clinic could actually do this without my consent, or without my even being there,” Pressil said, adding that artificial insemination is against his religious beliefs. “That’s a violation of myself, to what I believe in, to my religion, and just to my manhood,” Pressil said.

    I’ve now seen this story show up on a couple of other sites. The only links in Google are for the original claim and her denial. I can’t find out how it was resolved. But I suspect his claim was dismissed. The reason I suspect this is because his story is total bullshit.

    Here’s a conversation that has never happened:

    Patient: “Hi, I have this condom full of sperm. God knows how I got it or who it belongs to. Can you harvest my eggs and inject this into them?”

    Doctor: “No problem!”

    I've been through IVF (Ben was conceived naturally after two failed cycles). It is a very involved process. We had to have interviews, then get tests for venereal diseases and genetic conditions. I then had to show up and make my donation either on site or in a nearby hotel. And no, I was not allowed to bring in a condom. Condoms contain spermicides and lubricants that murder sperm and latex is not sperm's friend. Even in a sterile container, sperm cells don't last very long unless they are placed in a special refrigerator. Freezing sperm is a slow process that takes place in a solution that keeps the cells from shattering from ice crystal formation.

    And that’s only the technical side of the story. There’s also the legal issue that no clinic is going to expose themselves to a potential multi-million dollar lawsuit by using the sperm of a man they don’t have a consent form from.

    So, no, you can’t just have a man fill a condom, throw it in your freezer and get it injected into your eggs. It doesn’t work that way. This is why I believe the woman’s lawyer, who claims Pressil agreed to IVF and signed consent forms.

    I’ve seen the frozen sperm canard come up on TV shows and movies from time to time. It annoys me. This is something conjured up by people who haven’t done their research.

    Mother Jones Revisited

    Saturday, October 18th, 2014

    A couple of years ago, Mother Jones did a study of mass shootings which attempted to characterize these awful events. Some of their conclusions were robust — such as the finding that most mass shooters acquire their guns legally. However, their big finding — that mass shootings are on the rise — was highly suspect.

    Recently, they doubled down on this, proclaiming that Harvard researchers have confirmed their analysis1. The researchers used an interval analysis to look at the time differences between mass shootings and claim that the recent run of short intervals proves that mass shootings have tripled since 2011.2

    Fundamentally, there’s nothing wrong with the article. But practically, there is: they have applied a sophisticated technique to suspect data. This technique does not remove the problems of the original dataset. If anything, it exacerbates them.

    As I noted before, the principal problem with Mother Jones' claim that mass shootings were increasing was the database. It had a small number of incidents and was based on media reports, not on a complete data set pared down to a consistent sample. Incidents were left out or included based on arbitrary criteria. As a result, there may be mass shootings missing from the data, especially in the pre-internet era. This would bias the results.

    And that’s why the interval analysis is problematic. Interval analysis itself is useful. I’ve used it myself on variable stars. But there is one fundamental requirement: you have to have consistent data and you have to account for potential gaps in the data.

    Let’s say, for example, that I use interval analysis on my car-manufacturing company to see if we’re slowing down in our production of cars. That’s a good way of figuring out any problems. But I have to account for the days when the plant is closed and no cars are being made. Another example: let’s say I’m measuring the intervals between brightness peaks of a variable star. It will work well … if I account for those times when the telescope isn’t pointed at the star.

    Their interval analysis assumes that the data are complete. But I find that suspect given the way the data were collected and the huge gaps and massive dispersion of the early intervals. The early data are all over the place, with gaps as long as 500-800 days. Are we to believe that between 1984 and 1987, a time when violent crime was surging, there was only one mass shooting? The more recent data are far more consistent, with no gap greater than 200 days (and note how the data get really consistent when Mother Jones began tracking these events as they happened, rather than relying on archived media reports).
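
    To illustrate why completeness matters so much for interval analysis, here is a small simulation sketch with synthetic data (not the Mother Jones database): events occur at a constant rate, but some early events go unrecorded, and the measured gaps look like they are shrinking even though nothing has changed.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic record: events at a constant rate (~90 days apart) over ~30 years.
true_times = np.cumsum(rng.exponential(scale=90, size=120))

# Suppose half the events in the first 15 years were never recorded
# (e.g. thinner pre-internet media coverage), while recent events are all caught.
early = true_times < 15 * 365
kept = np.where(early, rng.random(true_times.size) < 0.5, True)
observed = true_times[kept]

gaps = np.diff(observed)
is_early_gap = observed[1:] < 15 * 365
print("mean gap, early years:", round(gaps[is_early_gap].mean()))   # inflated by missing events
print("mean gap, later years:", round(gaps[~is_early_gap].mean()))  # close to the true ~90 days
```

    The underlying rate never changes, but the incomplete early record makes it look as if events have become far more frequent recently.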

    Note that they also compare this to the average of 172 days. This is the basis of their claim that the rate of mass shootings has "tripled". But the distribution of gaps is very skewed, with a long tail of long intervals. The median gap is 94 days. Using the median would reduce their streak of 14 straight below-average points to 11 below-median points. It would also mean that mass shootings have increased by only 50%. Since 1999, the median is 60 days (and the average 130). Using that would reduce their streak of 14 straight short intervals to four and mean that mass shootings have been basically flat.
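
    The mean-versus-median point is easy to see with any right-skewed set of gaps; here is a tiny sketch with made-up numbers (not the actual intervals):

```python
import numpy as np

# Made-up, right-skewed intervals (days between events): mostly short gaps
# plus a few very long ones, roughly the shape described above.
gaps = np.array([30, 45, 60, 60, 75, 90, 94, 100, 110, 120, 150, 400, 550, 800])

print("mean:  ", gaps.mean())      # ~192 days, dragged upward by the long tail
print("median:", np.median(gaps))  # ~97 days, a better "typical" gap

# With a skewed distribution, most gaps sit below the mean by construction,
# so a run of "below-average" intervals is not remarkable on its own.
print("fraction below mean:  ", (gaps < gaps.mean()).mean())      # ~0.79
print("fraction below median:", (gaps < np.median(gaps)).mean())  # 0.5
```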

    The analysis I did two years ago was very simplistic — I looked at victims per year. That approach has its flaws but it has one big strength — it is less likely to be fooled by gaps in the data. Huge awful shootings dominate the number of victims and those are unlikely to have been missed in Mother Jones’ sample.

    Here is what you should do if you want to do this study properly. Start with a uniform database of shootings such as those provided by law enforcement agencies. Then go through the incidents, one by one, to see which ones meet your criteria.

    In Jesse Walker’s response to Mother Jones, in which he graciously quotes me at length, he notes that a study like this has been done:

    The best alternative measurement that I’m aware of comes from Grant Duwe, a criminologist at the Minnesota Department of Corrections. His definition of mass public shootings does not make the various one-time exceptions and other jerry-riggings that Siegel criticizes in the Mother Jones list; he simply keeps track of mass shootings that took place in public and were not a byproduct of some other crime, such as a robbery. And rather than beginning with a search of news accounts, with all the gaps and distortions that entails, he starts with the FBI’s Supplementary Homicide Reports to find out when and where mass killings happened, then looks for news reports to fill in the details. According to Duwe, the annual number of mass public shootings declined from 1999 to 2011, spiked in 2012, then regressed to the mean.

    (Walker’s article is one of those “you really should read the whole thing” things.)

    This doesn't really change anything I said two years ago. In 2012, we had an awful spate of mass shootings. But you can't draw the kind of conclusions Mother Jones wants to from rare and awful incidents. And it really doesn't matter what analysis technique you use.

    1. That these researchers are from Harvard is apparently a big deal to Mother Jones. As one of my colleagues used to say, "Well, if Harvard says it, it must be true."

    2. This is less alarming than it sounds. Even if we take their analysis at face value, we’re talking about six incidents a year instead of two for a total of about 30 extra deaths or about 0.2% of this country’s murder victims or about the same number of people that are crushed to death by their furniture. We’re also talking about two years of data and a dozen total incidents.

    Now You See the Bias Inherent in the System

    Thursday, September 11th, 2014

    When I was a graduate student, one of the big fields of study was the temperature of the cosmic microwave background. The studies were converging on a value of 2.7 degrees with increasing precision. In fact, they were converging a little too well, according to one scientist I worked with.

    If you measure something like the temperature of the cosmos, you will never get precisely the right answer. There is always some uncertainty (you measure 2.7, give or take a tenth of a degree) and sometimes a bias (you measure 2.9, give or take a tenth, when the true value is 2.7). So the results should span a range of values consistent with what we know about the limitations of the method and the technology. This scientist claimed that the range was too small. As he said, "You get the answer. And if it's not the answer you wanted, you smack your grad student and tell him to do it right next time."

    It's not that people were faking the data or tilting their analysis. It's that knowing the answer in advance can cause subtle confirmation biases. Any scientific analysis is going to have a bias — an analytical or instrumentation effect that throws off the answer. A huge amount of work is invested in ferreting out and correcting for these biases. But there is a danger when a scientist thinks he knows the answer in advance. If they are off from the consensus, they might pore through their data looking for some effect that biased the results. But if they are close, they won't look as carefully.
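
    Here is a toy simulation of that mechanism, with entirely made-up numbers: every measurement carries an unrecognized systematic bias, but the bias only gets hunted down and corrected when the answer lands uncomfortably far from the consensus value.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = consensus = 2.7
n_experiments = 1000

stat_err = 0.05                                   # quoted statistical uncertainty
bias = rng.normal(0, 0.10, n_experiments)         # unrecognized systematic bias per experiment
raw = true_value + bias + rng.normal(0, stat_err, n_experiments)

# Analysts scrutinize (and fully correct) the systematics only when the raw
# answer looks "wrong", i.e. more than two sigma from the consensus value.
suspicious = np.abs(raw - consensus) > 2 * stat_err
published = np.where(suspicious, raw - bias, raw)

print("scatter of raw results:      ", raw.std())        # ~0.11, the honest spread
print("scatter of published results:", published.std())  # much tighter than it should be
```

    The published values cluster around the consensus far more tightly than the real uncertainties warrant, which is exactly the "converging a little too well" pattern described above.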

    Megan McArdle flags two separate instances of this in the social sciences. The first is the long-standing claim that conservatives are authoritarian while liberals are not:

    Jonathan Haidt, one of my favorite social scientists, studies morality by presenting people with scenarios and asking whether what happened was wrong. Conservatives and liberals give strikingly different answers, with extreme liberals claiming to place virtually no value at all on things like group loyalty or sexual purity.

    In the ultra-liberal enclave I grew up in, the liberals were at least as fiercely tribal as any small-town Republican, though to be sure, the targets were different. Many of them knew no more about the nuts and bolts of evolution and other hot-button issues than your average creationist; they believed it on authority. And when it threatened to conflict with some sacred value, such as their beliefs about gender differences, many found evolutionary principles as easy to ignore as those creationists did. It is clearly true that liberals profess a moral code that excludes concerns about loyalty, honor, purity and obedience — but over the millennia, man has professed many ideals that are mostly honored in the breach.

    [Jeremy] Frimer is a researcher at the University of Winnipeg, and he decided to investigate. What he found is that liberals are actually very comfortable with authority and obedience — as long as the authorities are liberals (“should you obey an environmentalist?”). And that conservatives then became much less willing to go along with “the man in charge.”

    Frimer argues that conservatives tend to support authority because they think authority is conservative; liberals tend to oppose it for the same reason. Liberal or conservative, it seems, we’re all still human under the skin.

    Exactly. The deference to authority for conservatives and liberals depends on who is wielding said authority. If it’s a cop or a religious figure, conservatives tend to trust them and liberals are skeptical. If it’s a scientist or a professor, liberals tend to trust them and conservatives are rebellious.

    Let me give an example. Liberals love to cite the claim that 97% of climate scientists agree that global warming is real. In fact, this week they are having 97 hours of consensus where they have 97 quotes from scientists about global warming. But what is this but an appeal to authority? I don’t care if 100% of scientists agree on global warming: they still might be wrong. If there is something wrong with the temperature data (I don’t think there is) then they are all wrong.

    The thing is, that appeal to authority does get at something useful. You should accept that global warming is very likely real. But not because 97% of scientists agree. The "consensus" supporting global warming is about as interesting as a "consensus" opposing germ theory. It's the data supporting global warming that are convincing. And when scientists fall back on the data, not their authority, I become more convinced.

    If I told liberals that we should ignore Ferguson because 97% of cops thought the shooting was justified, they wouldn't say, "Oh, well that settles it." If I said that 97% of priests agreed that God exists, they wouldn't say, "Oh, well that settles it." Hell, this applies even to things that aren't terribly controversial. Liberals are more than happy to ignore the "consensus" on the unemployment effects of minimum wage hikes or the safety of GMO crops.

    I'm drifting from the point. The point is that the studies showing that conservatives are more "authoritarian" were biased. They only asked about certain authority figures, not all of them. And since this was what the mostly liberal social scientists expected, they didn't question it. McArdle gets into this in her second article, which takes on the claim that conservative views come from "low-effort thought," based on two small studies.

    In both studies, we’re talking about differences between groups of 18 to 19 students, and again, no mention of whether the issue might be disinhibition — “I’m too busy to give my professor the ‘right’ answer, rather than the one I actually believe” — rather than “low-effort thought.”

    I am reluctant to make sweeping generalizations about a very large group of people based on a single study. But I am reluctant indeed when it turns out those generalizations are based on 85 drunk people and 75 psychology students.

    I do not have a scientific study to back me up, but I hope that you’ll permit me a small observation anyway: We are all of us fond of low-effort thought. Just look at what people share on Facebook and Twitter. We like studies and facts that confirm what we already believe, especially when what we believe is that we are nicer, smarter and more rational than other people. We especially like to hear that when we are engaged in some sort of bruising contest with those wicked troglodytes — say, for political and cultural control of the country we both inhabit. When we are presented with what seems to be evidence for these propositions, we don’t tend to investigate it too closely. The temptation is common to all political persuasions, and it requires a constant mustering of will to resist it.

    One of these studies found that drunk students were more likely to express conservative views than sober ones and concluded that this was because it is easier to think conservatively when alcohol is inhibiting your thought process. The bias there is simply staggering. They didn't test the students before they started drinking (heavy drinkers might skew conservative). They didn't consider social disinhibition — which I have mentioned before in connection with studies claiming that hungry or "stupid" men like bigger breasts. This was a study designed with its conclusion in mind.

    All sciences are in danger of confirmation bias. My advisor was very good about side-stepping it. When we got the answer we expected, he would say, “something is wrong here” and make us go over the data again. But the social sciences seem more subject to confirmation bias for various reasons: the answers in the social sciences are more nebulous, the biases are more subtle, the “observer effect” is more real and, frankly, some social scientists lack the statistical acumen to parse data properly (see the Hurricane study discussed earlier this year). But I also think there is an increased danger because of the immediacy of the issues. No one has a personal stake in the time-resolved behavior of an active galactic nucleus. But people have very personal stakes in politics, economics and sexism.

    Megan also touches on what I've dubbed the Scientific Peter Principle: that a study garnering enormous amounts of attention is likely erroneous. The reason is that when you do something wrong in a study, it will usually manifest as a false result, not a null result. Null results are usually the result of doing your research right, not doing it wrong. Take the sexist hurricane study earlier this year. Had the scientists done their research correctly (limiting their data to post-1978 or doing a K-S test), they would have found no connection between the femininity of hurricane names and their deadliness. As a result, we would never have heard about it. In fact, other scientists may have already done that analysis and either not bothered to publish it or published it quietly.

    But because they did their analysis wrong — assigning an index to the names, only sub-sampling the data in ways that supported the hypothesis — they got a result. And because they had a surprising result, they got publicity.

    This happens quite a bit. The CDC got lots of headlines when they exaggerated the number of obesity deaths by a factor of 14. Scottish researchers got attention when they erroneously claimed that smoking bans were saving lives. The EPA got headlines when they deliberately biased their analysis to claim that second-hand smoke was killing thousands.

    Cognitive bias, in combination with the Scientific Peter Principle, is incredibly dangerous.

    Mathematical Malpractice Watch: Torturing the Data

    Thursday, August 28th, 2014

    There’s been a kerfuffle recently about a supposed CDC whistleblower who has revealed malfeasance in the primary CDC study that refuted the connection between vaccines and autism. Let’s put aside that the now-retracted Lancet study the anti-vaxxers tout as the smoking gun was a complete fraud. Let’s put aside that other studies have reached the same conclusion. Let’s just address the allegations at hand, which include a supposed cover up. These allegations are in a published paper (now under further review) and a truly revolting video from Andrew Wakefield — the disgraced author of the fraudulent Lancet study that set off this mess — that compares this “cover-up” to the Tuskegee experiments.

    According to the whistle-blower, his analysis shows that while most children do not have an increased risk of autism (which, incidentally, discredits Wakefield’s study), black males vaccinated before 36 months show a 240% increased risk (not 340, as has been claimed). You can catch the latest from Orac. Here’s the most important part:

    So is Hooker’s result valid? Was there really a 3.36-fold increased risk for autism in African-American males who received MMR vaccination before the age of 36 months in this dataset? Who knows? Hooker analyzed a dataset collected to be analyzed by a case-control method using a cohort design. Then he did multiple subset analyses, which, of course, are prone to false positives. As we also say, if you slice and dice the evidence more and more finely, eventually you will find apparent correlations that might or might not be real.

    In other words, what he did was slice and dice the sample to see if one of those slices would show a correlation. But by pure chance, one of those slices could show a correlation even if there wasn't one. As best illustrated in this cartoon, if you run twenty tests for something that has no real effect, the odds are roughly two to one that at least one of them will show a spurious correlation at the 95% confidence level. This is one of the reasons many scientists, especially geneticists, are turning to Bayesian analysis, which can account for this.
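
    The arithmetic behind that cartoon is worth spelling out; a minimal sketch:

```python
# Chance of at least one spurious "significant" result at the 95% level
# when running several independent subgroup tests on data with no real effect.
alpha = 0.05
for n_tests in (1, 5, 10, 20):
    p_any_false_positive = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:2d} tests -> {p_any_false_positive:.0%} chance of a false positive")
```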

    If you did a study of just a few African-American boys and found a connection between vaccination and autism, it would be the sort of preliminary shaky result you would use to justify looking at a larger sample … such as the full CDC study that the crackpot's own analysis shows refutes such a connection. To take a large comprehensive study, narrow it down to a small sample and then claim that the results of this small sample override those of the large one is ridiculous. It's the opposite of how epidemiology works (and there is no suggestion that there is something about African-American males that makes them more susceptible to vaccine-induced autism).

    This sort of ridiculous cherry-picking happens a lot, mostly in political contexts. Education reformers will pore over test results until they find that fifth graders slightly improved their reading scores and claim their reform is working. When the scores revert back the next year, they ignore it. Drug warriors will pore over drug stats and claim that a small drop in heroin use among people in their 20′s indicates that the War on Drugs is finally working. When it reverts back to normal, they ignore it.

    You can’t pick and choose little bits of data to support your theory. You have to be able to account for all of it. And you have to be aware of how often spurious results pop up even in the most objective and well-designed studies, especially when you parse the data finer and finer.

    But the anti-vaxxers don’t care about that. What they care about is proving that evil vaccines and Big Pharma are poisoning us. And however they have to torture the data to get there, that’s what they’ll do.

    Mathematical Malpractice Watch: Hurricanes

    Monday, June 2nd, 2014

    There's a new paper out that claims that hurricanes with female names tend to be deadlier than ones with male names, based on hurricane data going back to 1950. They attribute this to gender bias, the idea that people don't take hurricanes with female names seriously.

    No, this is not The Onion.

    I immediately suspected a bias. For one thing, even with their database, we’re talking about 92 events, many of which killed zero people. More important, all hurricanes had female names until 1979. What else was true before 1979? We had a lot less advanced warning of hurricanes. In fact, if you look up the deadliest hurricanes in history, they are all either from times before we named them or when hurricanes all had female names. In other words, they may just be measuring the decline in hurricane deadliness.

    Now it's possible that the authors use some sophisticated model that also accounts for hurricane strength. If so, that might mitigate my analysis. But I'm dubious. I downloaded their spreadsheet, which is available from the journal website. Here is what I found:

    Hurricanes before 1979 averaged 27 people killed.

    Hurricanes since 1979 averaged 16 people killed.

    Hurricanes since 1979 with male names averaged … 16 people killed.

    Hurricanes since 1979 with female names averaged … 16 people killed.
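
    A minimal sketch of that comparison, assuming the spreadsheet has been exported to CSV with columns for year, gender of the name and deaths (the file and column names here are hypothetical):

```python
import pandas as pd

# File and column names are assumptions for illustration; the actual
# spreadsheet from the journal website may label things differently.
df = pd.read_csv("hurricanes.csv")   # columns: year, name_gender, alldeaths

print(df.groupby(df.year >= 1979)["alldeaths"].mean())   # pre- vs post-1979 averages
post = df[df.year >= 1979]
print(post.groupby("name_gender")["alldeaths"].mean())   # male vs female names, post-1979
```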

    Maybe I’m missing something. How did this get past a referee?

    Update: Ed Yong raises similar points here. The authors say that cutting the sample at 1979 made the numbers too small, so they instead used an index of how feminine or masculine the names were. I find that dubious when a plain and simple average will give you an answer. Moreover, they offer this qualifier in the comments:

    What’s more, looking only at severe hurricanes that hit in 1979 and afterwards (those above $1.65B median damage), 16 male-named hurricane each caused 23 deaths on average whereas 14 female-named hurricanes each caused 29 deaths on average. This is looking at male/female as a simple binary category in the years since the names started alternating. So even in that shorter time window since 1979, severe female-named storms killed more people than did severe male-named storms.

    You be the judge. I average 54 post-1978 storms totaling 1200 deaths and get even numbers. They narrow it to 30 storms totaling 800 deaths and claim a bias based on 84 excess deaths. That really strikes me as stretching to make a point.

    Update: My friend Peter Yoachim did a K-S test of the data and got a p-value of 0.97, meaning the death tolls of male- and female-named hurricanes are entirely consistent with being drawn from the same distribution. This is a standard test of the null hypothesis and it wasn't done in the paper at all. Ridiculous.
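
    For reference, the two-sample K-S test is a one-liner with scipy; a sketch along these lines, reusing the hypothetical file and column names from the earlier sketch:

```python
import pandas as pd
from scipy.stats import ks_2samp

# File and column names remain assumptions, as in the earlier sketch.
df = pd.read_csv("hurricanes.csv")
post = df[df.year >= 1979]

deaths_male = post.loc[post.name_gender == "male", "alldeaths"]
deaths_female = post.loc[post.name_gender == "female", "alldeaths"]

stat, p_value = ks_2samp(deaths_male, deaths_female)
print(stat, p_value)   # a p-value near 1 means the two samples are entirely
                       # consistent with sharing the same distribution
```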

    Absolutely Nothing Happened in Sector 83 by 9 by 12 Today

    Wednesday, May 28th, 2014

    Last night, the science social media sphere exploded with the news of a potential … something … in our nearest cosmic neighbor, M31. The Swift mission, which I am privileged to work for, reported the discovery of a potential bright X-ray transient in M31, a sign of a high-energy event. For a while, we had very little to go on — Goddard had an unfortunately timed power outage. Some thought (and some blogs actually reported) that we’d seen a truly extraordinary event — perhaps even a nearby gamma-ray burst. But it turned out to be something more mundane. My friend and colleague Phil Evans has a great explanation:

    It started with the Burst Alert Telescope, or BAT, on board Swift. This is designed to look for GRBs, but will ‘trigger’ on any burst of high-energy radiation that comes from an area of the sky not known to emit such rays. But working out if you’ve had such a burst is not straightforward, because of noise in the detector, background radiation etc. So Swift normally only triggers if it’s really sure the burst of radiation is real; for the statisticians among you, we have a 6.5-σ threshold. Even then, we occasionally get false alarms. But we also have a program to try to spot faint GRBs in nearby galaxies. For this we accept lower significance triggers from BAT if they are near a known, nearby galaxy. But these lower significance triggers are much more likely to be spurious. Normally, we can tell that they are spurious because GRBs (almost always) have a glow of X-rays detectable for some time after the initial burst, an ‘afterglow’. The spurious triggers don’t have this, of course.

    In this case, it was a bit more complicated. There was an X-ray source consistent with the BAT position. The image to the right shows the early X-ray data. The yellow circle shows the BAT error box – that is, the BAT told us it thought it had seen something in that circle. The orange box shows what the XRT could see at the time, and the grey dots are detected X-rays. The little red circle marks where the X-ray source is.

    Just because the X-ray object was already known about, and was not something likely to go GRB, doesn’t mean it’s boring. If the X-ray object was much brighter than normal, then it is almost certainly what triggered the BAT and is scientifically interesting. Any energetic outburst near to Earth is well worth studying. Normally when the Swift X-ray telescope observes a new source, we get various limited data products sent straight to Earth, and normally some software (written by me!) analyses those data. In this case, there was a problem analysing those data products, specifically the product from which we normally estimate the brightness. So the scientists who were online at the time were forced to use rougher data, and from those it looked like the X-ray object was much brighter than normal. And so, of course, that was announced.

    The event occurred at about 6:15 EDT last night. I was feeding kids and putting them to bed but got to work on it after a couple of hours. At about 9:30, my wife asked what I was up to and I told her about a potential event in M31, but was cautious. I said something like: “This might be nothing; but if it is real, it would be huge.” I wish I could say I had some prescience about what the later analysis would show, but this was more my natural pessimism. That skeptical part of my mind kept going on about how unlikely a truly amazing event was (see here).

    My role would turn out to be a small one. It turned out that Swift had observed the region before. And while Goddard and its HEASARC data archive were down, friend and fellow UVOT team member Caryl Gronwall reminded me that the MAST archive was not. We had not observed the suspect region of M31 in the same filters that Swift uses for its initial observations. But we knew there was a globular cluster near the position of the event and, by coincidence, I had just finished a proposal on M31’s globular clusters. I could see that the archival measures and the new measure were consistent with a typical globular cluster. Then we got a report from the GTC. Their spectrum only showed the globular cluster.

    This didn’t disprove the idea of a transient, of course. Many X-ray transients don’t show a signature in the optical and it might not have been the globular cluster anyway. But it did rule out some of the more exotic explanations. Then the other shoe dropped this morning when the XRT team raced to their computers, probably still in their bathrobes. Their more detailed analysis showed that the bright X-ray source was a known source and had not brightened. So … no gamma-ray burst. No explosive event.

    Phil again:

    I imagine that, from the outside, this looks rather chaotic and disorganised. And the fact that this got publicity across the web and Twitter certainly adds to that! But in fact this highlights the challenges facing professional astronomers. Transient events are, by their nature, well, transient. Some are long lived, but others not. Indeed, this is why Swift exists, to enable us to respond very quickly to the detection of a GRB and gather X-ray, UV and optical data within minutes of the trigger. And Swift is programmed to send what it can of that data straight to the ground (limited bandwidth stops us from sending everything), and to alert the people on duty immediately. The whole reason for this is to allow us to quickly make some statements about the object in question so people can decide whether to observe it with other facilities. This ability has led to many fascinating discoveries, such as the fact that short GRBs are caused by two neutron stars merging, the detection of a supernova shock breaking out of a star and the most distant star ever seen by humans, to name just 3. But it’s tough. We have limited data, limited time and need to say something quick, while the object is still bright. People with access to large telescopes need to make a rapid decision, do they sink some of their limited observing time into this object? This is the challenge that we, as time-domain astronomers, face on a daily basis. Most of this is normally hidden from the world at large because of course we only publish and announce the final results from the cases where the correct decisions were made. In this case, thanks to the power of social media, one of those cases where what proved to be the wrong decision has been brought into the public eye. You’ve been given a brief insight into the decisions and challenges we have to face daily. So while it’s a bit embarrassing to have to show you one of the times where we got it wrong, it’s also good to show you the reality of science. For every exciting news-worthy discovery, there’s a lot of hard graft, effort, false alarms, mistakes, excitement and disappointment. It’s what we live off. It’s science.


    People sometimes ask me why I get so passionate about issues like global warming or vaccination or evolution. While the political aspects of these issues are debatable, I get aggravated when people slag the science, especially when it is laced with dark implications of “follow the money” or claims that scientists are putting out “theories” without supporting evidence. Skeptics claim, for example, that scientists only support global warming theory or vaccinations because they would not get grant money for claiming otherwise.

    It is true: scientists like to get paid, just like everyone else. We don’t do this for free (mostly). But money won’t drag you out of bed at 4 in the morning to discover a monster gamma-ray burst. Money doesn’t keep you up until the wee hours pounding on a keyboard to figure out what you’ve just seen. Money didn’t bounce my Leicester colleagues out of bed at the crack of dawn to figure out what we were seeing. Money doesn’t sustain you through the years of grad school and the many years of soft-money itinerancy. Hell, most scientists could make more money if they left science. One of the best comments I ever read on this was on an old Slashdot forum: “Doing science for the money is like having sex for the exercise.”

    What really motivates scientists is the answer. What really motivates them is finding out something that wasn’t known before. I have been fortunate in my life to have experienced that joy of discovery a few times. There have been moments when I realized that I was literally the only person on Earth to know something, even if that something was comparatively trivial, like the properties of a new dwarf galaxy. That’s the thrill. And despite last night’s excitement being in vain, it was still thrilling to hope that we’d seen something amazing. And hell, finding out it was not an amazing event was still thrilling. It’s amazing to watch the corrective mechanisms of the scientific method in action, especially over the time span of a few hours.

    Last night, science was asked a question: did something strange happen in M31? By this morning, we had the answer: no. That’s not a bad day for science. That’s a great one.

    One final thought: one day, something amazing is going to happen in the Local Universe. Some star will explode, some neutron stars will collide or something we haven’t even imagined will happen. It is inevitable. The question is not whether it will happen. The question is: will we still be looking?

    Low Class Cleavage

    Monday, March 31st, 2014

    It’s the end of the month, so time to put up a few posts I’ve been tinkering with.

    No, just give the Great Unwashed a pair of oversized breasts and a happy ending, and they’ll oink for more every time.

    – Charles Montgomery Burns

    A few months ago, this study was brought to my attention:

    It has been suggested human female breast size may act as signal of fat reserves, which in turn indicates access to resources. Based on this perspective, two studies were conducted to test the hypothesis that men experiencing relative resource insecurity should perceive larger breast size as more physically attractive than men experiencing resource security. In Study 1, 266 men from three sites in Malaysia varying in relative socioeconomic status (high to low) rated a series of animated figures varying in breast size for physical attractiveness. Results showed that men from the low socioeconomic context rated larger breasts as more attractive than did men from the medium socioeconomic context, who in turn perceived larger breasts as more attractive than men from a high socioeconomic context. Study 2 compared the breast size judgements of 66 hungry versus 58 satiated men within the same environmental context in Britain. Results showed that hungry men rated larger breasts as significantly more attractive than satiated men. Taken together, these studies provide evidence that resource security impacts upon men’s attractiveness ratings based on women’s breast size.

    Sigh. It seems I am condemned to writing endlessly about mammary glands. I don’t have an objection to the subject but I do wish someone else would approach these “studies” with any degree of skepticism.

    This is yet another iteration of the breast size study I lambasted last year and it runs into the same problems: the use of CG figures instead of real women, the underlying inbuilt assumptions and, most importantly, ignoring the role that social convention plays in this kind of analysis. To put it simply: men may feel a social pressure to choose less busty CG images, a point I’ll get to in a moment. I don’t see that this study sheds any new light on the subject. Men of low socioeconomic status might still feel less pressure to conform to social expectations, something this study does not seem to address at all. Like most studies of human sexuality, it makes the fundamental mistake of assuming that what people say is necessarily reflective of what they think or do and not what is expected of them.

    The authors think that men’s preference for bustier women when they are hungry supports their thesis that the breast fetish is connected to feeding young (even though there is zero evidence that large breasts nurse better than small ones). I actually think their result has no bearing on their assumption. Why would hungrier men want fatter women? Because they want to eat them? To nurse off them? I can think of good reasons why hungry men would feel less bound by social convention, invest a little less thought in a silly social experiment and just press the button for the biggest boobs. I think that hungry men are more likely to give you an honest opinion and not care that preferring the bustier woman is frowned upon. Hunger is known to significantly alter people’s behavior in many subtle ways but these authors narrow it to one dimension, a dimension that may not even exist.

    And why not run a parallel test on women? If bigger breasts somehow provoke a primal hunger response, might that preference be built into anyone who nursed in the first few years of life?

    No, this is another garbage study that amounts to saying that “low-class” men like big boobs while “high-class” men are more immune to the lure of the decolletage and so … something. I don’t find that to be useful or insightful or meaningful. I find that it simply reinforces an existing preconception.

    There is a cultural bias in some of the upper echelons of society against large breasts and men’s attraction to them. That may sound crazy in a society that made Pamela Anderson a star. But large breasts and the breast fetish are often seen, by elites, as a “low class” thing. Busty women in high-end professions sometimes have problems being taken seriously. Many busty women, including my wife, wear minimizer bras so they’ll be taken more seriously (or look less matronly). I’ve noticed that in the teen shows my daughter sometimes watches, girls with curves are either ditzy or femme fatales. In adult comedies, busty women are frequently portrayed as ditzy airheads. Men who are attracted to buxom women are often depicted as low-class, unintelligent and uneducated. Think Al Bundy.

    This is, of course, a subset of a mentality that sees physical attraction itself as a low-class animalistic thing. Being attracted to a woman because she’s a Ph.D. is obviously more cultured, sophisticated and enlightened than being attracted to a woman because she’s a DD. I don’t think attraction is monopolar like that. As I noted before, a man’s attraction to a woman is affected by many factors — her personality, her intelligence, her looks. Breast size is just one slider on the circuit board that is men’s sexuality and probably not even the most important. But it’s absurd to pretend the slider doesn’t exist or that it is somehow less legitimate than the others. We are animals, whatever our pretensions.

    Last year, a story exploded on the blogosphere about a naive physics professor who was duped into becoming a drug mule by the promise that he would marry Denise Milani, an extremely buxom non-nude model. What stunned me in reading about the story was the complete lack of any sympathy for him. Granted, he is an arrogant man who isn’t particularly sympathetic. But a huge amount of abuse was heaped on him, much of it focusing on his fascination with a model and particularly a model with extremely large and likely artificial breasts. The tone was that there must be something idiotic and crude about the man to fall for such a ruse and for such a woman.

    The reaction to the story not only illuminated a cultural bias but also showed how that bias can become particularly potent when the breasts in question are implants. The expression “big fake boobs” is a pejorative that men and women love to hurl at women they consider low class or inferior. Take Jenny McCarthy. There are very good reasons to criticize McCarthy for her advocacy of anti-vaccine hysteria (although I think the McCarthy criticism is a bit overblown since most people are getting this information elsewhere and McCarthy wasn’t the one who committed research fraud). But no discussion of McCarthy is complete until someone has insulted her for having implants and the existence of those implants has been touted as a sign of her obvious stupidity and the stupidity of those who follow her.

    McCarthy actually doesn’t cross me as that stupid; she crosses me as badly misinformed. And it’s not like there aren’t hordes of very smart people who have bought into the anti-vaccine nonsense even sans McCarthy. But putting that aside, I don’t know what McCarthy’s breasts have to do with anything. Do people honestly think it would make a difference if she were an A-cup?

    To return to this study and the one I lambasted last year: what I see is not only bad science but a subtle attempt by science to reinforce the stereotype that large breasts and an attraction to them are animalistic, low-class and uneducated. Bullshit speculation claims that men’s attraction to breasts is some primitive instinct. And more bullshit research claims that wealthy educated men can resist this primitive instinct but poorer less-educated men wallow in their animalistic desires. And when these garbage studies come out, blogs are all too eager to hype them, saying, “See! We told you those guys who liked big boobs were ignorant brutes!”

    I think this is just garbage. The most “enlightened” academic is just as likely to ogle a busty woman when she walks by. He might be better trained at not being a jerk about it because he walks in social circles where wolf-whistles and come-ons are unacceptable. And he lives in a society where, if a bunch of social scientists are leering over you, you pretend to like the less busty woman. But all men live secret erotic lives in their heads. It’s extremely difficult to tease that information out and certainly not possible with an experiment as crude and obvious as this.

    Once again, we see the biggest failing in sex research: asking people what they want instead of getting some objective measure. There are better approaches, some of which I mentioned in my previous article. If I were to approach this topic, I would look at the Google search database used in A Billion Wicked Thoughts to see if areas of high education (e.g., college towns) were less likely to look at porn in general and porn involving busty women in particular. That might give you some useful information. But there’s a danger that it wouldn’t reinforce the bias we’ve built up against big breasts and the men who love them.

    Mathematical Malpractice Watch: A Trilogy of Error

    Wednesday, February 12th, 2014

    Three rather ugly instances of mathematical malpractice have caught my attention in the last month. Let’s check them out.

    The Death of Facebook or How to Have Fun With Out of Sample Data

    Last month, Princeton researchers came out with the rather spectacular claim that the social network Facebook would be basically dead within a few years. The quick version is that they fit an epidemiological model to the rise and fall of MySpace. They then used that same model, varying the parameters, to fit Google trends on searches for Facebook. They concluded that Facebook would lose 80% of its customers by 2017.

    This was obviously nonsense, as detailed here and here. It suffered from many flaws, notably the assumption that the rise and fall of MySpace is necessarily a model for all social networks and the dubious choice of Google searches, instead of publicly available traffic data, as their metric.

    But there was a deeper flaw. The authors fit a model of a sharp rise and fall. They then proclaim that this model works because Facebook’s Google data follow the first half of that trend and a little bit of the second. But while the decline in Facebook Google searches is consistent with their model, it is also consistent with hundreds of others. It would be perfectly consistent with a model that predicts a sharp rise and then a leveling off as the social network saturates. Their data are consistent with just about any model and cannot discriminate among them.

    The critical part of the data — the predicted sharp fall in Facebook traffic — is out of sample (meaning it hasn’t happened yet). But based on a tiny sliver of data, they have drawn a gigantic conclusion. It’s Mark Twain and the length of the Mississippi River all over again.
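    Here is a toy illustration of that point, with made-up numbers rather than the actual Google Trends data: fit both a rise-and-level-off model and a rise-and-collapse model to the same in-sample rise, and they agree where the data exist and disagree wildly where they don’t.

```python
# Toy illustration of the out-of-sample problem. Two models are fit to the
# same made-up "rising traffic" data; both fit the in-sample points, but
# their out-of-sample predictions are completely different.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)
t = np.arange(10.0)                                   # in-sample epochs
traffic = 100 / (1 + np.exp(-(t - 5))) + rng.normal(0, 2, t.size)

def saturating(t, a, t0, w):          # rises, then levels off
    return a / (1 + np.exp(-(t - t0) / w))

def rise_and_fall(t, a, t0, w):       # rises, then collapses
    return a * np.exp(-((t - t0) / w) ** 2)

p_sat, _ = curve_fit(saturating, t, traffic, p0=[100, 5, 1])
p_fall, _ = curve_fit(rise_and_fall, t, traffic, p0=[100, 8, 4])

future = np.arange(10.0, 20.0)        # out-of-sample epochs
print("level-off model:", saturating(future, *p_sat).round(1))
print("collapse model: ", rise_and_fall(future, *p_fall).round(1))
```

    Both curves go through the in-sample points about equally well; only the out-of-sample half can tell them apart, and that is exactly the half that does not exist yet.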

    We see this a lot in science, unfortunately. Global warming models often predict very sharp rises in temperature — out of sample. Models of the stock market predict crashes or runs — out of sample. Sports twerps put together models that predict Derek Jeter will get 4000 hits — out of sample.

    Anyone who does data fitting for a living knows this danger. The other day, I fit a light curve to a variable star. Because of an odd intersection of Fourier parameters, the model predicted a huge rise in brightness in the middle of its decay phase because there were no data to constrain it there. So it fit a small uptick in the decay phase as though it were the small beginning of a massive re-brightening.

    The more complicated the model, the more danger there is of drawing massive conclusions from tiny amounts of data or small trends. If the model is anything other than a straight line, be very very wary at out-of-sample predictions, especially when they are predicting order-of-magnitude changes.
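    A toy version of that light-curve failure mode (a noisy sine wave standing in for my actual star): fit a deliberately over-flexible Fourier series by least squares to phased data with a gap, then evaluate the model inside the gap, where nothing constrains it.

```python
# Toy version of the light-curve problem: fit an over-flexible Fourier
# series to phased data with a gap and evaluate it inside the gap, where
# nothing constrains it. The "star" here is a noisy sine wave, not real data.
import numpy as np

rng = np.random.default_rng(1)
phase = np.sort(rng.uniform(0, 1, 120))
phase = phase[(phase < 0.55) | (phase > 0.75)]        # leave a gap in coverage
mag = np.sin(2 * np.pi * phase) + rng.normal(0, 0.05, phase.size)

def fourier_design(p, order=8):        # deliberately too many harmonics
    cols = [np.ones_like(p)]
    for k in range(1, order + 1):
        cols += [np.cos(2 * np.pi * k * p), np.sin(2 * np.pi * k * p)]
    return np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(fourier_design(phase), mag, rcond=None)

gap = np.linspace(0.56, 0.74, 7)
print("model in the gap:", (fourier_design(gap) @ coeffs).round(2))
print("true curve there:", np.sin(2 * np.pi * gap).round(2))
```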

    A Rape Epidemic or How to Reframe Data:

    The CDC recently released a study that claimed that 1.3 million women were raped and 12.6 million more were subject to sexual violence in 2010. This is six or more times the FBI’s extremely rigorous NCVS estimate. Christina Hoff Sommers has a breakdown of why the number is so massive:

    It found them by defining sexual violence in impossibly elastic ways and then letting the surveyors, rather than subjects, determine what counted as an assault. Consider: In a telephone survey with a 30 percent response rate, interviewers did not ask participants whether they had been raped. Instead of such straightforward questions, the CDC researchers described a series of sexual encounters and then they determined whether the responses indicated sexual violation. A sample of 9,086 women was asked, for example, “When you were drunk, high, drugged, or passed out and unable to consent, how many people ever had vaginal sex with you?” A majority of the 1.3 million women (61.5 percent) the CDC projected as rape victims in 2010 experienced this sort of “alcohol or drug facilitated penetration.”

    What does that mean? If a woman was unconscious or severely incapacitated, everyone would call it rape. But what about sex while inebriated? Few people would say that intoxicated sex alone constitutes rape — indeed, a nontrivial percentage of all customary sexual intercourse, including marital intercourse, probably falls under that definition (and is therefore criminal according to the CDC).

    Other survey questions were equally ambiguous. Participants were asked if they had ever had sex because someone pressured them by “telling you lies, making promises about the future they knew were untrue?” All affirmative answers were counted as “sexual violence.” Anyone who consented to sex because a suitor wore her or him down by “repeatedly asking” or “showing they were unhappy” was similarly classified as a victim of violence. The CDC effectively set a stage where each step of physical intimacy required a notarized testament of sober consent.

    In short, they did what is called “reframing”. They took someone’s experiences, threw away that person’s definition of them and substituted their own definition.

    This isn’t the first time this has happened with rape stats nor the first time Sommers has uncovered this sort of reframing. Here is an account of how researchers decided that women who didn’t think they had been raped were, in fact, raped, so they could claim a victimization rate of one in four.

    Scientists have to classify things all the time based on a variety of criteria. The universe is a messy continuum; to understand it, we have to sort things into boxes. I classify stars for a living based on certain characteristics. The problem with doing that here is that women are not inanimate objects. Nor are they lab animals. They can have opinions of their own about what happened to them.

    I understand that some victims may reframe their experiences to try to lessen the trauma of what happened to them. I understand that a woman can be raped but convince herself it was a misunderstanding or that it was somehow her fault. But to a priori reframe any woman’s experience is to treat them like lab rats, not human beings capable of making judgements of their own.

    But it also illustrates a mathematical malpractice problem: changing definitions. This is how 10,000 underage prostitutes in the United States becomes 200,000 girls “at risk”. This is how small changes in drug use stats become an “epidemic”. If you dig deep into the studies, you will find the truth. But the banner headline — the one the media talk about — is hopelessly and deliberately muddled.

    Sometimes you have to change definitions. The FBI changed their NCVS methodology a few years ago on rape statistics and saw a significant increase in their estimates. But it’s one thing to hone; it’s another to completely redefine.

    (The CDC, as my friend Kevin Wilson pointed out, mostly does outstanding work. But they have a tendency to jump with both feet into moral panics. In this case, it’s the current debate about rape culture. Ten years ago, it was obesity. They put out a deeply flawed study that overestimated obesity deaths by a factor of 14. They quickly admitted their screwup but … guess which number has been quoted for the last decade on obesity policy?)

    You might ask why I’m on about this. Surely any number of rapes is too many. The reason I wanted to talk about this, apart from my hatred of bogus studies, is that data influences policy. If you claim that 1.3 million women are being raped every year, that’s going to result in a set of policy decisions that are likely to be very damaging and do very little to address the real problem.

    If you want a stat that means something, try this one: the incidence of sexual violence has fallen 85% over the last 30 years. That is from the FBI’s NCVS data so even if they are over- or under-estimating the amount of sexual violence, the differential is meaningful. That data tells you something useful: that whatever we are doing to fight rape culture, it is working. Greater awareness, pushing back against blaming the victim, changes to federal and state laws, changes to the emphasis of attorneys general’s offices and the rise of internet pornography have all been cited as contributors to this trend.

    That’s why it’s important to push back against bogus stats on rape. Because they conceal the most important stat: the one that is the most useful guide for future policy and points the way toward ending rape culture.

    The Pending Crash or How to Play with Scales:

    Yesterday morning, I saw a chart claiming that the recent stock market trends are an eerie parallel of the run-up to the 1929 crash. I was immediately suspicious because, even if the data were accurate, we see this sort of crap all the time. There are a million people who have made a million bucks on Wall Street claiming to pattern match trends in the stock market. They make huge predictions, just like the Facebook study above. And those predictions are always wrong. Because, again, the out of sample data contains the real leverage.

    This graph is even worse than that, though. As Quartz points out, the graph makers used two different y-axes. In one, the 1928-29 rise of the stock market was a near doubling. In the other, the 2013-14 rise was an increase of about 25%. When you scale them appropriately, the similarity vanishes. Or, alternatively, the pending “crash” would be just an erasure of that 25% gain.
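    The fix is simple: put both runs on the same footing, say as percentage change from the start of each window, and the “eerie parallel” evaporates. A sketch with stand-in numbers (chosen to double and to gain roughly 25%; these are not the actual Dow figures):

```python
# Sketch of the scaling fix: plot both runs as percent change from their
# starting values instead of on two different y-axes. The two series here
# are stand-ins chosen to double and to gain ~25%, not the actual Dow data.
import numpy as np
import matplotlib.pyplot as plt

days = np.arange(300)
run_1928 = 240 * 1.00230 ** days      # roughly doubles over the window
run_2013 = 15000 * 1.00075 ** days    # gains roughly 25%

fig, ax = plt.subplots()
ax.plot(days, 100 * (run_1928 / run_1928[0] - 1), label="1928-29 (stand-in)")
ax.plot(days, 100 * (run_2013 / run_2013[0] - 1), label="2013-14 (stand-in)")
ax.set_xlabel("Trading day")
ax.set_ylabel("Percent change from start of window")
ax.legend()
plt.show()
```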

    I’ve seen this quite a bit and it’s beginning to annoy me. Zoomed-in graphs of narrow ranges of the y-axis are used to draw dramatic conclusions about … whatever you want. This week, it’s the stock market. Next week, it’s global warming skeptics looking at little spikes on a 10-year temperature plot instead of big trends on a 150-year one. The week after, it will be inequality data. Here is one from Piketty and Saez, which tracks wealth gains for the rich against everyone else. Their conclusion might be accurate but the plot is useless because it is scaled to intervals of $5 million. So even if the bottom 90% were doing better, even if their income was doubling, it wouldn’t show up on the graph.

    Halloween Linkorama

    Sunday, November 3rd, 2013

    Three stories today:

  • Bill James once said that, when politics is functioning well, elections should have razor thin margins. The reason is that the parties will align themselves to best exploit divisions in the electorate. If one party is only getting 40% of the vote, they will quickly re-align to get higher vote totals. The other party will respond and they will reach a natural equilibrium near 50%. I think that is the missing key to understanding why so many governments are divided. The Information Age has not only given political parties more information to align themselves with the electorate, it has made the electorate more responsive. The South was utterly loyal to the Democrats for 120 years. Nowadays, that kind of political loyalty is fading.
  • I love this piece about how an accepted piece of sociology turned out to be complete gobbledygook.
  • Speaking of gobbledygook, here is a review of the article about men ogling women. It sounds like the authors misquoted their own study.

    Rush is Wrong on Religion

    Friday, September 20th, 2013

    I see that Rush Limbaugh has dived into the latest climate nontroversy. That makes this a good time to post this, which I wrote several months ago. Sorry to make this Global Warming Week. I hate that debate. But with the way the Daily Fail’s nonsense is propagating, I have no choice.