A new short story:
Part I: Dreams in the Long Dark
Part II: Co-Orbital
Part II will be up in a few weeks.
I just finished John Carreyrou’s new book “Bad Blood”, about the rise and fall of Theranos. Theranos, for those of you who haven’t kept up, was a Silicon Valley startup founded by Stanford dropout Elizabeth Holmes. It exploded onto the scene with claims that it could do hundreds of blood tests from a single drop of blood, eliminating the need for painful blood draws. For a time, it was the next Big Thing, signing contracts with Walgreens and Safeway, reaching a $9 billion valuation and claiming it would upend the medical industry.
And it was all a lie. Holmes had a vision — a simple finger stick allowing diagnosis of all kinds of things. But what she lacked was the expertise, the patience or the ethics to make it happen. As Carreyrou explains, there are reasons we do tests based on blood draws — the need for volume, the difference between venous and capillary blood. To do what Theranos was claiming, it would not have been enough to use existing technology in new ways. They would have had to develop entirely new methods. But that would have taken decades of hard work and still may have failed. Holmes (and her business and personal partner Ramesh “Sunny” Balwani) didn’t have the patience for that. They tried to simply package existing tech into a smaller space. And when that didn’t work, they resorted to fraud, resulting in completely unreliable test results. They then used legal threats and bullying to try to silence anyone who questioned what they were doing. But it all ended when Carreyrou and the Wall Street Journal laid bare their deceit, helped by courageous whistleblowers. And then the federal agencies came down on them. Theranos is now, effectively, a dead shell and Holmes and Balwani are facing criminal charges.
The book is appalling and utterly gripping. I finished the last 100 pages in one sitting. The book skips a bit in time and Carreyrou’s writing is a bit rough in some places, probably a result of getting the book out so quickly. But that’s an extremely minor quibble. It’s not only a good story, it’s an important one. There are several things I gleaned from the book.
Anyway, I highly recommend the book. This is a story worth reading. And a story worth learning from.
I just got back from Clarksville, Georgia, where I watched the total solar eclipse. I had flown down to Atlanta both to visit family and to see the eclipse. My brother and I had a vague plan that we would rendezvous at his house north of Atlanta and then see how far we could drive. Clarksville, within the path of totality, was where we stopped. There were a few hundred people gathered in the center of town to watch.
Despite being an astronomer, I had never seen a total solar eclipse before. I lived in Atlanta during the 1979 eclipse, but our teachers wouldn’t let us go outside. We watched it on TV. I was an undergraduate at Carleton during the 1994 annular eclipse. I didn’t see much of it, however, because I was working one of the telescopes, showing people Venus, which was visible during the eclipse. So I went into this as cold as my 10-year-old daughter.
(Funny story about the ’94 eclipse. One of the visitors during the eclipse was a girl I had a gigantic crush-from-a-distance on. She smiled at me and we exchanged a few words. But nothing more came of it since I was still shy as all hell. Talking to my crush during an eclipse under the light of Venus was a lovely moment. And nothing coming of it made it a perfect encapsulation of my love life at the time.)
It was surprising how bright the sun remained despite the increasing lunar coverage. But as totality approached, an odd twilight settled over the town. I began to notice the shadows and the odd crescent shapes in the dappled light. We watched through our glasses as the sun got smaller and smaller. It got slowly darker and darker. And then, to a cheer from the watching crowd, it was gone.
Words cannot describe the next 100 seconds. I felt like I was on another world or maybe at the pole as summer ends or begins. It got so dark that the street lights came on. We could now see a circle of light around the Sun and the glowing solar corona. The entire 360-degree horizon turned a sunset red. We snapped a few pictures; my brother took some video. And, after what seemed a very short 100 seconds, the Sun peeked out again from behind the Moon. The glasses went back on, the light rose and, not long after, we began the long drive home.
It was a magical event; one of the few things that lived up to the hype. I was so impressed by it that I’ve already decided to travel to Texas for the 2024 eclipse. I’ve been in astronomy for 25 years and I’ve seen a lot of cool stuff. But this was the most amazing thing I’d ever seen.
I have a short story coming up soon. The spark that lit the story and this post was this tweet from Neil deGrasse Tyson:
Earth needs a virtual country: #Rationalia, with a one-line Constitution: All policy shall be based on the weight of evidence
— Neil deGrasse Tyson (@neiltyson) June 29, 2016
This tweet set off an intense internet debate on the merits of such a country. Many people — mostly of a Lefty persuasion — embraced the idea. Many people — mostly of a Righty persuasion — wrote a number of good and readable critiques of this idea, going over some ideas I’ll discuss later.
Tyson later expanded on this idea, basically arguing, even if Tyson doesn’t realize it, for a negative view of governing: that policy should be implemented only after the massive weight of evidence shows that it would advance the cause being supported. But even with this caveat, there are three principal problems with Rationalia.
So a piece of news that floated out today was that the Doomsday Clock was advanced to 2.5 minutes until midnight:
We are creeping closer to the apocalypse, according to a panel of scientists and scholars.
The Chicago-based Bulletin of Atomic Scientists has moved the “Doomsday Clock,” a symbolic countdown to the end of the world, to two and a half minutes to midnight.
It marks the first time since 1953 — after hydrogen bomb tests in the US and then Soviet Union — that humanity has been this close to global disaster.
The group cited US President Donald Trump’s “disturbing comments” about the use of nuclear weapons and views on climate change among other factors, including cyberthreats and the rise in nationalism, that have contributed to the darkened forecast.
“The board’s decision to move the clock less than a full minute reflects a simple reality: As this statement is issued, Donald Trump has been the US president only a matter of days,” the organization said in a statement.
I’ve been trying to sugarcoat this, but there simply is no way to do so. So I’ll just be blunt: any clock that thinks the world is closer to doomsday now than it was in the past is a clock that is badly in need of repair.
According to the BAS, we are in greater danger than we have been since 1953. Let’s look over that 64-year span and take a year almost at random: 1962. In October of 1962, we had the Cuban Missile Crisis. The world teetered on the brink of nuclear war. At that time, one side was run by a drunken mass murderer and the other was run by a novice President taking enough medication to stock a drug store. And yet the Doomsday Clock was left at seven minutes to midnight at a time when we were almost literally seven minutes away from Armageddon.
Oh, it gets better. In 1962, the United States was on the brink of starting its long bloody involvement in Vietnam. There were active civil wars going on in Laos, Sudan, the Congo, Yemen, Guatemala, Burma, Malaysia and Nicaragua as well as Communist insurgencies in other countries. By contrast, today is literally the most peaceful era in human history with fewer national and domestic armed conflicts than we’ve ever had as well as less violent crime. Blood and tears may dominate the news. But for most of human history, they dominated everyone’s life. It’s not just 1962 that was more dangerous. It’s almost every year up until the present.
The BAS says that their clock has advanced, at least in part, because of concerns about the environment (which muddies the original purpose of the clock). But is the environment worse now than it was when half the planet was starving, cars were belching lead into the air and our rivers were so polluted they could literally catch fire? By every standard that can be measured — with the exception of greenhouse gases — our planet is better off now than it was 50 years ago. Or 40 years ago. Or 30 years ago. Smog is down, sulfur dioxide is down, species are rebounding to the point of being taken off the endangered list, the ozone layer is healing, etc., etc. And even global warming isn’t hopeless, Trump or no Trump. Greenhouse gas emissions in the United States have fallen in recent years. Greenhouse intensity — that is, emissions per dollar of economic output — is plunging.
I don’t mean to downplay the challenges we face. We still have enough nuclear weapons to ignite a cataclysmic holocaust. And global warming is a very real challenge. Nor do I mean to downplay the concerns about a Trump Administration, many of which I share. But to pretend that the world is closer to annihilation than it was during the last century is an idea that is simply not supported by the facts at hand. All it does is make the Doomsday Clock even more irrelevant.
(More from Tom Nichols.)
Election season is upon us which means that poll-watching season is upon us. Back in 2012, I wrote a long post about the analysis of the polls. Specifically, I focused on the 2000 election in which Bush led the polls going in, Real Clear Politics projected a Bush landslide and … it ended in a massive recount and a popular-electoral split. I identified the factors that I thought contributed to this:
In the end, I think it was all of the above: they overestimated Nader’s support, the polls shifted late and RCP had a bit of a bias. But I also think RCP was simply ahead of its time. In 2000, we simply did not have the relentless national and state level polls we have now. And we did not have the kind of information that can tease out the subtle biases and nuances that Nate Silver can.
Of course, I wrote that on the eve of the 2012 election, where Obama significantly outperformed his polls, easily winning an election that, up until the last minute, looked close.
The election is now three days away which means that everyone is obsessed with polls. But this year, a split has developed. Sam Wang is projecting a 98% chance of a Clinton win with Clinton pulling in about 312 electoral votes. HuffPo projects a 99% chance of Clinton winning the popular vote. Nate Silver, however, is his usual conservative self, currently giving Clinton only a 64% chance of winning. So who should we side with?
To me, it’s obvious. I would definitely take Silver on this.
Put aside everything you know about the candidates, the election and the polls. If someone offered you a 50-to-1 or a 100-to-1 bet on any major party candidate winning the election, would you take it? I certainly would. I would have bet $10 on Mondale in 1984 for a potential $1,000 payoff. And he lost by 20 points.
It seems a huge stretch to give 98 or 99% odds to Clinton, considering:
Basically, I think Wang and HuffPo are not accounting enough for the possibility that the polls are significantly off. In the last 40 years, we’ve had one Presidential election (1980) where the polls were off by a whopping seven points. That’s enough for Trump to win easily (or for Clinton to win in a landslide).
Moreover, Wang’s and HuffPo’s results seem to contradict each other. If Clinton really did have a 98% chance of winning, wouldn’t you think she’d get more than 312 electoral votes? That’s the kind of certainty I would expect with a pending landslide of 400 or 500 electoral votes. A 42-electoral-vote margin of error is *really* small. All you would need is for the polling to be wrong in two big states for Trump to eke out a win (note: there are more than two big battleground states).
This brings me to another point. Pollsters and Democrats have been talking about Clinton’s “firewall” of supposedly safe states that guarantee a win in the electoral college. But that firewall is a fantasy. When Clinton dipped in the polls in September, suddenly numerous blue states like Pennsylvania and Michigan were in play. And, in fact, Silver projects a bigger chance that Trump wins in an electoral-popular split than Clinton because many of his states are safer. The talk about a “firewall” is the result of people becoming drunk on state-level polling. We have 50 states in this country. Statistically, at least one should buck a 98% polling certainty. There are only twenty states that Real Clear Politics rates as “leans” or “tossup”. Statistically, at least a couple of those should buck the polling.
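The “at least one state should buck” claim is simple arithmetic to check. Here’s a quick sketch of my own (the 98% per-state confidence is the figure from the text; the independence assumption is mine, and it is actually generous to the forecasters, since correlated errors make things worse, not better):

```python
# If each of n independent state calls is correct with probability p,
# the chance that at least one call is wrong is 1 - p**n.
p = 0.98  # per-state confidence, from the 98% figure above

for n in (20, 50):  # 20 "leans"/"tossup" states; all 50 states
    print(n, round(1 - p**n, 2))
```

With 20 competitive states, there is about a one-in-three chance that at least one bucks the call; across all 50 states, nearly two-in-three.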
Here’s another way of thinking about it. There have been 57 presidential elections in American history. If Clinton really were a 98% or 99% favorite, a Trump win would be the biggest upset in American electoral history. I find that claim to be absurd. Bigger than Dewey and Truman? Bigger than Polk’s election? Bigger than Kennedy’s? Bigger than Reagan turning a close race into a blowout?
I should point out that having long tails of probability also means there is a greater chance of a Clinton landslide. That’s possible, I guess. But, admitting to my priors here, I find a Trump upset more likely than a Clinton landslide. Clinton is deeply unpopular with large parts of the country. She’s not popular with young people. Here in State College, Clinton signs and stickers are few and far between. This was not the case in 2008 and 2012, both of which were won handily by Obama. I really don’t see a Clinton landslide materializing, although I’ll cop to it if I’m wrong about that.
Prediction is hard, especially about the future. I think a basic humility requires us to be open to the idea that we could be badly wrong. And 1-2% is way too small a value to assign to that. I think Clinton has the edge right now. But I would put her odds at more like 2-1 or 4-1. And I will not be shocked if Trump pulls this out.
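To put numbers on that humility, here’s a toy calculation of my own (not Silver’s model; the inputs are illustrative assumptions): treat Clinton’s national lead as roughly 3 points and the overall polling error as normally distributed, then see how the assumed size of that error changes her win probability.

```python
import math

def win_prob(lead, error_sd):
    """P(true margin > 0) if the true margin is Normal(lead, error_sd)."""
    return 0.5 * (1 + math.erf(lead / (error_sd * math.sqrt(2))))

# A 3-point lead looks near-certain if polls are almost never off...
print(round(win_prob(3.0, 1.0), 2))  # 1.0
# ...but an error on the scale of past misses turns it into a modest edge.
print(round(win_prob(3.0, 3.0), 2))  # 0.84
print(round(win_prob(3.0, 5.0), 2))  # 0.73
```

The difference between a 99% call and a 64% call is almost entirely a disagreement about the size of that error term, not about the polls themselves.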
It may be a cliché, but there’s only one poll that counts: the one taken on Tuesday.
Update: One of my Twitter correspondents makes a good case that the variations in the polls are less reflective of changes in candidate support than in supporter enthusiasm. In the end, the election will come down to turnout — i.e., how likely the “likely” part of “likely voters” is.
In 1980, doom-monger and perennially wrong person Paul Ehrlich made a wager with Julian Simon. Simon bet that the price of any commodities Ehrlich cared to nominate would go down over the next ten years due to increasing productivity. Ehrlich bet that they would go up because of the global pandemics and starvation that were supposed to happen but never did. At the end of the ten years, Ehrlich had to write Simon a check for $576.07.
Since the day the wager was concluded, Ehrlich and his fellow doom-sayers have been trying to weasel out of the result. You can see this on the Wikipedia page on the wager:
The prices of all five metals increased between 1950 and 1975, but Ehrlich believes three of the five went down during the 1980s because of the price of oil doubling in 1979, and because of a world-wide recession in the early 1980s.
Ehrlich might have a point if the entire motivation for the bet weren’t his prediction of a global catastrophe by the end of the 80’s.
Ehrlich would likely have won if the bet had been for a different ten-year period. Asset manager Jeremy Grantham wrote that if the Simon–Ehrlich wager had been for a longer period (from 1980 to 2011), then Simon would have lost on four of the five metals. He also noted that if the wager had been expanded to “all of the most important commodities,” instead of just five metals, over that longer period of 1980 to 2011, then Simon would have lost “by a lot.”
This doesn’t mean much. In 2011, we were in the middle of a precious metal crunch that sent prices skyrocketing. Since then, prices have tumbled. All five metals are down since 2011 and all but chromium are down around 50%. I can’t find good historical records, but copper, at least, is cheaper than it was in 1989, in inflation-adjusted terms. I’m also not clear on what Grantham means by “all of the most important commodities”. Oil is about the same as it was in 1980 in inflation-adjusted terms. Wheat is cheaper.
Moreover, even if the commodities prices were higher, it’s still because Ehrlich was wrong. Ehrlich was predicting prices would go higher because we would run out of things. But when prices did go higher, they did so because of increased global demand. They went higher because of the post-Cold War economic growth and development that has, so far, lifted two billion people out of poverty. This is growth that Ehrlich insisted (and still insists) was impossible.
(To be honest, I’m not sure why the Grantham quote is even on Wikipedia. Grantham is an investor, not an economist, and this quote was an off-hand remark in a quarterly newsletter about sound investment strategies, not a detailed analysis. The quote is so obscure and oblique to the larger issues that its inclusion strikes me as a desperate effort to defend Ehrlich’s honor.)
What really amuses me about the Ehrlich-Simon bet, however, is that Ehrlich proposed a second wager. The second wager was not like the first. It was far more specific, far narrower and far more tailored to things that were not really in dispute. Simon rejected the second wager for the following reasons:
Let me characterize their offer as follows. I predict, and this is for real, that the average performances in the next Olympics will be better than those in the last Olympics. On average, the performances have gotten better, Olympics to Olympics, for a variety of reasons. What Ehrlich and others says is that they don’t want to bet on athletic performances, they want to bet on the conditions of the track, or the weather, or the officials, or any other such indirect measure.
Exactly. The two-and-a-half decades since the second wager have been the best in human history. We have had fewer wars than at any period. Fewer murders. Less disease. Less starvation. More wealth. Ehrlich’s second wager didn’t care about any of that. It was about details like global temperature and arable land and wild fish catches, not the big picture of human prosperity. Below, I will go through the specifics of the second wager and you will see, over and over again, how narrow Ehrlich’s points were. In many cases, he would have “won” on issues where he really lost — correctly predicting trivial details of a trend that contradicted everything he was saying.
In fact, even if Simon had taken Ehrlich up on his useless, hideously biased bet, Ehrlich might have lost anyway, depending on the criteria they agreed on. Let’s go through the bet Ehrlich proposed, point by point:
So … with Ehrlich counting global warming three times … with Ehrlich tailoring the questions incredibly narrowly … with Ehrlich betting twice on global farmland … Ehrlich wins the bet. But it’s a hollow victory. Ehrlich wins on nine of the bets, but only two or three of the larger issues. Had Simon taken him up on it and tweaked a few questions — going to global SO2 emissions or global fishing catches or one measure for global warming — Ehrlich would have lost and lost badly.
That’s the point here. If you put aside the details, the second wager amounted to Ehrlich betting that the disaster that failed to happen in the 80’s would happen in the 90’s. He was wrong. Again. And if you “extended the bet” the way his defenders do for the initial bet, he was even wronger.
And yet … he’s still a hero to the environmental movement. It just goes to show you that there’s no path to fame easier than being wrong with authority.
One issue that I am fairly militant about is vaccination. Vaccines are arguably the greatest invention in human history. Vaccines made smallpox, a disease that slaughtered hundreds of millions, extinct. Polio, which used to maim and kill millions, is on the brink of extinction. A couple of weeks ago, rubella was eliminated in the Americas:
After 15 years of a widespread vaccination campaign with the MMR (measles-mumps-rubella) vaccine, the Pan American Health Organization and the World Health Organization announced yesterday that rubella no longer circulates in the Americas. The only way a person could catch it is if they are visiting another country or if it is imported into a North, Central or South American country.
Rubella, also known as German measles, was previously among a pregnant woman’s greatest fears. Although it’s generally a mild disease in children and young adults, the virus wreaks the most damage when a pregnant woman catches it because the virus can cross the placenta to the fetus, increasing the risk for congenital rubella syndrome.
Congenital rubella syndrome can cause miscarriage or stillbirth, but even the infants who survive are likely to have birth defects, heart problems, blindness, deafness, brain damage, bone and growth problems, intellectual disability or damage to the liver and spleen.
Rubella used to cause tens of thousands of miscarriages and birth defects every year. Now it too could be pushed to extinction.
Of course, many deadly diseases are now coming back thanks to people refusing to vaccinate their kids. There is an effort to blame this on “anti-government” sentiment. But while that plays a role, the bigger role is played by liberal parents who think vaccines cause autism (you’ll notice we’re getting outbreaks in California, not Alabama). As I’ve noted before, the original research that showed a link between vaccines and autism is now known to have been a fraud. Recently, we got even more proof:
On the heels of a measles outbreak in California fueled by vaccination fears that scientists call unfounded, another large study has shown no link between the measles-mumps-rubella vaccine and autism.
The study examined insurance claims for 96,000 U.S. children born between 2001 and 2007, and found that those who received MMR vaccine didn’t develop autism at a higher rate than unvaccinated children, according to results published Tuesday by the Journal of the American Medical Association, or JAMA. Even children who had older siblings with autism—a group considered at high risk for the disorder—didn’t have increased odds of developing autism after receiving the vaccine, compared with unvaccinated children with autistic older siblings.
96,000 kids — literally 8,000 times the size of Wakefield’s sample. No study has ever reproduced Wakefield’s results. That’s because no other study has been a complete fraud.
There’s something else, though. This issue became somewhat personal for me recently. My son Ben came down with a bad cough, a high fever and vomiting. He was eventually admitted to the hospital for a couple of days with pneumonia, mainly to get rehydrated. He’s fine now and playing in the next room as I write this. But it was scary.
I mention this because one of the first questions the nurses and doctors asked us was, “Has he been vaccinated?”
My father, the surgeon, likes to say that medicine is as much art as science. You can know the textbooks by heart. But the early symptoms of serious diseases and not-so-serious ones are often similar. An inflamed appendix can look like benign belly pain. Pneumonia can look like a cold. “Flu-like symptoms” can be the early phase of anything from a bad cold to ebola. But doctors mostly get it right because experience with sick people has honed their instincts. They might not be able to tell you why they know it’s not just a cold, but they can tell you (with Ben, the doctor’s instinct told him it wasn’t croup and he ordered a chest X-ray that spotted the pneumonia).
Most doctors today have never seen measles. Or mumps. Or rubella. Or polio. Or anything else we routinely vaccinate for. Thus, they haven’t built up the experience to recognize these conditions. Orac, the writer of the Respectful Insolence blog, told me of a sick child who had Hib. It was only recognized because an older doctor had seen it before.
When I told the doctors Ben had been vaccinated, their faces filled with relief. It meant that they didn’t have to think about a vast and unfamiliar terrain of diseases that are mostly eradicated. It wasn’t impossible that he had a disease he was vaccinated against — vaccines aren’t 100% effective. But it was far less likely. They could narrow their focus to a much smaller array of possibilities.
Medicine is difficult. The human body doesn’t work like it does in a textbook. You don’t punch symptoms into a computer and come up with a diagnosis. Doctors and nurses are often struggling to figure out what’s wrong with a patient let alone how to treat it. Don’t cloud the waters even further by making them have to worry about diseases they’ve never seen before.
Vaccinate. Take part in the greatest triumph in human history. Not just to finally rid ourselves of these hideous diseases but to make life much easier when someone does get sick.
So far, I have seen five of last year’s Best Picture nominees — Birdman, Boyhood, The Grand Budapest Hotel, The Imitation Game and Whiplash. I’ve also seen a few other 2014 films — Gone Girl, Guardians of the Galaxy and The Edge of Tomorrow — that rank well on IMDB. I’ll have a post at some point about all of them when I look at 2014 in film. But right now, they would all be running behind Interstellar, which I watched last night.
I try very hard to mute my hopes for movies, but I had been anticipating Interstellar since the first teaser came out. I’m glad to report that it’s yet another triumph for Nolan. The film is simply excellent. The visuals are spectacular and clear, the characters are well-developed and the minimalist score is one of Zimmer’s best so far. The ending and the resolution of the plot could be argued with, but it’s unusual for me to watch a three-hour movie in one sitting unless it’s Lord of the Rings. I definitely recommend it, especially to those who are fans of 2001 or Tree of Life.
That’s not the reason I’m writing about it though.
One of the remarkable things about Interstellar is that it works very hard to get the science right. There are a few missteps, usually for dramatic reasons. For example, the blight affecting Earth works far faster than it would in real life. The spacecraft seem to have enormous amounts of fuel for planetary landings. The astronauts don’t use probes and unmanned landers to investigate planets before landing. And, as I mentioned, the resolution of the plot ventures well into the realm of science fiction and pretty much into fantasy.
But, most of the film is beautifully accurate. The plan to save Earth (and the backup plan) is a realistic approach. Trips through the stellar systems take months or years. Spacecraft have to rotate to create gravity (including a wonderful O’Neill Cylinder). Space is silent — an aesthetic I notice is catching on in sci-fi films as directors figure out how eerie silence is. General and special relativity play huge roles in the plot. Astrophysicist Kip Thorne insisted on being as scientifically accurate as possible and it shows.
And the result is a better film. The emotional thrust of Cooper’s character arc is entirely built on the cruel tricks relativity plays on him. The resolution of Dr. Mann’s arc is built entirely on rock-solid physics, including the daring stunt Coop uses to save the day. The incredible sequences near the black hole could be taken right out of a physics textbook, including a decision that recalls The Cold Equations.
We’re seeing this idea trickle into more and more of science fiction. Battlestar Galactica had muted sounds in space. Moon has reasonably accurate scientific ideas. Her had a sound approach to AI. Serenity has a silent combat scene in space, as did, for a moment, Star Trek. Gravity has some serious issues with orbital dynamics, but much of the rest was rock solid.
I’m hoping this will continue, especially if the rumors of a Forever War movie are true. A science fiction movie doesn’t need accurate science to be good. In fact, it can throw science out the window and be great (e.g., Star Wars). But I hope that Interstellar blazes a path for more science fiction movies that are grounded, however shakily at times, in real science. This could breathe new life into a genre that’s been growing staler with every passing year.
I don’t say this as an astrophysicist (one available for consultation for any aspiring filmmakers). I say this as a movie buff. I say this as someone who loves good movies and thinks great movies can be made that show science in all its beautiful, glorious and heart-stopping accuracy.
Post Scriptum: Many of my fellow astronomers disagree with me on Interstellar, both on the quality of the film and its scientific accuracy. You can check out fellow UVa alum Phil Plait here, although note that in saying it got the science wrong, he actually got the science wrong. Pro Tip: if you’re going to say Kip Thorne got the science wrong, be sure to do your homework.
Campus sexual violence continues to be a topic of discussion, as it should be. I have a post going up on the other site about the kangaroo court system that calls itself campus justice.
But in the course of this discussion, a bunch of statistical BS has emerged. This centers on just how common sexual violence is on college campuses, with estimates ranging from the one-in-five stat that has been touted, in various forms, since the 1980’s, to a 0.2 percent rate touted in a recent op-ed.
Let’s tackle that last one first.
According to the FBI “[t]he rate of forcible rapes in 2012 was estimated at 52.9 per 100,000 female inhabitants.”
Assuming that all American women are uniformly at risk, this means the average American woman has a 0.0529 percent chance of being raped each year, or a 99.9471 percent chance of not being raped each year. That means the probability the average American woman is never raped over a 50-year period is 97.4 percent (0.999471 raised to the power 50). Over 4 years of college, it is 99.8 percent.
Thus the probability that an American woman is raped in her lifetime is 2.6 percent and in college 0.2 percent — 5 to 100 times less than the estimates broadcast by the media and public officials.
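The op-ed’s arithmetic is easy to reproduce (taking its 52.9-per-100,000 figure and its uniform-risk assumption at face value; as I argue below, those inputs are exactly the problem):

```python
annual_rate = 52.9 / 100_000       # FBI 2012 forcible-rape rate
p_safe_year = 1 - annual_rate      # chance of not being raped in a year

lifetime = 1 - p_safe_year ** 50   # op-ed's 50-year "lifetime"
college = 1 - p_safe_year ** 4     # four years of college

print(round(lifetime * 100, 1))  # 2.6 (percent)
print(round(college * 100, 1))   # 0.2 (percent)
```

So the math checks out; the problem is what gets fed into it.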
This estimate is way too low. It is based on taking one number and applying high school math to it. It misses the mark because it uses the wrong numbers and some poor assumptions.
First of all, the FBI’s stats cover only documented forcible rape; they do not account for under-reporting and do not include sexual assault. The better comparison is the National Crime Victimization Survey, which estimates about 300,000 rapes or sexual assaults in 2013, for an incidence rate of 1.1 per thousand. But even that number needs some correction because about 2/3 of sexual violence is visited upon women between the ages of 12 and 30 and about a third upon college-age women. The NCVS rate indicates about a 10% lifetime risk and about a 3% college-age risk for American women. This is lower than the 1-in-5 stat but much higher than 1-in-500.
(The NCVS survey shows a jump in sexual violence in the 2000’s. That’s not because sexual violence surged; it’s because they changed their methodology, which increased their estimates by about 20%.)
So what about 1-in-5? I’ve talked about this before, but it’s worth going over again: the one-in-five stat is almost certainly a wild overestimate:
The statistic comes from a 2007 Campus Sexual Assault study conducted by the National Institute of Justice, a division of the Justice Department. The researchers made clear that the study consisted of students from just two universities, but some politicians ignored that for their talking point, choosing instead to apply the small sample across all U.S. college campuses.
The CSA study was actually an online survey that took 15 minutes to complete, and the 5,446 undergraduate women who participated were provided a $10 Amazon gift card. Men participated too, but their answers weren’t included in the one-in-five statistic.
If 5,446 sounds like a high number, it’s not — the researchers acknowledged that it was actually a low response rate.
But a lot of those responses have to do with how the questions were worded. For example, the CSA study asked women whether they had sexual contact with someone while they were “unable to provide consent or stop what was happening because you were passed out, drugged, drunk, incapacitated or asleep?”
The survey also asked the same question “about events that you think (but are not certain) happened.”
That’s open to a lot of interpretation, as exemplified by a 2010 survey conducted by the U.S. Centers for Disease Control and Prevention, which found similar results.
I’ve talked before about the CDC study and its deep flaws. Schow points out that the victimization rate it claims is far higher than the estimates of the National Crime Victimization Survey (NCVS), the FBI and the Rape, Abuse and Incest National Network (RAINN). All three use much more rigorous data-collection methods. NCVS does interviews and asks the question straight up: have you been raped or sexually assaulted? I would trust the research methods of organizations that have been doing this for decades over a web survey of two colleges.
Another survey recently emerged from MIT claiming that 1-in-6 women are sexually assaulted. But not only does this suffer from the same flaws as the CSA study (a web survey with voluntary participation), its data don’t even support the headline claim:
When it comes to experiences of sexual assault since starting at MIT:
1 in 20 female undergraduates, 1 in 100 female graduate students, and zero male students reported being the victim of forced sexual penetration;
3 percent of female undergraduates, 1 percent of male undergraduates, and 1 percent of female grad students reported being forced to perform oral sex;
15 percent of female undergraduates, 4 percent of male undergraduates, 4 percent of female graduate students, and 1 percent of male graduate students reported having experienced “unwanted sexual touching or kissing”
All of these experiences are lumped together under the school’s definition of sexual assault.
When students were asked to define their own experiences, 10 percent of female undergraduates, 2 percent of male undergraduates, 3 percent of female graduate students, and 1 percent of male graduate students said they had been sexually assaulted since coming to MIT. One percent of female graduate students, 1 percent of male undergraduates, and 5 percent of female undergraduates said they had been raped.
Note that even with a biased study, the result is 1-in-10, not 1-in-5 or 1-in-6.
OK, so web surveys are a bad way to do this. What is a good way? Mark Perry points out that the one-in-five stat is inconsistent with another number claimed by advocates of new policies: a reporting rate of 12%. If you assume a reporting rate near that and use the actual number of reported assaults on major campuses, you get a rate of around 3%.
Further research is consistent with this rate. For example, here, we see that UT Austin has 21 reported incidents of sexual violence. That’s one in a thousand enrolled women. Texas A&M reported nine, one in three thousand women. Houston reported 11, one in 2000 women. If we are to believe the 1-in-5 stat, that’s a reporting rate of half a percent. A reporting rate of 10%, which is what most people accept, would mean … a 3-5% risk for five years of enrollment.
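The implied-rate arithmetic can be sketched directly. The figures below are the ones quoted above (roughly one report per thousand enrolled women per year, a 10% reporting rate, five years of enrollment), not fresh data:

```python
# Back out the actual victimization risk from reported incidents.
# Inputs from the text: ~1 report per 1,000 enrolled women per year
# (e.g. UT Austin: 21 reports among roughly 21,000 enrolled women).
reported_per_woman_per_year = 1 / 1000
reporting_rate = 0.10        # the commonly-accepted reporting rate
years_enrolled = 5

# If only 10% of incidents are reported, the actual per-year risk
# is ten times the reported rate.
actual_per_year = reported_per_woman_per_year / reporting_rate

# Cumulative risk over a five-year enrollment.
risk_over_enrollment = 1 - (1 - actual_per_year) ** years_enrolled
print(round(risk_over_enrollment, 3))   # ~0.049, i.e. roughly 1-in-20
```

Plugging in the 1-in-5 claim instead forces the reporting rate down to implausibly low values, which is Perry’s point.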
So … Mark Perry finds 3%. Texas schools show 3-5%. NCVS and RAINN stats indicate 2-5%. Basically, any time we use actual numbers based on objective surveys, we find the number of women who are in danger of sexual violence during their time on campus is 1-in-20, not 1-in-5.
One other reason to disbelieve the 1-in-5 stat. Sexual violence in our society is down — way down. According to the Bureau of Justice Statistics, rape has fallen from 2.5 per thousand to 0.5 per thousand, an 80% decline. The FBI’s data show a decline from 40 to about 25 per hundred thousand, nearly a 40% decline (and they don’t account for the reporting rate, which has likely risen). RAINN estimates that the rate has fallen 50% in just the last twenty years. That means 10 million fewer sexual assaults.
Yet, for some reason, sexual assault rates on campus have not fallen, at least according to the favored research. They were claiming 1-in-5 in the 80’s and they are claiming 1-in-5 now. The sexual violence rate on campus might fall a little more slowly than the overall society because campus populations aren’t aging the way the general population is and sexual violence victims are mostly under 30. But it defies belief that the huge dramatic drops in violence and sexual violence everywhere in the world would somehow not be reflected on college campuses.
Interestingly, the decline in sexual violence does appear if you polish the wax fruit a bit. The seminal Koss study of the 1980’s claimed that one-in-four women were assaulted or raped on college campuses. As Christina Hoff Sommers and Maggie McNeill pointed out, the actual rate was something like 8%. A current rate of 3-5% would indicate that sexual violence on campus has dropped in proportion to sexual violence in the broader society.
It goes without saying, of course, that 3-5% of women experiencing sexual violence during their time at college is 3-5% too many. As institutions of enlightenment (supposedly), our college campuses should be safer than the rest of society. I support efforts to clamp down on campus sexual violence, although not in the form that it is currently taking, which I will address on the other site.
But the 1-in-5 stat isn’t reality. It’s a poll-tested number: picked to be large enough to be scary but not so large as to be unbelievable. It is being used to advance an agenda that I believe will not really address the problem of sexual violence.
Numbers mean things. As I’ve argued before, if one in five women on college campuses are being sexually assaulted, this suggests a much more radical course of action than one-in-twenty. It would suggest that we should shut down every college in the country, since they would be the most dangerous places for women in the entire United States. But 1-in-20 suggests that an overhaul of campus judiciary systems, better support for victims and expulsion of serial predators would do a lot to help.
In other words, let’s keep on with the policies that have dropped sexual violence 50-80% in the last few decades.
Clearing out some old posts.
A while ago, I encountered a story on Amy Alkon’s site about a man fooled into fathering a child:
Here’s how it happened, according to Houston Press. Joe Pressil began dating his girlfriend, Anetria, in 2005. They broke up in 2007 and, three months later, she told him she was pregnant with his child. Pressil was confused, since the couple had used birth control, but a paternity test proved that he was indeed the father. So Pressil let Anetria and the boys stay at his home and he agreed to pay child support.
Fast forward to February of this year, when 36-year-old Pressil found a receipt – from a Houston sperm bank called Omni-Med Laboratories – for “cryopreservation of a sperm sample” (Pressil was listed as the patient although he had never been there). He called Omni-Med, which passed him along to its affiliated clinic Advanced Fertility. The clinic told Pressil that his “wife” had come into the clinic with his semen and they performed IVF with it, which is how Anetria got pregnant.
The big question, of course, is how exactly did Anetria obtain Pressil’s sperm without him knowing about it? Simple. She apparently saved their used condoms. Gag. (Anetria denies these claims.)
“I couldn’t believe it could be done. I was very, very devastated. I couldn’t believe that this fertility clinic could actually do this without my consent, or without my even being there,” Pressil said, adding that artificial insemination is against his religious beliefs. “That’s a violation of myself, to what I believe in, to my religion, and just to my manhood,” Pressil said.
I’ve now seen this story show up on a couple of other sites. The only links in Google are for the original claim and her denial. I can’t find out how it was resolved. But I suspect his claim was dismissed. The reason I suspect this is because his story is total bullshit.
Here’s a conversation that has never happened:
Patient: “Hi, I have this condom full of sperm. God knows how I got it or who it belongs to. Can you harvest my eggs and inject this into them?”
Doctor: “No problem!”
I’ve been through IVF (Ben was conceived naturally after two failed cycles). It is a very involved process. We had to have interviews, then get tests for venereal diseases and genetic conditions. I then had to show up and make my donation either on site or in a nearby hotel. And no, I was not allowed to bring in a condom. Condoms contain spermicides and lubricants that murder sperm, and latex is not sperm’s friend. Even in a sterile container, sperm cells don’t last very long unless they are placed in a special refrigerator. Freezing sperm is a slow process that takes place in a solution that keeps the cells from shattering from ice-crystal formation.
And that’s only the technical side of the story. There’s also the legal issue that no clinic is going to expose themselves to a potential multi-million dollar lawsuit by using the sperm of a man they don’t have a consent form from.
So, no, you can’t just have a man fill a condom, throw it in your freezer and get it injected into your eggs. It doesn’t work that way. This is why I believe the woman’s lawyer, who claims Pressil agreed to IVF and signed consent forms.
I’ve seen the frozen-sperm canard come up in TV shows and movies from time to time. It annoys me. This is something conjured up by people who haven’t done their research.
A couple of years ago, Mother Jones did a study of mass shootings which attempted to characterize these awful events. Some of their conclusions were robust — such as the finding that most mass shooters acquire their guns legally. However, their big finding — that mass shootings are on the rise — was highly suspect.
Recently, they doubled down on this, proclaiming that Harvard researchers have confirmed their analysis¹. The researchers use an interval analysis to look at the time differences between mass shootings and claim that the recent run of short intervals proves that mass shootings have tripled since 2011.²
Fundamentally, there’s nothing wrong with the analysis. But practically, there is: they have applied a sophisticated technique to suspect data. The technique does not remove the problems of the original dataset. If anything, it exacerbates them.
As I noted before, the principal problem with Mother Jones’ claim that mass shootings were increasing was the database. It had a small number of incidents and was based on media reports rather than on a complete data set pared down to a consistent sample. Incidents were left out or included based on arbitrary criteria. As a result, there may be mass shootings missing from the data, especially in the pre-internet era. This would bias the results.
And that’s why the interval analysis is problematic. Interval analysis itself is useful. I’ve used it myself on variable stars. But it has two fundamental requirements: you have to have consistent data and you have to account for potential gaps in the data.
Let’s say, for example, that I use interval analysis on my car-manufacturing company to see if we’re slowing down in our production of cars. That’s a good way of figuring out any problems. But I have to account for the days when the plant is closed and no cars are being made. Another example: let’s say I’m measuring the intervals between brightness peaks of a variable star. It will work well … if I account for those times when the telescope isn’t pointed at the star.
Their interval analysis assumes that the data are complete. But I find that suspect given the way the data were collected and the huge gaps and massive dispersion of the early intervals. The early data are all over the place, with gaps as long as 500-800 days. Are we to believe that between 1984 and 1987, a time when violent crime was surging, there was only one mass shooting? The more recent data are far more consistent, with no gap greater than 200 days (and note how the data get really consistent when Mother Jones began tracking these events as they happened, rather than relying on archived media reports).
Note that they also compare this to the average interval of 172 days. This is the basis of their claim that the rate of mass shootings has “tripled”. But the distribution of gaps is very skewed, with a long tail of long intervals. The median gap is 94 days. Using the median would reduce their run of 14 straight below-average intervals to 11 below-median intervals. It would also mean that mass shootings have increased by only 50%. Since 1999, the median is 60 days (and the average 130). Using that would reduce their run of 14 straight short intervals to four and mean that mass shootings have been basically flat.
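The mean-versus-median effect is easy to demonstrate with synthetic numbers (these intervals are illustrative, not Mother Jones’ actual data):

```python
import statistics

# A skewed interval distribution: mostly modest gaps, plus a long
# tail of very large ones, as described in the text.
gaps = [30, 45, 60, 70, 80, 90, 94, 100, 120, 150, 200, 350, 500, 800]

mean_gap = statistics.mean(gaps)      # pulled upward by the 500- and 800-day tail
median_gap = statistics.median(gaps)  # insensitive to the tail

below_mean = sum(g < mean_gap for g in gaps)
below_median = sum(g < median_gap for g in gaps)

print(round(mean_gap, 1), median_gap)  # mean ~192 days vs median 97 days
print(below_mean, below_median)        # 10 intervals fall "below average" but only 7 "below median"
```

With a long-tailed distribution, almost everything is “below average,” so a streak of below-average intervals is far less impressive than it sounds.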
The analysis I did two years ago was very simplistic — I looked at victims per year. That approach has its flaws but it has one big strength — it is less likely to be fooled by gaps in the data. Huge awful shootings dominate the number of victims and those are unlikely to have been missed in Mother Jones’ sample.
Here is what you should do if you want to do this study properly. Start with a uniform database of shootings such as those provided by law enforcement agencies. Then go through the incidents, one by one, to see which ones meet your criteria.
In Jesse Walker’s response to Mother Jones, in which he graciously quotes me at length, he notes that a study like this has been done:
The best alternative measurement that I’m aware of comes from Grant Duwe, a criminologist at the Minnesota Department of Corrections. His definition of mass public shootings does not make the various one-time exceptions and other jerry-riggings that Siegel criticizes in the Mother Jones list; he simply keeps track of mass shootings that took place in public and were not a byproduct of some other crime, such as a robbery. And rather than beginning with a search of news accounts, with all the gaps and distortions that entails, he starts with the FBI’s Supplementary Homicide Reports to find out when and where mass killings happened, then looks for news reports to fill in the details. According to Duwe, the annual number of mass public shootings declined from 1999 to 2011, spiked in 2012, then regressed to the mean.
(Walker’s article is one of those “you really should read the whole thing” things.)
This doesn’t really change anything I said two years ago. In 2012, we had an awful spate of mass shootings. But you can’t draw the kind of conclusions Mother Jones wants to from rare and awful incidents. And it really doesn’t matter what analysis technique you use.
1. That these researchers are from Harvard is apparently a big deal to Mother Jones. As one of my colleagues used to say, “Well, if Harvard says it, it must be true.”↩
2. This is less alarming than it sounds. Even if we take their analysis at face value, we’re talking about six incidents a year instead of two for a total of about 30 extra deaths or about 0.2% of this country’s murder victims or about the same number of people that are crushed to death by their furniture. We’re also talking about two years of data and a dozen total incidents.↩
When I was a graduate student, one of the big fields of study was the temperature of the cosmic microwave background. The studies were converging on a value of 2.7 degrees with increasing precision. In fact, they were converging a little too well, according to one scientist I worked with.
If you measure something like the temperature of the cosmos, you will never get precisely the right answer. There is always some uncertainty (2.7, give or take a tenth of a degree) and some bias (2.9, give or take a tenth of a degree). So the results should span a range of values consistent with what we know about the limitations of the method and the technology. This scientist claimed that the range was too small. As he said, “You get the answer. And if it’s not the answer you wanted, you smack your grad student and tell him to do it right next time.”
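His complaint can be illustrated with a toy simulation. Assuming each measurement honestly carries the quoted tenth-of-a-degree uncertainty, independent results should scatter by roughly that much; a literature that clusters far more tightly than its own error bars is suspicious:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Simulate 20 independent, unbiased measurements of a 2.7-degree
# background, each with the quoted 0.1-degree uncertainty.
true_temp = 2.7
quoted_error = 0.1
measurements = [random.gauss(true_temp, quoted_error) for _ in range(20)]

# Honest measurements should spread over a few tenths of a degree;
# a spread much smaller than the error bars would be the red flag.
spread = max(measurements) - min(measurements)
print(round(spread, 2))
```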
It’s not that people were faking the data or tilting their analysis. It’s that knowing the answer in advance can cause subtle confirmation biases. Any scientific analysis is going to have a bias — an analytical or instrumentation effect that throws off the answer. A huge amount of work is invested in ferreting out and correcting for these biases. But there is a danger when a scientist thinks he knows the answer in advance. If they are off from the consensus, they might pore through their data looking for some effect that biased the results. But if they are close, they won’t look as carefully.
Megan McArdle flags two separate instances of this in the social sciences. The first is the long-standing claim that conservatives are authoritarian while liberals are not:
Jonathan Haidt, one of my favorite social scientists, studies morality by presenting people with scenarios and asking whether what happened was wrong. Conservatives and liberals give strikingly different answers, with extreme liberals claiming to place virtually no value at all on things like group loyalty or sexual purity.
In the ultra-liberal enclave I grew up in, the liberals were at least as fiercely tribal as any small-town Republican, though to be sure, the targets were different. Many of them knew no more about the nuts and bolts of evolution and other hot-button issues than your average creationist; they believed it on authority. And when it threatened to conflict with some sacred value, such as their beliefs about gender differences, many found evolutionary principles as easy to ignore as those creationists did. It is clearly true that liberals profess a moral code that excludes concerns about loyalty, honor, purity and obedience — but over the millennia, man has professed many ideals that are mostly honored in the breach.
[Jeremy] Frimer is a researcher at the University of Winnipeg, and he decided to investigate. What he found is that liberals are actually very comfortable with authority and obedience — as long as the authorities are liberals (“should you obey an environmentalist?”). And that conservatives then became much less willing to go along with “the man in charge.”
Frimer argues that conservatives tend to support authority because they think authority is conservative; liberals tend to oppose it for the same reason. Liberal or conservative, it seems, we’re all still human under the skin.
Exactly. The deference to authority for conservatives and liberals depends on who is wielding said authority. If it’s a cop or a religious figure, conservatives tend to trust them and liberals are skeptical. If it’s a scientist or a professor, liberals tend to trust them and conservatives are rebellious.
Let me give an example. Liberals love to cite the claim that 97% of climate scientists agree that global warming is real. In fact, this week they are having 97 hours of consensus where they have 97 quotes from scientists about global warming. But what is this but an appeal to authority? I don’t care if 100% of scientists agree on global warming: they still might be wrong. If there is something wrong with the temperature data (I don’t think there is) then they are all wrong.
The thing is, that appeal to authority does capture something useful. You should accept that global warming is very likely real. But not because 97% of scientists agree. The “consensus” supporting global warming is about as interesting as the “consensus” supporting germ theory. It’s the data supporting global warming that is convincing. And when scientists fall back on the data, not their authority, I become more convinced.
If I told liberals that we should ignore Ferguson because 97% of cops think the shooting was justified, they wouldn’t say, “Oh, well that settles it.” If I said that 97% of priests agreed that God exists, they wouldn’t say, “Oh, well that settles it.” Hell, this applies even to things that aren’t terribly controversial. Liberals are more than happy to ignore the “consensus” on the unemployment effects of minimum-wage hikes or the safety of GMO crops.
I’m drifting from the point. The point is that the studies showing that conservatives are more “authoritarian” were biased. They only asked about certain authority figures, not all of them. And since this was what the mostly liberal social scientists expected, they didn’t question it. McArdle gets into this in her second article, which takes on the claim that conservative views come from “low-effort thought”, based on two small studies.
In both studies, we’re talking about differences between groups of 18 to 19 students, and again, no mention of whether the issue might be disinhibition — “I’m too busy to give my professor the ‘right’ answer, rather than the one I actually believe” — rather than “low-effort thought.”
I am reluctant to make sweeping generalizations about a very large group of people based on a single study. But I am reluctant indeed when it turns out those generalizations are based on 85 drunk people and 75 psychology students.
I do not have a scientific study to back me up, but I hope that you’ll permit me a small observation anyway: We are all of us fond of low-effort thought. Just look at what people share on Facebook and Twitter. We like studies and facts that confirm what we already believe, especially when what we believe is that we are nicer, smarter and more rational than other people. We especially like to hear that when we are engaged in some sort of bruising contest with those wicked troglodytes — say, for political and cultural control of the country we both inhabit. When we are presented with what seems to be evidence for these propositions, we don’t tend to investigate it too closely. The temptation is common to all political persuasions, and it requires a constant mustering of will to resist it.
One of these studies found that drunk students were more likely to express conservative views than sober ones and concluded that this was because it is easier to think conservatively when alcohol is inhibiting your thought process. The bias there is simply staggering. They didn’t test the students before they started drinking (heavy drinkers might skew conservative). They didn’t consider social disinhibition — which I have mentioned before in connection with studies claiming that hungry or “stupid” men like bigger breasts. This was a study designed with its conclusion in mind.
All sciences are in danger of confirmation bias. My advisor was very good about side-stepping it. When we got the answer we expected, he would say, “something is wrong here” and make us go over the data again. But the social sciences seem more subject to confirmation bias for various reasons: the answers in the social sciences are more nebulous, the biases are more subtle, the “observer effect” is more real and, frankly, some social scientists lack the statistical acumen to parse data properly (see the Hurricane study discussed earlier this year). But I also think there is an increased danger because of the immediacy of the issues. No one has a personal stake in the time-resolved behavior of an active galactic nucleus. But people have very personal stakes in politics, economics and sexism.
Megan also touches on what I’ve dubbed the Scientific Peter Principle: a study garnering enormous amounts of attention is likely erroneous. The reason is that when you do something wrong in a study, it will usually manifest as a false result, not a null result. Null results are usually the result of doing your research right, not doing it wrong. Take the sexist-hurricane study from earlier this year. Had the scientists done their research correctly (limiting their data to post-1978 or doing a K-S test), they would have found no connection between the femininity of hurricane names and their deadliness. As a result, we would never have heard about it. In fact, other scientists may have already done that analysis and either not bothered to publish it or published it quietly.
But because they did their analysis wrong — assigning an index to the names, sub-sampling the data only in ways that supported the hypothesis — they got a result. And because they had a surprising result, they got publicity.
This happens quite a bit. The CDC got lots of headlines when they exaggerated the number of obesity deaths by a factor of 14. Scottish researchers got attention when they erroneously claimed that smoking bans were saving lives. The EPA got headlines when they deliberately biased their analysis to claim that second-hand smoke was killing thousands.
Cognitive bias, in combination with the Scientific Peter Principle, is incredibly dangerous.
There’s been a kerfuffle recently about a supposed CDC whistleblower who has revealed malfeasance in the primary CDC study that refuted the connection between vaccines and autism. Let’s put aside that the now-retracted Lancet study the anti-vaxxers tout as the smoking gun was a complete fraud. Let’s put aside that other studies have reached the same conclusion. Let’s just address the allegations at hand, which include a supposed cover up. These allegations are in a published paper (now under further review) and a truly revolting video from Andrew Wakefield — the disgraced author of the fraudulent Lancet study that set off this mess — that compares this “cover-up” to the Tuskegee experiments.
According to the whistle-blower, his analysis shows that while most children do not have an increased risk of autism (which, incidentally, discredits Wakefield’s study), black males vaccinated before 36 months show a 240% increased risk (not 340, as has been claimed). You can catch the latest from Orac. Here’s the most important part:
So is Hooker’s result valid? Was there really a 3.36-fold increased risk for autism in African-American males who received MMR vaccination before the age of 36 months in this dataset? Who knows? Hooker analyzed a dataset collected to be analyzed by a case-control method using a cohort design. Then he did multiple subset analyses, which, of course, are prone to false positives. As we also say, if you slice and dice the evidence more and more finely, eventually you will find apparent correlations that might or might not be real.
In other words, what he did was slice and dice the sample to see if one of those slices would show a correlation. But by pure chance, one of those slices could show a correlation even if there wasn’t one. As best illustrated in this cartoon, if you run twenty tests for something that has no correlation, the odds are that at least one of them will show a spurious correlation at the 95% confidence level. This is one of the reasons many scientists, especially geneticists, are turning to Bayesian analysis, which can account for this.
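The multiple-comparisons arithmetic is worth spelling out. A minimal sketch (the 95% confidence level is from the text; treating the subgroup tests as independent is a simplifying assumption):

```python
# Chance of at least one spurious "significant" result when running
# n independent tests on data with no real correlation.
alpha = 0.05  # false-positive rate of a single test at 95% confidence

for n_tests in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - alpha) ** n_tests
    print(n_tests, round(p_at_least_one, 2))

# With 20 subgroup tests, the chance of at least one false positive
# is about 64% -- better than a coin flip.
```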
If you did a study of just a few African-American boys and found a connection between vaccination and autism, it would be the sort of preliminary shaky result you would use to justify looking at a larger sample … such as the full CDC study that the crackpot’s own analysis shows refutes such a connection. To take a large comprehensive study, narrow it down to a small sample and then claim that the results of the small sample override those of the large one is ridiculous. It’s the opposite of how epidemiology works (and there is no suggestion that there is something about African-American males that makes them more susceptible to vaccine-induced autism).
This sort of ridiculous cherry-picking happens a lot, mostly in political contexts. Education reformers will pore over test results until they find that fifth graders slightly improved their reading scores and claim their reform is working. When the scores revert back the next year, they ignore it. Drug warriors will pore over drug stats and claim that a small drop in heroin use among people in their 20’s indicates that the War on Drugs is finally working. When it reverts back to normal, they ignore it.
You can’t pick and choose little bits of data to support your theory. You have to be able to account for all of it. And you have to be aware of how often spurious results pop up even in the most objective and well-designed studies, especially when you parse the data finer and finer.
But the anti-vaxxers don’t care about that. What they care about is proving that evil vaccines and Big Pharma are poisoning us. And however they have to torture the data to get there, that’s what they’ll do.