The Imitation Game

April 16th, 2015

When I went through my year-by-year breakdown of the Oscars, I said this:

Here’s the thing that strikes me about the last 35 years of film history. Over that span, the IMDB ratings for individual years have become populated by a much broader variety of films than ever before. Surprisingly, traditional Oscar fare does well. For all the lashing IMDB gets, great films are popular there. IMDB loves Scorsese, loves Kubrick and loves good film. The difference is the variety — IMDB also loves foreign films, action films, art films and animated films. These are things the Academy tends to ignore as they pick the same shit every year. Oscar bait has pretty much become its own genre. In other words, the problem is not so much that the Academy has regressed, it’s that they haven’t kept up.

If there is a movie from 2014 that defines “Oscar bait”, it’s The Imitation Game. It’s a good film and I recommend it, especially if you’re over 60. The directing is solid. The acting is superb (although anytime I saw Charles Dance, I thought, ‘don’t help him! It’s Tywin Lannister!’). The script is fine. But it has a flaw in that it screams “give me an Oscar!” at 100 decibels. At times, it’s like watching a movie by a precocious 16-year-old: “Look how good this movie is! Isn’t Benedict Cumberbatch awesome! Look, Keira Knightley! You love Keira Knightley! Conflict! Homophobia! Sacrifice!”

The Imitation Game doesn’t have the confidence to be its own movie. Instead, it’s a descendant of A Beautiful Mind. Instead of portraying Alan Turing as the eccentric but perfectly sociable person he was, they make him an autistic savant, not far removed from Russell Crowe’s John Nash. Instead of showing the years-long struggle to break Enigma, we get the sudden breakthrough from picking up girls in a bar, a scene so similar to the breakthrough scene of A Beautiful Mind, it’s almost insulting (and stupid; the technique that “breaks” Enigma in the movie is cryptology 101). It’s got plenty of good dialogue and the characters are well-defined. But the plot is surprisingly weak.

(Spoiler Warning: The worst part of the film, for me, is when Turing’s team starts holding back information so that the Nazis won’t know the code has been broken. That decision was real but it was made way higher up the chain of command. Portraying it as Turing’s decision was so unrealistic it jarred me out of the film. I think it would have been better for him to see how little of his intelligence was used and get frustrated.)

As I said, the movie’s fine. I give it a 7/10. It’s worth a rental. I was happy for Graham Moore when he won the Oscar and his acceptance speech was amazing. But like a lot of movies, The Imitation Game is a shadowy reflection of an even better movie: one that tries less hard to win an Oscar and, ironically, would have been a better candidate for one. I hate to go all nerdboy, but I think sticking closer to history (a years-long struggle against Enigma, Turing as the eccentric but brilliant leader of a massive team, the decision of when to use Enigma decrypts made higher up the chain) would have made for a better and more satisfying film.

As it is, this one will likely be forgotten in a few years. Just another piece of Oscar bait. That’s a pity. Cumberbatch, Tyldum and, to some extent, Moore, deserve better.

Exploring What Now?

April 1st, 2015

This is kind of … odd.

At a recent conference on sex trafficking in Orlando, Florida, members of a panel warned attendees about the dangers of space exploration, saying it would be a gold mine for future sex traffickers.

“Space is going to be like a frontier town,” said Nicholas Kristoff, who chaired the panel on Sex Trafficking: the Long Term View. “There will be no law enforcement in space, which means that girls, some as young as 11, could be easily trafficked to colonies on Mars or in the asteroid belt where they would have to service up to 50 men a day in zero gravity.”

Asked for comment by e-mail, Julie Bindel expressed support for a ban on space exploration, noting that science fiction films have frequently depicted prostitution in space. She derided series like Firefly for presenting an unrealistic and unrepresentative model of future sex work. She noted that the film Total Recall featured numerous prostitutes including one with three breasts. “No little girl grows up wanting to be a triple-breasted Martian prostitute.” She further attributed the recent push for space exploration, particularly the Mars One TV show, to efforts by the “pimp lobby” to create a completely new market for sex outside of the bounds of law enforcement.

After the panel, the organizers pointed to a recent study by Dominique Roe Sepowitz and the Office of Sex Trafficking Intervention Research which claimed that ads for interplanetary prostitution have increased 200% in the last year alone and presented evidence that shuttle launches are associated with major increases in sex trafficking. “We have good evidence that women and girls were trafficked into Cape Kennedy during the Apollo program as well,” she said. “Everywhere there is a rocket launch, there is sex trafficking.”

Such opposition is not new. In her seminal book Intercourse, feminist icon Andrea Dworkin noted that rockets have a phallic shape. “The push for more space exploration is clearly an effort to thrust these phallic rockets into the universe’s unconsenting vagina.” She advocated for an “enthusiastic consent” standard from other planets before further human exploration.

I don’t even know what to say about this. We haven’t even gotten a man to the moon in 40 years and we have people worried about the future of sex in space. Takes all types, I guess.

Update: This post was part of Maggie’s April Fool.

The Princess Bride, at 28

March 27th, 2015

One of the great pleasures of being a dad is introducing my kids to the things I like, especially movies. About a year ago, I showed Abby Star Wars for the first time1. And just recently I showed her The Princess Bride.

I hadn’t seen Bride for a very long time because I got kind of sick of it for a while. Seeing it after so long, I had a few thoughts on why I would now regard it as a clear classic and easily the best film of 1987.

  • The action scenes in The Princess Bride are few, but they are remarkably well done. There is a clarity and a flow that is missing from a lot of modern action scenes. It’s obvious what’s going on and what’s at stake. This really jumped out at me when I was watching it with Abby. During the duel between Inigo and Westley, she actually gasped when Inigo was backed up toward the cliff. It hit me that she understood the terrain and the danger Inigo was being forced into. Many modern action films have no such clarity. You wouldn’t see the cliff until he fell over it in slo-motion CGI and then turned around in midair to jump over Westley.
  • The moral difference between Westley and Humperdinck is critical to the resolution of the plot. Westley wins only because he showed mercy in sparing Fezzik and Inigo. By contrast, Humperdinck’s cruelty and cowardice drive Miracle Max to enable his defeat, leave him with few loyal subjects except the unreliable Rugen, and turn Buttercup against him.

    I once saw an interview with Trey Parker and Matt Stone where they talked about plot. They said that a bad plot is just a series of events — X happens, then Y happens, then Z happens. A good plot flows from what has happened before: Y happens because X happened, which in turn causes Z to happen. The fates of the characters in The Princess Bride do not turn on strange coincidences and kick-ass karate moves; they turn on their character and the decisions they make.

  • It has since disappeared from the internet, but Joe Posnanski once wrote a great piece about the decline of Rob Reiner’s directorial career. Here are the movies Reiner has directed, with IMDB ratings and my comments:

    This is Spinal Tap (8.0) – Regarded as a classic comedy. And is.

    The Sure Thing (7.0) – I haven’t seen this but have heard good things.

    Stand By Me (8.1) – Good adaptation of King novella.

    The Princess Bride (8.2, #183 on top movies of all time) – Recognized as a classic.

    When Harry Met Sally … (7.6) – Very good romantic comedy.

    Misery (7.8) – Excellent thriller. Made Kathy Bates a household name.

    A Few Good Men (7.6) – Very good film. One of my dad’s favorites.

    North (4.4) – Ouch. This is where it all seemed to go wrong. Roger Ebert famously said he “hated, hated, hated” this movie and said that Reiner would recover from it faster than Ebert would. He was wrong. After making seven straight good to great movies, Reiner would never make another great movie.

    The American President (6.8) – Haven’t seen it; never will.

    Ghosts of Mississippi (6.6) – How do you put James Woods in a movie about civil rights and come out mediocre?

    The Story of Us (5.9) – Haven’t seen it; never will. This was the impetus behind Posnanski’s post: he hated The Story of Us, especially as he went in anticipating a good movie.

    Alex and Emma (5.5) – Haven’t seen it; never will.

    Rumor Has It … (5.5) – There was some noise at the time that this represented a return to form for Reiner. It didn’t.

    The Bucket List (7.4) – This IMDB rating seems weird to me. It was savaged by critics. But it did make some money and people seemed to like it, despite its bullshit.

    Flipped (7.7) – This must be IMDB’s recency bias.

    The Magic of Belle Isle (7.0) – This is a very low rating for a recent movie starring Morgan Freeman. That’s three straight 7s. We’re not back to the days when Reiner was producing minor classics. And given IMDB’s bias on recent movies, I would take this with a grain of salt. Still, it suggested he might be recovering.

    And So It Goes (5.5) – Or maybe not.

    No film has returned to Reiner’s early form. No film has even gotten close. No one goes online and says, “Hey, there’s a new Rob Reiner film coming out!” Posnanski likened Reiner’s decline to a great young baseball player who seems headed for the Hall of Fame based on his first few years but suddenly forgets how to play at age 26. Reiner has rebounded a bit but he’s now a utility player and pinch hitter. I don’t think he’ll ever recover the form he had in his first few films. And that’s a pity, because his early films were great.

  • I’m not overly fond of the Bechdel test, but it is worth noting that Robin Wright is almost the only woman in the cast. The Ancient Booer, however, was pretty awesome (the actress died last year at the ripe age of 91). As was Carol Kane.
  • Bride is one of the first movies I can remember that became a hit on home video. It got good reviews. It got a few token award nominations (Oscar for Best Song; WGA for Best Screenplay). But it only did OK box office business. I didn’t see it in the theaters. But then it became a big hit on home video and became a cult classic and then a classic, full stop.

    Looking back on it, the lack of attention the film received was kind of embarrassing and a big demonstration of the blind spot award givers have for both comedy and fantasy. Baby Boom, Broadcast News, Dirty Dancing and Moonstruck were the Golden Globe nominees for Best Comedy. Bride was probably better than all four (although News and Moonstruck were and are well-regarded). Best Picture nominees were The Last Emperor, Fatal Attraction, Broadcast News, Hope and Glory and Moonstruck. That’s not an unreasonable slate but Bride has outlasted all of them. Most silly is the lack of a nomination for Best Screenplay: Bride is generally and correctly regarded as having a classic screenplay, certainly better than the five nominees from that year.

  • Roger Ebert defined a “family film” as one that appeals to both kids and adults. Bride is definitely that. It worked for Abby as a straight-up adventure tale with a beautiful princess, a handsome rogue and an evil prince. But for me (and her, as she gets older), the sly comedy is the selling point. It has affection for the material it mocks. That affection is the key difference between making a funny send-up of fairy tales and an unfunny one.
  • It’s been 28 years and I still get goosebumps when Inigo confronts Count Rugen.
  • Anyway, in a few years I’ll get to introduce Ben to the movie, which will allow me to experience it for the first time all over again.

    In keeping with an earlier post, I showed her the movies in the order of IV, V, I, II, III, VI. I was stunned at how well this worked. It massively improves the prequel trilogy, making the parallels to the original trilogy stronger. And it moves the reveal of Leia to Episode III, where it is much better done than in Episode VI. I hesitated on showing Episode III to Abby because of the violence, but she bore it well. The violence didn’t bother her as much as the psychological trauma of seeing Anakin fall to evil. Anyway, I highly recommend this order if you have any good opinion of the prequel trilogy (and maybe even if you don’t).

    Whither the Heavyweights?

    February 17th, 2015

    Joe Posnanski has a great post up on the subject of Mike Tyson and Tiger Woods. His argument, as far as Tyson goes, is that Tyson was over-rated as a fighter. Tyson could beat the hell out of lesser opponents and make it look absurdly easy. But against better opponents, he was frequently not only beaten but beaten badly. I’ll let you read Joe instead of excerpting because it’s one of those “you should read the whole thing” deals.

    Here’s the thing though. Maybe I’m out of touch, but it seems to me that Tyson was the last heavyweight champion that really captured the public imagination. Oh, there have been popular heavyweights since — Holyfield, Lewis, Jones. But they weren’t Iron Mike. They weren’t household names. They weren’t the subject of landmark video games. And I doubt they’ll be making cameo appearances in movies 20 years from now.

    For a while, Tyson was beloved. He had a great story and a winning smile and just destroyed people in the ring. I think Will Smith put it best: people didn’t just want Tyson to win at boxing; they wanted him to win at life. And when he got into trouble — when he created trouble for himself — it was heart-breaking.

    But Tyson was the last in a string of boxing champions that had captured the public’s imagination, from Sullivan to Braddock to Marciano to Ali (especially Ali) to Frazier to Foreman. These men defined the sport. The current champion — whom I had to look up — isn’t in that class. I don’t think anyone really has been since Tyson.

    Maybe we’re in an interim, waiting for the next fighter who will grab the American people’s attention. But I actually think that boxing’s day has simply passed. It’s a bit too violent, a bit too sensational, a bit too shaky for modern America. Team sports have taken over. It still makes money and has some cachet. But I don’t see it ever returning to its glory days.

    Parity Returns to College Football?

    January 13th, 2015

    So another College Football Season is done. Time to revisit my Bowl Championship System:

    A few years ago, I invented my own Bowl Championship Points system in response to the Bowl Championship Cup. You can read all about it here, including my now hilarious prediction that the 2013 national title game would be a close matchup. The basic idea is that the Championship Cup was silly, as evidenced by ESPN abandoning it. It decides which conference “won” the bowl season by straight win percentage among conferences with three or more bowls. So it is almost always won by a mid-major conference that wins three or four bowls. The Mountain West has claimed five of them, usually on the back of a 4-2 or 3-1 record.

    My system awards points to conferences that play in a lot of bowls and a lot of BCS bowls. As such, it is possible for a mid-major to win, but they have to have a great year. The Mountain West won in 2010-2011, when they won four bowls including a BCS game. But it will usually go to a major conference.

    Here are the winners of the Bowl Championship Points system for the time I’ve been keeping it.

    1998-1999: Big Ten (12 points, 5-0, 2 BCS wins)
    1999-2000: Big Ten (10 points, 5-2, 2 BCS wins)
    2000-2001: Big East (8 points, 4-1, 1 BCS win)
    2001-2002: SEC (9 points, 5-3, 2 BCS wins)
    2002-2003: Big Ten (9 points, 5-2, 1 BCS win)
    2003-2004: ACC/SEC (9 points each)
    2004-2005: Big 12 (6 points, 4-3, 1 BCS win)
    2005-2006: Big 12 (8 points, 5-3, 1 BCS win)
    2006-2007: Big East/SEC (11 points each)*
    2007-2008: SEC (14 points, 7-2, 2 BCS wins)
    2008-2009: SEC/Pac 12 (11 points each)**
    2009-2010: SEC (10 points, 6-4, 2 BCS wins)
    2010-2011: Mountain West (8 points, 4-1, 1 BCS win)
    2011-2012: Big 12 (11 points, 6-2, 1 BCS Win)
    2012-2013: SEC (10 points, 6-3, 1 BCS win)
    2013-2014: SEC (11 points, 7-3, 0 BCS wins)

    (*In 2006-7, the Big East went 5-0 in bowls. But the SEC went 6-3, with two BCS wins and a national title. To my mind, that was equally impressive.)

    (**In 2008-9, the Pac 12 went 5-0 in bowls. But the SEC went 6-2, with a BCS win and a national title. Again, depth is important to winning the points system.)
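    The linked post has the full rules, but you can reverse-engineer the scoring from the table: two points per bowl win, minus one per loss, plus a bonus point per BCS win reproduces every total above. Here is a quick sketch in Python (the scoring function is my reconstruction from the table, not necessarily the exact formula from the original post):

    ```python
    # Reconstructed Bowl Championship Points scoring: +2 per bowl win,
    # -1 per bowl loss, +1 bonus per BCS win. Inferred from the table
    # above, not taken verbatim from the original post.
    def bowl_points(wins: int, losses: int, bcs_wins: int) -> int:
        return 2 * wins - losses + bcs_wins

    # Spot-check against a few seasons from the table.
    seasons = {
        "1998-1999 Big Ten":       (5, 0, 2),  # listed as 12 points
        "2007-2008 SEC":           (7, 2, 2),  # listed as 14 points
        "2010-2011 Mountain West": (4, 1, 1),  # listed as 8 points
        "2013-2014 SEC":           (7, 3, 0),  # listed as 11 points
    }
    for label, (w, l, b) in seasons.items():
        print(f"{label}: {bowl_points(w, l, b)} points")
    ```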

    I have long been saying that the SEC’s dominance was waning, based on the points system. They had a good year last year, but their performance had slowly been declining from its 2008 peak. And to the extent that the SEC did dominate, it was a result of being one of the only conferences that played defense, not “SEC speed”. Last year, I saw the Pac 12 rising and predicted we were moving toward two super-conferences — the SEC and the Pac 12 — dominating the college football scene. But this year, the Big Ten moved into the discussion. In retrospect, that’s not surprising given that two of their best bowl teams were able to play again.

    So who wins for 2014? Based on the points, the title is split between the Big 10 and the Pac 12. The Pac 12 went 6-3 with one playoff win. The Big 10 went 6-5 with three playoff wins. As a tie-breaker, I’m perfectly willing to give the title to the Big 10 based on Ohio State winning the championship. While they were barely above .500, I think the outstanding performance of their top teams is more impressive than Conference USA’s 4-1 performance in lesser bowls, which would have won the Bowl Championship Cup.

    But what really jumps out this year is the parity. The SEC went 7-5 for nine points. Conference USA went 4-1 for seven points. For the first time since 2010-11, no conference had negative points. I think it’s safe to say that the Big Ten is back and can now claim, along with the SEC and Pac 12, to be one of the best conferences in the country. That’s good for the Big Ten. But I also think it’s good for college football. We’re better off when the game is competitive.

    Toys, Kids and the Crisis of Abundance

    January 2nd, 2015

    One of the reasons I like having kids is the toys. Not because I like to play with them (although I do), but because there is nothing in the world quite like the look a child gets in their eyes when they get a toy, especially an unexpected and delightful one. When Abby was about three months old, I brought home a teething ring and a rattle. She was sitting in her car seat in the kitchen and saw me and her little eyes lit up. She just knew it was something for her. And every now and then, I’ll see that same delight.

    This week, however, I’ve been in one of my moods. Not a bad mood but a mood that makes me clean up the entire house from top to bottom. In doing so, I filled two huge garbage bags with nothing but crap. Papers with drawings on them, little toys from kids’ parties, Happy Meals and giveaways, and $1 trinkets that she simply had to have. And I don’t think my child is that unusual in that regard. It seems that every parent’s house is filling with these little pieces of crap. You have to dump it regularly or you’ll be overwhelmed.

    For children in the US — at least in the middle class and above — toys are no longer this rare and wonderful treat. They’re something they get on a regular basis, something they expect to see. Oh, they’ll still have delight when a really good one comes along. But it makes me indescribably sad to see these dozens of little toys, unwanted and unloved, to see the few minutes of happiness she got out of them before casting them aside. And I know that the same thing will happen with my son.

    (Interestingly, I think she feels the same way. There are toys she hasn’t played with in ages but I have to sneak them out of the house because she doesn’t want to part with them.)

    I also can’t help but think of the long-term impact. I’m no radical environmentalist, but it pains me to think of the resources and energy spent making millions of McDonald’s Teenage Mutant Ninja Turtle toys that will just end up in landfills, that will bring very little real joy to the world.

    It’s just another aspect of our crisis of abundance. We’re so rich and things are so cheap that they no longer have any value.

    The 2015 HOF Class

    December 30th, 2014

    Baseball Think Factory is compiling publicly released Hall of Fame ballots to get an idea of how this year’s balloting will go. You can check here to see how well their Ballot Collecting Gizmo did last year when compared to the final vote.

    Just to get this out of the way, I think publicly releasing Hall of Fame votes is a great idea and should be actively encouraged by the BBWAA and the Hall. When writers have to publicly defend their votes, you get much more thoughtful results (the odd Murray Chass aside — and at least he provides exercise for your neck muscles). Look at that second link and compare the public and private ballots. The difference is quite noticeable. For example, 99.5% of those who publicly released their ballots voted for Greg Maddux. This makes sense, since he was one of the greatest pitchers of all time. But only 95.9% of the private ballots did. It’s a small difference, but it shows the effect of accountability. It is much easier to vote against Maddux because of his era or some dim-witted “no one should get in on the first ballot” logic when you don’t have to defend that attitude in public.

    Note that almost every player did better on the public ballots than the private ones except a few like Don Mattingly and Lee Smith. I think this is actually a generational thing: older writers not wanting to throw their ballots out to the internet wolves and also favoring older players.

    Looking at BBTF, it looks like Johnson, Martinez and Smoltz will get in this year. Biggio is doing even better than last year, when he fell two votes shy, but I would still hesitate to say he’ll make it. Piazza is currently polling at 77.8%, which means he will likely not make it, as the gap between his public and private numbers was very large last year, probably due to unsubstantiated PED rumors. Bagwell, Raines, Schilling and Mussina look likely to take small steps forward.

    What’s interesting, however, is that this looks to be the year we will see the big purge of the ballot that the HOF has clearly been wanting. One problem the HOF ballot has had in recent years is a super-abundance of candidates. Joe Posnanski recently commented that he regarded Fred McGriff as a marginal HOFer and had him 17th on his ballot. There are ways to improve the process, including Bill James’s recent suggestion. But I think we’re going to see the glut of candidates finally shrink this year. Why?

    At least three and possibly four men will get inducted. Don Mattingly will drop off the ballot as his time expires. And looking at the votes and considering how the private balloting has gone, it is quite possible that Sammy Sosa, Mark McGwire and Gary Sheffield will also drop off the ballot. In fact, of the new arrivals, it’s possible that none will be on the ballot again next year. That’s a big reduction in the backlog.

    Next year will see Ken Griffey Jr. and Trevor Hoffman probably voted in on the first ballot. It’s possible Jim Edmonds or Billy Wagner will linger around. But that will crack the door open for Bagwell, Raines, Schilling and/or Mussina. Then in 2017, we’ll see Pudge Rodriguez (in on the first ballot), Vladimir Guerrero (in after a few years) and Manny Ramirez (excluded by steroid allegations). That will keep the door open. Then things get interesting again in 2018.

    In short, the storm has passed and the Hall has apparently rendered its judgment on the PED era. Pitchers are in. Great players without specific allegations are in. Palmeiro, McGwire, Sosa and Sheffield are out. Bonds and Clemens are in limbo but almost certainly will not make it before their eligibility expires.

    I think the Hall will have to go back and address the steroid era again, especially once they find out that one (or likely several) current HOFers used steroids. It’s going to be difficult to have a Hall without Bonds, Clemens, Sosa, McGwire, A. Rodriguez, Palmeiro, Sheffield and Manny Ramirez. But I think it will be at least a decade before we get there. The hysteria over PEDs is waning. But it’s not over yet.

    The End of the Era

    December 20th, 2014

    It’s my blog. I can vent if I feel the need.

    On October 21, 1983, the Atlanta Braves’ effort to become a serious team ended for almost a decade. On that day, the Braves completed a trade made two months earlier for Cleveland Indians pitcher Len Barker. Going to the Indians was Brook Jacoby, a young third baseman who would nail down the hot corner in Cleveland for a decade, go to a couple of All-Star Games and tally over a thousand hits and a hundred home runs. In their defense, the Braves thought they were set at third base with Bob Horner, who had already smashed 158 home runs through age 25 and looked like a future Hall of Famer. There was no way to know that Horner would be out of baseball by 30 due to injuries.

    But the real prize for the Indians was Brett Butler, Atlanta’s excellent and popular center fielder. Butler was a strong leadoff man who put up a .344 OBP and swiped 39 bases. He would go on to become one of the best leadoff men in history, a borderline HOF candidate who smashed 2375 hits, stole 558 bases and had a lifetime .377 OBP. He was a great player. It was obvious to everyone that he would at least become a good player and score a ton of runs hitting in front of Dale Murphy and Bob Horner. But the Braves traded him for Len Barker because … I guess … Barker had thrown a perfect game. Barker would go 10-20 in 232.1 innings with a 4.64 ERA. That was over three years, not one. He would be out of baseball within four years.

    The Barker-Butler trade is well-known as one of the worst in history. But it was more than just a bad trade. For the Braves, it was the end of an era. In 1982, the Braves had one of their best seasons, winning 89 games to take the division, then losing the NLCS to the Cardinals. In 1983, they won 88 games but a late-season collapse let the Dodgers win the division. With Joe Torre at the helm and a team that included Dale Murphy, Bob Horner, Glenn Hubbard, Phil Niekro — all great players — and some young pitching, they looked poised to turn around “Loserville” as Atlanta was known (and, to some extent, still is). They looked like they would become the first team from Atlanta, in any sport, to become a serious presence.

    But the next year, they fell to 80 wins. Then Horner got hurt and went to Japan. Torre got fired. Niekro got traded. Brad Komminsk flopped. The farm system imploded. And the Braves returned to being one of the worst teams in baseball.

    This was why 1991 was not only a miracle year, it was one of the great miracle years in sports. The Braves didn’t just go worst-to-first and come within a Lonnie Smith hesitation of a championship. They went from a truly terrible team, a nothing on the sports radar, to a dynasty. They were good, they were young and they were run by two great men who knew what they were doing.

    And the result was one of the great runs in sports history: 14 straight division titles, five pennants and a championship. An average of 98 wins per season. Four players — Chipper Jones, Greg Maddux, Tom Glavine and John Smoltz — are in the Hall of Fame or soon will be. A few more — Fred McGriff, Javy Lopez — have borderline cases. Still more were just great damned players. Their manager is in the Hall of Fame and you could make an argument for their General Manager and their Pitching Coach. It was an amazing time to be a Braves fan. You turned on the TV and knew you were watching a great team that would usually win. If they fell behind in the standings, you knew it was only a matter of time until they would catch up. It was a joy to turn on TBS and watch them dominate. The “Braves Way” was a real thing: great pitching, great defense, timely hitting.

    The thing is that the Braves weren’t just a great team, they were a smart team. They developed great prospects (Lopez, Marcus Giles, Rafael Furcal, Chipper Jones, Ryan Klesko, Andruw Jones, David Justice), they traded for great players (Fred McGriff especially), they signed impact free agents (Greg Maddux, Andres Galarraga). They had a great major league team and a great farm system. If someone got injured or left via free agency, they had the depth to replace them. Year after year, everything they touched was gold.

    That era has long been over, as exemplified by this summer’s capstone — the induction of Maddux, Glavine and Cox into the Hall of Fame. But now we see we are back to the bad old days. It turns out that capstone was also a gravestone:

    Here Lieth the Braves Dynasty: 1991-2005

    Last year, I thought maybe the good days were back after almost a decade of middling shuffling semi-contention. They won 96 games, took the division and looked like a team poised for a multi-year run. True, they had albatross contracts in Dan Uggla and BJ Upton. But they had a slew of great young players — Freddie Freeman, Evan Gattis, Jason Heyward, Justin Upton, Andrelton Simmons, Julio Teheran, Craig Kimbrel, Mike Minor, Kris Medlen. They’d signed a number of them to long-term contracts.

    But it was more than just that. The Braves were fun to watch again. I looked forward to every game and would watch them on mlb.tv while messaging my brother. It felt like 1991 all over again, like we were returning to the good old days.

    What a difference a year makes. The Braves had a lousy 2014 season, with the bats completely collapsing and several of their young pitchers getting hurt. They finished under .500 and looked terrible the last few months. I couldn’t watch them, it was so maddening.

    But as disappointing as the season was, there were still reasons for optimism. They had one of the best pitching staffs in the league. Their defense was very good. They still had the young core that had looked so promising a year earlier. A change of hitting coach (or maybe manager) and they looked good to bounce back in 2015 and fulfill their destiny as the next Braves dynasty.

    Well, that apparently wasn’t good enough. A month ago, they traded away Jason Heyward — a 24 y/o Atlanta native and one of the best players on the team — for a disappointing pitcher from the Cardinals. They traded Tommy La Stella, one of their few prospects who could get on base, for an oft-injured former pitching prospect. Yesterday, they traded Justin Upton, their second best player, for some minor league prospects, the best of which is a disappointing first-round pitcher coming off arm surgery. The rumor is that they’re accumulating capital to make some major plays in the international market. I’m dubious. I don’t see Liberty Media — the cheapskate owners who wrecked the dynasty — shelling out for the top-tier talent.

    The Heyward trade was the watershed for me — an awful echo of the Len Barker trade. The Braves traded away their most popular player — a young talent who is still years away from his prime — for the ultimate bag of magic beans: a young pitcher. And the language surrounding the trade was even more disheartening. The Braves talked about “years of control” — i.e., how many more years they have before the players reach free agency. They talked about how they’re building for 2017, when their stadium opens. They talked about how they were trying to get out from under some bad contracts.

    I understand the theory behind all that. The problem is that these are the things said by loser organizations. Loser organizations are always rebuilding, always aiming to contend a few years from now, always worried about years-of-control and payroll implications. Smart teams worry about those things too but they also know how to hold onto their best players and how to build a team that will contend, full stop, not just in some nebulous future window. They don’t trade away almost all of their on-base skills for minor league scraps and pitchers with injury risks. They don’t trade away talented young players and sign older less-talented players to replace them. They don’t look at the team that kept runs off the board better than almost anyone last year but couldn’t string three hits together and think their real need is mawr pitching.

    The Braves aren’t some ancient team at the end of a great run trading away their aging stars. They were one of the youngest teams in the majors with some of their best players locked up long term. This isn’t the Red Sox rebuilding when their stars all aged overnight. This is like the Royals tearing up their young team two years before those players took them to the World Series.

    (And it’s made worse by the signing of Nick Markakis to a 4-year deal. Markakis is six years older than Heyward. He’s four years older than Upton. And he’s not nearly as good as either of them. The Braves outfield has gotten older while shedding all of its on-base skills, all of its power and all of its defense. This is not how you build a team that will contend three years from now. This is how you become the Marlins.)

    Looking at the destruction of a good team, the trading away of good young players for scraps, the obsession over payroll (for an organization awash in money), I can’t help but think of the bad old days when the Braves would trade away Brett Butler and sign Ken Oberkfell, when they’d break Pascual Perez and trade for Danny Heep or Ozzie Virgil, when they talked excitedly about potentially signing the remnants of an aging Jim Rice. Yesterday’s Upton trade simply confirmed my suspicions. The Braves are no longer a serious organization. They had a team that could have contended when they opened their new stadium. Now they don’t.

    I’m probably being overly bitter and pessimistic. But I’m dubious that this team will contend anytime in the next five years and I’m certain they will not approach anything like a dynasty as long as Liberty Media are in charge. They’re simply too cheap and too stupid to build the kind of powerhouse they used to be known for.

    No, we’re heading back to the bad old days when the Braves were the joke of the National League. And with the Hawks still unserious and the Falcons “contending” at 5-9, I fear that the days of Loserville have returned.

    Addendum: Braves’ apologists are saying this team couldn’t afford to keep Upton and Heyward. This is garbage. Uggla’s contract comes off the books next year. And the Braves’ organization has a revenue stream of $253 million. They could easily pay those two outfielders $40 million a year and not break a sweat. This is just an excuse from a cheapskate owner.

    How Many Women?

    November 1st, 2014

    Campus sexual violence continues to be a topic of discussion, as it should be. I have a post going up on the other site about the kangaroo court system that calls itself campus justice.

    But in the course of this discussion, a bunch of statistical BS has emerged. This centers on just how common sexual violence is on college campuses, with estimates ranging from the one-in-five stat that has been touted, in various forms, since the 1980s, to a 0.2 percent rate touted in a recent op-ed.

    Let’s tackle that last one first.

    According to the FBI “[t]he rate of forcible rapes in 2012 was estimated at 52.9 per 100,000 female inhabitants.”

    Assuming that all American women are uniformly at risk, this means the average American woman has a 0.0529 percent chance of being raped each year, or a 99.9471 percent chance of not being raped each year. That means the probability the average American woman is never raped over a 50-year period is 97.4 percent (0.999471 raised to the power 50). Over 4 years of college, it is 99.8 percent.

    Thus the probability that an American woman is raped in her lifetime is 2.6 percent and in college 0.2 percent — 5 to 100 times less than the estimates broadcast by the media and public officials.
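    If you want to check the op-ed’s arithmetic, it’s a three-line calculation. A quick sketch in Python (the uniform, independent annual risk is the op-ed’s assumption, and it’s exactly the assumption I take issue with below):

    ```python
    # The op-ed's arithmetic, reproduced for checking. It assumes every
    # American woman faces the same independent annual risk, which is
    # the assumption criticized below.
    p_annual = 52.9 / 100_000                # FBI 2012 rate per female inhabitant
    print(f"Lifetime risk (50 yrs): {1 - (1 - p_annual) ** 50:.1%}")  # ~2.6%
    print(f"College risk (4 yrs):   {1 - (1 - p_annual) ** 4:.1%}")   # ~0.2%
    ```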

    This estimate is way too low. It is based on taking one number and applying high school math to it. It misses the mark because it uses the wrong numbers and some poor assumptions.

    First of all, the FBI’s stats cover documented forcible rapes only: they do not account for under-reporting and they do not include sexual assault. The better comparison is the National Crime Victimization Survey, which estimates about 300,000 rapes or sexual assaults in 2013 for an incidence rate of 1.1 per thousand. But even that number needs some correction because about 2/3 of sexual violence is visited upon women between the ages of 12 and 30 and about a third upon college-age women. The NCVS rate indicates about a 10% lifetime risk or about a 3% college-age risk for American women. This is lower than the 1-in-5 stat but much higher than 1-in-500.
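    Here is a rough sketch of that correction in code. The NCVS count and the age-concentration claim come from above; the population denominator is an assumed round figure, so treat the output as a back-of-the-envelope illustration, not a precise estimate:

    ```python
    # Back-of-the-envelope version of the corrected college-age estimate.
    # The incident count and the "about a third" share are from the post;
    # the number of college-age women is an assumed round figure.
    incidents = 300_000                 # NCVS rapes/sexual assaults, 2013
    women_college_age = 15_000_000      # assumption: rough count of US women 18-24
    college_share = 1 / 3               # post: ~1/3 of incidents hit this group

    annual_risk = college_share * incidents / women_college_age    # ~0.67%/yr
    print(f"Risk over 4 years: {1 - (1 - annual_risk) ** 4:.1%}")  # ~2.6%, i.e. ~3%
    ```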

    (*The NCVS survey shows a jump in sexual violence in the 2000s. That’s not because sexual violence surged; it’s because they changed their methodology, which increased their estimates by about 20%.)

    So what about 1-in-5? I’ve talked about this before, but it’s worth going over again: the one-in-five stat is almost certainly a wild overestimate:

    The statistic comes from a 2007 Campus Sexual Assault study conducted by the National Institute of Justice, a division of the Justice Department. The researchers made clear that the study consisted of students from just two universities, but some politicians ignored that for their talking point, choosing instead to apply the small sample across all U.S. college campuses.

    The CSA study was actually an online survey that took 15 minutes to complete, and the 5,446 undergraduate women who participated were provided a $10 Amazon gift card. Men participated too, but their answers weren’t included in the one-in-five statistic.

    If 5,446 sounds like a high number, it’s not — the researchers acknowledged that it was actually a low response rate.

    But a lot of those responses have to do with how the questions were worded. For example, the CSA study asked women whether they had sexual contact with someone while they were “unable to provide consent or stop what was happening because you were passed out, drugged, drunk, incapacitated or asleep?”

    The survey also asked the same question “about events that you think (but are not certain) happened.”

    That’s open to a lot of interpretation, as exemplified by a 2010 survey conducted by the U.S. Centers for Disease Control and Prevention, which found similar results.

    I’ve talked about the CDC study before and its deep flaws. Schow points out that the victimization rate they are claiming is way higher than what the National Crime Victimization Survey (NCVS), the FBI and the Rape, Abuse and Incest National Network (RAINN) estimate. All three of those agencies use much more rigorous data collection methods. NCVS does interviews and asks the question straight up: have you been raped or sexually assaulted? I would trust the research methods of these agencies, which have been doing this for decades, over a web survey of two colleges.

    Another survey recently emerged from MIT which claimed 1-in-6 women are sexually assaulted. But not only does this suffer from the same flaws as the CSA study (a web survey with voluntary participation), it doesn’t even show what it’s claimed to show:

    When it comes to experiences of sexual assault since starting at MIT:

  • 1 in 20 female undergraduates, 1 in 100 female graduate students, and zero male students reported being the victim of forced sexual penetration
  • 3 percent of female undergraduates, 1 percent of male undergraduates, and 1 percent of female grad students reported being forced to perform oral sex
  • 15 percent of female undergraduates, 4 percent of male undergraduates, 4 percent of female graduate students, and 1 percent of male graduate students reported having experienced “unwanted sexual touching or kissing”
  • All of these experiences are lumped together under the school’s definition of sexual assault.

    When students were asked to define their own experiences, 10 percent of female undergraduates, 2 percent of male undergraduates, three percent of female graduate students, and 1 percent of male graduate students said they had been sexually assaulted since coming to MIT. One percent of female graduate students, one percent of male undergraduates, and 5 percent of female undergraduates said they had been raped.

    Note that even with a biased study, the result is 1-in-10, not 1-in-5 or 1-in-6.

    OK, so web surveys are a bad way to do this. What is a good way? Mark Perry points out that the one-in-five stat is inconsistent with another number claimed by advocates of new policies: a reporting rate of 12%. If you assume a reporting rate near that and use the actual number of reported assaults on major campuses, you get a rate of around 3%.

    Hmmm.

    Further research is consistent with this rate. For example, here, we see that UT Austin has 21 reported incidents of sexual violence. That’s one in a thousand enrolled women. Texas A&M reported nine, one in three thousand women. Houston reported 11, one in 2000 women. If we are to believe the 1-in-5 stat, that’s a reporting rate of half a percent. A reporting rate of 10%, which is what most people accept, would mean … a 3-5% risk for five years of enrollment.
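    The inversion here is simple enough to show directly. Using the UT Austin figures above and an assumed 10% reporting rate:

    ```python
    # Inverting a reporting rate to estimate prevalence. The "1 in 1,000
    # women per year" figure is UT Austin's from above; the 10% reporting
    # rate is the commonly accepted figure the post cites.
    reported_annual = 1 / 1000    # reported incidents per enrolled woman per year
    reporting_rate = 0.10         # assumed share of incidents that get reported
    years = 5                     # years of enrollment

    true_annual = reported_annual / reporting_rate                 # ~1% per year
    print(f"Risk over {years} years: {1 - (1 - true_annual) ** years:.1%}")  # ~4.9%
    ```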

    So … Mark Perry finds 3%. Texas schools show 3-5%. NCVS and RAINN stats indicate 2-5%. Basically, any time we use actual numbers based on objective surveys, we find the number of women who are in danger of sexual violence during their time on campus is 1-in-20, not 1-in-5.

    One other reason to disbelieve the 1-in-5 stat: sexual violence in our society is down — way down. According to the Bureau of Justice Statistics, rape has fallen from 2.5 per 1,000 to 0.5 per 1,000, an 80% decline. The FBI’s data show a decline from 40 to about 25 per hundred thousand, a 40% decline (they don’t account for reporting rate, which is likely to have risen). RAINN estimates that the rate has fallen 50% in just the last twenty years. That means 10 million fewer sexual assaults.

    Yet, for some reason, sexual assault rates on campus have not fallen, at least according to the favored research. They were claiming 1-in-5 in the ’80s and they are claiming 1-in-5 now. The sexual violence rate on campus might fall a little more slowly than that of the overall society because campus populations aren’t aging the way the general population is and sexual violence victims are mostly under 30. But it defies belief that the huge dramatic drops in violence and sexual violence everywhere in the world would somehow not be reflected on college campuses.

    Interestingly, the decline in sexual violence does appear if you polish the wax fruit a bit. The seminal Koss study of the 1980s claimed that one-in-four women were assaulted or raped on college campuses. As Christina Hoff Sommers and Maggie McNeill pointed out, the actual rate was something like 8%. A current rate of 3-5% would indicate that sexual violence on campus has dropped in proportion to that of sexual violence in the broader society.

    It goes without saying, of course, that 3-5% of women experiencing sexual violence during their time at college is 3-5% too many. As institutions of enlightenment (supposedly), our college campuses should be safer than the rest of society. I support efforts to clamp down on campus sexual violence, although not in the form that it is currently taking, which I will address on the other site.

    But the 1-in-5 stat isn’t reality. It’s a poll-tested number. It’s a number picked to be large enough to be scary but not so large as to be unbelievable. It is being used to advance an agenda that I believe will not really address the problem of sexual violence.

    Numbers mean things. As I’ve argued before, if one in five women on college campuses are being sexually assaulted, this suggests a much more radical course of action than one-in-twenty. It would suggest that we should shut down every college in the country since they are the most dangerous places for women in the entire United States. But 1-in-20 suggests that an overhaul of campus judiciary systems, better support for victims and expulsion of serial predators would do a lot to help.

    In other words, let’s keep on with the policies that have dropped sexual violence 50-80% in the last few decades.

    A Fishy Story

    October 30th, 2014

    Clearing out some old posts.

    A while ago, I encountered a story on Amy Alkon’s site about a man fooled into fathering a child:

    Here’s how it happened, according to Houston Press. Joe Pressil began dating his girlfriend, Anetria, in 2005. They broke up in 2007 and, three months later, she told him she was pregnant with his child. Pressil was confused, since the couple had used birth control, but a paternity test proved that he was indeed the father. So Pressil let Anetria and the boys stay at his home and he agreed to pay child support.
    Fast forward to February of this year, when 36-year-old Pressil found a receipt – from a Houston sperm bank called Omni-Med Laboratories – for “cryopreservation of a sperm sample” (Pressil was listed as the patient although he had never been there). He called Omni-Med, which passed him along to its affiliated clinic Advanced Fertility. The clinic told Pressil that his “wife” had come into the clinic with his semen and they performed IVF with it, which is how Anetria got pregnant.

    The big question, of course, is how exactly did Anetria obtain Pressil’s sperm without him knowing about it? Simple. She apparently saved their used condoms. Gag. (Anetria denies these claims.)

    “I couldn’t believe it could be done. I was very, very devastated. I couldn’t believe that this fertility clinic could actually do this without my consent, or without my even being there,” Pressil said, adding that artificial insemination is against his religious beliefs. “That’s a violation of myself, to what I believe in, to my religion, and just to my manhood,” Pressil said.

    I’ve now seen this story show up on a couple of other sites. The only links in Google are for the original claim and her denial. I can’t find out how it was resolved. But I suspect his claim was dismissed. The reason I suspect this is because his story is total bullshit.

    Here’s a conversation that has never happened:

    Patient: “Hi, I have this condom full of sperm. God knows how I got it or who it belongs to. Can you harvest my eggs and inject this into them?”

    Doctor: “No problem!”

    I’ve been through IVF (Ben was conceived naturally after two failed cycles). It is a very involved process. We had to have interviews, then get tests for venereal diseases and genetic conditions. I then had to show up and make my donation either on site or in a nearby hotel. And no, I was not allowed to bring in a condom. Condoms contain spermicides and lubricants that murder sperm and latex is not sperm’s friend. Even in a sterile container, sperm cells don’t last very long unless they are placed in a special refrigerator. Freezing sperm is a slow process that takes place in a solution that keeps the cells from shattering from ice crystal formation.

    And that’s only the technical side of the story. There’s also the legal issue that no clinic is going to expose themselves to a potential multi-million dollar lawsuit by using the sperm of a man they don’t have a consent form from.

    So, no, you can’t just have a man fill a condom, throw it in your freezer and get it injected into your eggs. It doesn’t work that way. This is why I believe the woman’s lawyer, who claims Pressil agreed to IVF and signed consent forms.

    I’ve seen the frozen sperm canard come up on TV shows and movies from time to time. It annoys me. This is something conjured up by people who haven’t done their research.

    Mathematical Malpractice Watch: Non-Citizen Voters

    October 29th, 2014

    Hmmm:

    How many non-citizens participate in U.S. elections? More than 14 percent of non-citizens in both the 2008 and 2010 samples indicated that they were registered to vote. Furthermore, some of these non-citizens voted. Our best guess, based upon extrapolations from the portion of the sample with a verified vote, is that 6.4 percent of non-citizens voted in 2008 and 2.2 percent of non-citizens voted in 2010.

    The authors go on to speculate that non-citizen voting could have been common enough to swing Al Franken’s 2008 election and possibly even North Carolina for Obama in 2008. Non-citizens vote overwhelmingly Democrat.

    I do think there is a point here which is that non-citizens may be voting in our elections, which they are not supposed to do. Interestingly, photo ID — the current policy favored by Republicans — would do little to address this as most of the illegal voters had ID. The real solution … to all our voting problems … would be to create a national voter registration database that states could easily consult to verify someone’s identity, citizenship, residence and eligibility status. But this would be expensive, might not work and would very likely require a national ID card, which many people vehemently oppose.

    However …

    The sample is very small: 21 non-citizens voting in 2008 and 8 in 2010. This is intriguing but hardly indicative. It could be a minor statistical blip. And there have been critiques that have pointed out that this is based on a … wait for it … web survey. So the results are highly suspect. It’s likely that a fair number of these non-citizen voters are, in fact, non-correctly-filling-out-a-web-survey voters.

    To their credit, the authors acknowledge this and say that while it is possible non-citizens swung the Franken Election (only 0.65% would have had to vote), speculating on other races is … well, speculation.

    So far, so good.

    The problem is how the blogosphere is reacting to it. Conservative sites are naturally jumping on this while liberals are talking about the small-number statistics. But those liberal sites are happy to tout small numbers when it’s, say, a supposed rise in mass shootings.

    In general, I lean toward the conservatives on this. While I don’t think voter fraud is occurring on the massive scale they presume, I do think it’s more common than the single-digit or double-digit numbers liberals like to hawk. Those numbers are themselves based on small studies in environments where voter ID is not required. We know how many people have been caught. But assuming that represents the limit of the problem is like assuming the number of speeders on a highway is equal to the number of tickets that are given out. One of the oft-cited studies is from the President’s Commission on Election Administration, which was mostly concerned with expanding access, not tracking down fraud.

    Here’s the thing. While I’m convinced the number of fraudulent votes is low, I note that, every time we discuss this, that number goes up. It used to be a handful. Now it’s a few dozen. This study hints it could be hundreds, possibly thousands. There are 11 million non-citizens living in this country (including my wife). What these researchers are indicating is that, nationally, their study could mean many thousands of extra votes for Democrats. Again, their study is very small and likely subject to significant error (as all web surveys are). It’s also likely the errors bias high. But even if they have overestimated the non-citizen voting by a factor of a hundred, that still means a few thousand incidents of voter fraud. That’s getting to the point where this may be a concern, no?
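    To make that scaling concrete, here is the arithmetic in a few lines of Python. The 6.4% and 11 million figures are from above; the hundred-fold discount is just the hypothetical stated in the previous paragraph:

    ```python
    # Rough scaling of the study's 2008 estimate. The inputs come from the
    # post; the 100x discount is the hypothetical worst-case overestimate.
    non_citizens = 11_000_000
    study_vote_rate = 0.064                 # study: 6.4% voted in 2008

    naive = non_citizens * study_vote_rate  # ~704,000 votes at face value
    discounted = naive / 100                # even if the study is off by 100x
    print(f"Face value: ~{naive:,.0f}; discounted 100x: ~{discounted:,.0f}")
    ```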

    Do I think this justifies policy change? I don’t think a web-survey of a few hundred people justifies anything. I do think this indicates the issue should be studied properly and not just dismissed out of hand because only a few dozen fake voters have actually been caught.

    The Latest Plagiarism Kerfuffles

    October 28th, 2014

    Over the last few years, a number of reporters and writers have turned out to be serial plagiarists. Oh, they don’t admit this. They’ll say they forgot to put in quote marks or that the rules don’t apply to them. But if you and I did that, we’d be kicked out of school. Or maybe not.

    The most recently accused is CJ Werleman. His excuse has crumbled now that researchers have dug up over a dozen liftings of text from other people. But I want to focus on his excuse because it is illustrative:

    The Harris zombies now accuse me of plagiarism. From 5 books & 100+ op-eds, they cite 2 common cliches and two summaries of cited studies

    This sounds reasonable. After all, there are only so many ways you can state the same facts. And when you look at the quotes, if you were of a generous disposition, you might accept this response. The quotes aren’t completely verbatim. Maybe he did just happen to phrase things the same way other writers did.

    But I find this excuse unlikely. In a great post on plagiarism, Megan McArdle writes the following:

    A while back, Terry Teachout, the Wall Street Journal’s drama critic, pointed out something fascinating to me: If you type even a small fragment of your own work into Google, as few as seven words, with quotation marks around the fragment to force Google to only search on those words in that order, then you are likely to find that you are the only person on the Internet who has ever produced that exact combination of words. Obviously this doesn’t work with boilerplate like “GE rose four and a quarter points on stronger earnings”, or “I love dogs,” but in general, it’s surprisingly true.

    I’ve tested this and it is true. With a lot of my posts, if I type in a non-generic line, the only site that comes up is mine. In fact, verbatim Google searches are a good way to find content scrapers and plagiarists.
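    If you want to try the fingerprint test on your own writing, the mechanics are trivial: wrap the fragment in quotation marks and search for it. A minimal sketch (the example fragment is just a phrase from this blog):

    ```python
    # Build a verbatim ("quoted") Google search URL for a text fragment.
    # Quotation marks force matching on the exact word sequence.
    from urllib.parse import quote_plus

    def verbatim_search_url(fragment: str) -> str:
        return "https://www.google.com/search?q=" + quote_plus(f'"{fragment}"')

    # If the only hit is your own site, the phrasing is effectively unique;
    # a word-for-word match on a stranger's page suggests scraping or plagiarism.
    print(verbatim_search_url("shadowy reflection of an even better movie"))
    ```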

    Whenever I cite anyone on the internet, I will link them, usually quote them and then, if necessary, summarize or rephrase the other points they are making. I try hard to avoid simply rewriting what they said like I’m a fifth grader turning in a book report. So when you hear someone using the excuse that similar ideas require similar phrasing, it’s largely baloney. If two passages of text are nearly identical, it’s very likely that one was copied from the other.

    I’ve become more sensitive to plagiarism since becoming a victim of it myself over the last ten years. I’ve had content scraped, I’ve had my ideas presented as though they were someone else’s and I’ve had outright word-for-word copying (on a now defunct story site). It’s difficult to describe just how dirty being plagiarized makes you feel. I even shied away … at first … from making accusations because I was so embarrassed. Here’s what I wrote the first time it happened:

    Plagiarism is not just stealing someone’s words. It is stealing their mind. It is a cruel violation. The hard work and original thought of one person is stolen by a second. The people who have lost their careers because of plagiarism have deserved everything they’ve gotten and I am now determined, more than ever, to make sure I quote people properly and always give credit where it’s due.

    Plagiarists need to be called out. Words are the currency of writers and, for many, how they make their living. Plagiarizing someone is no different than stealing their car or cleaning out their bank account. In fact, I would argue that it’s a lot worse.

    Mother Jones Revisited

    October 18th, 2014

    A couple of years ago, Mother Jones did a study of mass shootings which attempted to characterize these awful events. Some of their conclusions were robust — such as the finding that most mass shooters acquire their guns legally. However, their big finding — that mass shootings are on the rise — was highly suspect.

    Recently, they doubled down on this, proclaiming that Harvard researchers have confirmed their analysis.¹ The researchers use an interval analysis to look at the time differences between mass shootings and claim that the recent run of short intervals proves that mass shootings have tripled since 2011.²

    In principle, there’s nothing wrong with the analysis. In practice, there is: they have applied a sophisticated technique to suspect data. The technique does not remove the problems of the original dataset. If anything, it exacerbates them.

    As I noted before, the principal problem with Mother Jones’ claim that mass shootings were increasing was the database. It had a small number of incidents and was built from media reports, not from a complete data set pared down to a consistent sample. Incidents were left out or included based on arbitrary criteria. As a result, there may be mass shootings missing from the data, especially in the pre-internet era. This would bias the results.

    And that’s why the interval analysis is problematic. Interval analysis itself is useful. I’ve used it myself on variable stars. But it has a fundamental requirement: you have to have consistent data, and you have to account for potential gaps in that data.

    Let’s say, for example, that I use interval analysis on my car-manufacturing company to see if we’re slowing down in our production of cars. That’s a good way of figuring out any problems. But I have to account for the days when the plant is closed and no cars are being made. Another example: let’s say I’m measuring the intervals between brightness peaks of a variable star. It will work well … if I account for those times when the telescope isn’t pointed at the star.
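
    To make that concrete, here’s a toy simulation (entirely invented, not Mother Jones’ data) of how missed events in a poorly covered early era stretch the measured intervals even when the true rate never changes:

        # Events occur at a constant rate (~one per 90 days) over 30 years,
        # but only 40% of events in the first 15 years make it into the
        # "database". The measured early intervals then look much longer.
        import random

        random.seed(42)
        RATE = 1 / 90.0  # constant true rate: one event every ~90 days

        times, t = [], 0.0
        while t < 30 * 365:
            t += random.expovariate(RATE)  # exponential waiting times
            times.append(t)

        # Spotty early coverage: keep only 40% of pre-year-15 events.
        observed = [x for x in times if x > 15 * 365 or random.random() < 0.4]

        def mean_interval(ts):
            gaps = [b - a for a, b in zip(ts, ts[1:])]
            return sum(gaps) / len(gaps)

        early = [x for x in observed if x <= 15 * 365]
        late = [x for x in observed if x > 15 * 365]
        print("mean interval, early (incomplete):", round(mean_interval(early), 1))
        print("mean interval, late (complete):   ", round(mean_interval(late), 1))
        # The early intervals come out far longer, falsely suggesting the
        # rate jumped recently even though it never changed.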

    Their interval analysis assumes that the data are complete. But I find that suspect given the way the data were collected and the huge gaps and massive dispersion of the early intervals. The early data are all over the place, with gaps as long as 500-800 days. Are we to believe that between 1984 and 1987, a time when violent crime was surging, there was only one mass shooting? The more recent data are far more consistent, with no gap greater than 200 days (and note how the data get really consistent when Mother Jones began tracking these events as they happened, rather than relying on archived media reports).

    Note that they also compare the recent intervals to the average gap of 172 days. This is the basis of their claim that the rate of mass shootings has “tripled”. But the distribution of gaps is very skewed, with a long tail of long intervals. The median gap is 94 days. Using the median would reduce their slew of 14 straight below-average points to 11 below-median points. It would also mean that mass shootings have increased by only 50%. Since 1999, the median is 60 days (and the average 130). Using that would reduce their run of 14 straight short intervals to four and mean that mass shootings have been basically flat.
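
    A quick illustration with made-up numbers that mimic the skew described above shows how far a few long gaps drag the mean from the median:

        # Skewed interval data: a long tail of long gaps inflates the mean,
        # so most gaps sit "below average" even when nothing is changing.
        from statistics import mean, median

        gaps = [30, 45, 60, 60, 75, 90, 94, 100, 120, 150, 200, 420, 560, 800]

        print(f"mean gap:   {mean(gaps):.0f} days")    # ~200 days
        print(f"median gap: {median(gaps):.0f} days")  # ~97 days
        print("gaps below the mean:  ", sum(g < mean(gaps) for g in gaps))
        print("gaps below the median:", sum(g < median(gaps) for g in gaps))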

    The analysis I did two years ago was very simplistic — I looked at victims per year. That approach has its flaws but it has one big strength — it is less likely to be fooled by gaps in the data. Huge awful shootings dominate the number of victims and those are unlikely to have been missed in Mother Jones’ sample.

    Here is what you should do if you want to do this study properly. Start with a uniform database of shootings such as those provided by law enforcement agencies. Then go through the incidents, one by one, to see which ones meet your criteria.
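
    In code, that approach might look something like this sketch; the file and column names (victims, public_location, other_felony) are hypothetical stand-ins for fields in something like the FBI’s Supplementary Homicide Reports:

        # Start from a uniform incident database, then pare it down with
        # explicit, consistent criteria rather than ad hoc exceptions.
        import csv

        def mass_public_shootings(path, min_victims=4):
            out = []
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    if (int(row["victims"]) >= min_victims
                            and row["public_location"] == "yes"
                            and row["other_felony"] == "no"):  # not a byproduct of, e.g., a robbery
                        out.append(row)
            return out

        # incidents = mass_public_shootings("homicide_reports.csv")  # hypothetical file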

    In Jesse Walker’s response to Mother Jones, in which he graciously quotes me at length, he notes that a study like this has been done:

    The best alternative measurement that I’m aware of comes from Grant Duwe, a criminologist at the Minnesota Department of Corrections. His definition of mass public shootings does not make the various one-time exceptions and other jerry-riggings that Siegel criticizes in the Mother Jones list; he simply keeps track of mass shootings that took place in public and were not a byproduct of some other crime, such as a robbery. And rather than beginning with a search of news accounts, with all the gaps and distortions that entails, he starts with the FBI’s Supplementary Homicide Reports to find out when and where mass killings happened, then looks for news reports to fill in the details. According to Duwe, the annual number of mass public shootings declined from 1999 to 2011, spiked in 2012, then regressed to the mean.

    (Walker’s article is one of those “you really should read the whole thing” things.)

    This doesn’t really change anything I said two years ago. In 2012, we had an awful spate of mass shootings. But you can’t draw the kind of conclusions Mother Jones wants to from rare and awful incidents. And it really doesn’t matter what analysis technique you use.


    1. That these researchers are from Harvard is apparently a big deal to Mother Jones. As one of my colleagues used to say, “Well, if Harvard says it, it must be true.”

    2. This is less alarming than it sounds. Even if we take their analysis at face value, we’re talking about six incidents a year instead of two, for a total of about 30 extra deaths: roughly 0.2% of this country’s murder victims, or about the same number of people who are crushed to death by their furniture. We’re also talking about two years of data and a dozen total incidents.

    Now You See the Bias Inherent in the System

    September 11th, 2014

    When I was a graduate student, one of the big fields of study was the temperature of the cosmic microwave background. The studies were converging on a value of 2.7 degrees with increasing precision. In fact, they were converging a little too well, according to one scientist I worked with.

    If you measure something like the temperature of the cosmos, you will never get precisely the right answer. There is always some uncertainty (2.7, give or take a tenth of a degree) and some bias (2.9, give or take a tenth of a degree). So the results should span a range of values consistent with what we know about the limitations of the method and the technology. This scientist claimed that the range was too small. As he said, “You get the answer. And if it’s not the answer you wanted, you smack your grad student and tell him to do it right next time.”
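
    There’s a standard back-of-the-envelope check for this: if each group really measured independently with the quoted error bars, the scatter of their results should match those error bars. A reduced chi-square far below one means the agreement is suspiciously good. The numbers below are invented for illustration, not real CMB measurements:

        # Are these results "too good"? Compare the scatter of the
        # measurements to their quoted 1-sigma uncertainties.
        temps = [2.70, 2.71, 2.70, 2.70, 2.71]   # reported temperatures (K)
        sigmas = [0.10, 0.10, 0.10, 0.10, 0.10]  # quoted uncertainties (K)

        # Inverse-variance weighted mean.
        w = [1 / s**2 for s in sigmas]
        avg = sum(wi * t for wi, t in zip(w, temps)) / sum(w)

        # Reduced chi-square: should be ~1 for honest, independent errors.
        chi2 = sum(((t - avg) / s) ** 2 for t, s in zip(temps, sigmas))
        print(f"reduced chi-square: {chi2 / (len(temps) - 1):.3f}")
        # Here it comes out around 0.003: results this tight, with error
        # bars that loose, are a red flag for shared assumptions.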

    It’s not that people were faking the data or tilting their analysis. It’s that knowing the answer in advance can cause subtle confirmation biases. Any scientific analysis is going to have a bias — an analytical or instrumentation effect that throws off the answer. A huge amount of work is invested in ferreting out and correcting for these biases. But there is a danger when scientists think they know the answer in advance. If their result is off from the consensus, they might pore through their data looking for some effect that biased it. But if it is close, they won’t look as carefully.

    Megan McArdle flags two separate instances of this in the social sciences. The first is the long-standing claim that conservatives are authoritarian while liberals are not:

    Jonathan Haidt, one of my favorite social scientists, studies morality by presenting people with scenarios and asking whether what happened was wrong. Conservatives and liberals give strikingly different answers, with extreme liberals claiming to place virtually no value at all on things like group loyalty or sexual purity.

    In the ultra-liberal enclave I grew up in, the liberals were at least as fiercely tribal as any small-town Republican, though to be sure, the targets were different. Many of them knew no more about the nuts and bolts of evolution and other hot-button issues than your average creationist; they believed it on authority. And when it threatened to conflict with some sacred value, such as their beliefs about gender differences, many found evolutionary principles as easy to ignore as those creationists did. It is clearly true that liberals profess a moral code that excludes concerns about loyalty, honor, purity and obedience — but over the millennia, man has professed many ideals that are mostly honored in the breach.

    [Jeremy] Frimer is a researcher at the University of Winnipeg, and he decided to investigate. What he found is that liberals are actually very comfortable with authority and obedience — as long as the authorities are liberals (“should you obey an environmentalist?”). And that conservatives then became much less willing to go along with “the man in charge.”

    Frimer argues that conservatives tend to support authority because they think authority is conservative; liberals tend to oppose it for the same reason. Liberal or conservative, it seems, we’re all still human under the skin.

    Exactly. The deference to authority for conservatives and liberals depends on who is wielding said authority. If it’s a cop or a religious figure, conservatives tend to trust them and liberals are skeptical. If it’s a scientist or a professor, liberals tend to trust them and conservatives are rebellious.

    Let me give an example. Liberals love to cite the claim that 97% of climate scientists agree that global warming is real. In fact, this week they are running “97 hours of consensus”, featuring 97 quotes from scientists about global warming. But what is this but an appeal to authority? I don’t care if 100% of scientists agree on global warming: they still might be wrong. If there is something wrong with the temperature data (I don’t think there is), then they are all wrong.

    The thing is, that appeal to authority does scrape up against something useful. You should accept that global warming is very likely real. But not because 97% of scientists agree. The “consensus” supporting global warming is about as interesting as a “consensus” opposing germ theory. It’s the data supporting global warming that are convincing. And when scientists fall back on the data, not their authority, I become more convinced.

    If I told liberals that we should ignore Ferguson because 97% of cops think the shooting justified, they wouldn’t say, “Oh, well that settles it.” If I said that 97% of priests agreed that God exists, they wouldn’t say, “Oh, well that settles it.” Hell, this applies even to things that aren’t terribly controversial. Liberals are more than happy to ignore the “consensus” on the unemployment effects of minimum wage hikes or the safety of GMO crops.

    I’m drifting from the point. The point is that the studies showing that conservatives are more “authoritarian” were biased. They only asked about certain authority figures, not all of them. And since this was what the mostly liberal social scientists expected, they didn’t question it. McArdle gets into this in her second article, which takes on the claim that conservative views come from “low-effort thought”, based on two small studies.

    In both studies, we’re talking about differences between groups of 18 to 19 students, and again, no mention of whether the issue might be disinhibition — “I’m too busy to give my professor the ‘right’ answer, rather than the one I actually believe” — rather than “low-effort thought.”

    I am reluctant to make sweeping generalizations about a very large group of people based on a single study. But I am reluctant indeed when it turns out those generalizations are based on 85 drunk people and 75 psychology students.

    I do not have a scientific study to back me up, but I hope that you’ll permit me a small observation anyway: We are all of us fond of low-effort thought. Just look at what people share on Facebook and Twitter. We like studies and facts that confirm what we already believe, especially when what we believe is that we are nicer, smarter and more rational than other people. We especially like to hear that when we are engaged in some sort of bruising contest with those wicked troglodytes — say, for political and cultural control of the country we both inhabit. When we are presented with what seems to be evidence for these propositions, we don’t tend to investigate it too closely. The temptation is common to all political persuasions, and it requires a constant mustering of will to resist it.

    One of these studies found that drunk students were more likely to express conservative views than sober ones and concluded that this was because it is easier to think conservatively when alcohol is inhibiting your thought process. The bias there is simply staggering. They didn’t test the students before they started drinking (heavy drinkers might skew conservative). They didn’t consider social disinhibition, which I have mentioned before in connection with studies claiming that hungry or “stupid” men like bigger breasts. This was a study designed with its conclusion in mind.

    All sciences are in danger of confirmation bias. My advisor was very good about side-stepping it. When we got the answer we expected, he would say, “something is wrong here” and make us go over the data again. But the social sciences seem more subject to confirmation bias for various reasons: the answers in the social sciences are more nebulous, the biases are more subtle, the “observer effect” is more real and, frankly, some social scientists lack the statistical acumen to parse data properly (see the Hurricane study discussed earlier this year). But I also think there is an increased danger because of the immediacy of the issues. No one has a personal stake in the time-resolved behavior of an active galactic nucleus. But people have very personal stakes in politics, economics and sexism.

    Megan also touches on what I’ve dubbed the Scientific Peter Principle: a study garnering enormous amounts of attention is likely erroneous. The reason is that when you do something wrong in a study, it will usually manifest as a false result, not a null result. Null results are usually the product of doing your research right, not doing it wrong. Take the sexist hurricane study from earlier this year. Had the scientists done their research correctly (limiting their data to post-1978 or doing a K-S test), they would have found no connection between the femininity of hurricane names and their deadliness. As a result, we would never have heard about it. In fact, other scientists may have already done that analysis and either not bothered to publish it or published it quietly.
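
    For the curious, a K-S test of that sort is only a few lines of scipy; the death tolls below are invented placeholders, since the point is the shape of the test rather than the data:

        # Two-sample Kolmogorov-Smirnov test: do death tolls for storms
        # with "feminine" vs "masculine" names come from different
        # distributions? (Placeholder numbers, not real hurricane data.)
        from scipy.stats import ks_2samp

        feminine_deaths = [5, 0, 21, 3, 62, 1, 15, 0, 8, 4]
        masculine_deaths = [2, 17, 0, 9, 52, 3, 1, 11, 0, 6]

        stat, p = ks_2samp(feminine_deaths, masculine_deaths)
        print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
        # A large p-value is a null result: no detectable difference, and
        # no headlines.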

    But because they did their analysis wrong — assigning an index to the names, sub-sampling the data only in ways that supported the hypothesis — they got a result. And because they had a surprising result, they got publicity.

    This happens quite a bit. The CDC got lots of headlines when they exaggerated the number of obesity deaths by a factor of 14. Scottish researchers got attention when they erroneously claimed that smoking bans were saving lives. The EPA got headlines when they deliberately biased their analysis to claim that second-hand smoke was killing thousands.

    Cognitive bias, in combination with the Scientific Peter Principle, is incredibly dangerous.