All posts by Mike

Mathematical Malpractice Watch: Non-Citizen Voters

Hmmm:

How many non-citizens participate in U.S. elections? More than 14 percent of non-citizens in both the 2008 and 2010 samples indicated that they were registered to vote. Furthermore, some of these non-citizens voted. Our best guess, based upon extrapolations from the portion of the sample with a verified vote, is that 6.4 percent of non-citizens voted in 2008 and 2.2 percent of non-citizens voted in 2010.

The authors go on to speculate that non-citizen voting could have been common enough to swing Al Franken’s 2008 election and possibly even North Carolina for Obama in 2008. Non-citizens vote overwhelmingly Democrat.

I do think there is a point here: non-citizens may be voting in our elections, which they are not supposed to do. Interestingly, photo ID — the current policy favored by Republicans — would do little to address this as most of the illegal voters had ID. The real solution … to all our voting problems … would be to create a national voter registration database that states could easily consult to verify someone’s identity, citizenship, residence and eligibility status. But this would be expensive, might not work and would very likely require a national ID card, which many people vehemently oppose.

However …

The sample is very small: 21 non-citizens voting in 2008 and 8 in 2010. This is intriguing but hardly indicative. It could be a minor statistical blip. And there have been critiques that have pointed out that this is based on a … wait for it … web survey. So the results are highly suspect. It’s likely that a fair number of these non-citizen voters are, in fact, non-correctly-filling-out-a-web-survey voters.
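
To see why response error matters so much, here is a minimal back-of-the-envelope sketch in Python. Every number in it is assumed for illustration (the sample size, the misclick rate and the turnout figure are not from the study); the point is only that a tiny error rate among tens of thousands of citizen respondents can manufacture a couple dozen apparent “non-citizen voters” all by itself.

```python
# Back-of-the-envelope sketch (all numbers assumed for illustration, not
# taken from the study): if even a tiny fraction of the citizen respondents
# mis-tick the citizenship box, the handful of "non-citizen voters" can be
# explained by response error alone.

citizen_respondents = 30_000   # assumed size of the citizen sample
misclick_rate = 0.001          # assume 0.1% of citizens answer the citizenship item wrong
citizen_turnout = 0.6          # assumed turnout among those actual citizens

false_noncitizens = citizen_respondents * misclick_rate        # ~30 people
false_noncitizen_voters = false_noncitizens * citizen_turnout  # ~18 apparent "non-citizen voters"

print(f"Spurious non-citizen respondents: {false_noncitizens:.0f}")
print(f"Spurious non-citizen voters:      {false_noncitizen_voters:.0f}")
# Compare with the 21 verified "non-citizen voters" in the 2008 sample.
```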

To their credit, the authors acknowledge this and say that while it is possible non-citizens swung the Franken Election (only 0.65% would have had to vote), speculating on other races is … well, speculation.

So far, so good.

The problem is how the blogosphere is reacting to it. Conservative sites are naturally jumping on this while liberals are talking about the small number statistics. But those liberal sites are happy to tout small numbers when it’s, say, a supposed rise in mass shootings.

In general, I lean toward the conservatives on this. While I don’t think voter fraud is occurring on the massive scale they presume, I do think it’s more common than the single-digit or double-digit numbers liberals like to hawk. Those numbers are themselves based on small studies in environments where voter ID is not required. We know how many people have been caught. But assuming that represents the limit of the problem is like assuming the number of speeders on a highway is equal to the number of tickets that are given out. One of the oft-cited studies is from the President’s Commission on Election Administration, which was mostly concerned with expanding access, not tracking down fraud.

Here’s the thing. While I’m convinced the number of fraudulent votes is low, I note that, every time we discuss this, that number goes up. It used to be a handful. Now it’s a few dozen. This study hints it could be hundreds, possibly thousands. There are 11 million non-citizens living in this country (including my wife). What these researchers are indicating is that, nationally, their study could mean many thousands of extra votes for Democrats. Again, their study is very small and likely subject to significant error (as all web surveys are). It’s also likely the errors bias high. But even if they have overestimated the non-citizen voting by a factor of a hundred, that still means a few thousand incidents of voter fraud. That’s getting to the point where this may be a concern, no?
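
For what it’s worth, the arithmetic behind that last point is simple enough to sketch. This is just the naive extrapolation using the 6.4 percent figure quoted above and the rough 11 million non-citizen population, with the “overestimated by a factor of a hundred” hedge applied at the end; none of it should be read as an estimate of actual fraud.

```python
# Naive extrapolation using the figures quoted above; the factor-of-100
# "overestimate" is the hedge from the paragraph above, not a measurement.

noncitizens = 11_000_000        # rough non-citizen population
claimed_voting_rate = 0.064     # the study's 2008 estimate

naive_total = noncitizens * claimed_voting_rate
print(f"Naive extrapolation:    {naive_total:,.0f} votes")
print(f"Even if 100x too high:  {naive_total / 100:,.0f} votes")
```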

Do I think this justifies policy change? I don’t think a web-survey of a few hundred people justifies anything. I do think this indicates the issue should be studied properly and not just dismissed out of hand because only a few dozen fake voters have actually been caught.

The Latest Plagiarism Kerfuffles

Over the last few years, a number of reporters and writers have turned out to be serial plagiarists. Oh, they don’t admit this. They’ll say they forgot to put in quotation marks or that rules don’t apply to them. But if you and I did that, we’d be kicked out of school. Or maybe not.

The most recent accusation is CJ Werleman. His excuse has crumbled now that researchers have dug up over a dozen liftings of text from other people. But I want to focus on his excuse because it is illustrative:

The Harris zombies now accuse me of plagiarism. From 5 books & 100+ op-eds, they cite 2 common cliches and two summaries of cited studies

This sounds reasonable. After all, there are only so many ways you can state the same facts. And when you look at the quotes, if you were of a generous disposition, you might accept this response. The quotes aren’t completely verbatim. Maybe he did just happen to phrase things the same way other writers did.

But I find this excuse unlikely. In a great post on plagiarism, Megan McArdle writes the following:

A while back, Terry Teachout, the Wall Street Journal’s drama critic, pointed out something fascinating to me: If you type even a small fragment of your own work into Google, as few as seven words, with quotation marks around the fragment to force Google to only search on those words in that order, then you are likely to find that you are the only person on the Internet who has ever produced that exact combination of words. Obviously this doesn’t work with boilerplate like “GE rose four and a quarter points on stronger earnings”, or “I love dogs,” but in general, it’s surprisingly true.

I’ve tested this and it is true. With a lot of my posts, if I type in a non-generic line, the only site that comes up is mine. In fact, verbatim Google searches are a good way to find content scrapers and plagiarists.
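
If you want to try the trick yourself, here is a tiny sketch of it. The fragment below is just a line from this post used as an example, and the snippet only builds the exact-phrase query URL rather than fetching or parsing any results.

```python
# A sketch of the verbatim-search trick: wrap a distinctive fragment of your
# own writing in quotation marks so Google matches the words in that exact
# order. This only constructs the query URL.

from urllib.parse import quote_plus

def exact_phrase_search_url(fragment: str) -> str:
    return "https://www.google.com/search?q=" + quote_plus(f'"{fragment}"')

fragment = "the only site that comes up is mine"
print(exact_phrase_search_url(fragment))
```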

Whenever I cite anyone on the internet, I will link them, usually quote them and then, if necessary, summarize or rephrase the other points they are making. I try hard to avoid simply rewriting what they said like I’m a fifth grader turning in a book report. So when you hear someone using the excuse that similar ideas require similar phrasing, it’s largely baloney. If two passages of text are nearly identical, it’s very likely that one was copied from the other.

I’ve become more sensitive to plagiarism since becoming a victim of it myself over the last ten years. I’ve had content scraped, I’ve had my ideas presented as though they were someone else’s and I’ve had outright word-for-word copying (on a now-defunct story site). It’s difficult to describe just how dirty being plagiarized makes you feel. I even shied away … at first … from making accusations because I was so embarrassed. Here’s what I wrote the first time it happened:

Plagiarism is not just stealing someone’s words. It is stealing their mind. It is a cruel violation. The hard work and original thought of one person is stolen by a second. The people who have lost their careers because of plagiarism have deserved everything they’ve gotten and I am now determined, more than ever, to make sure I quote people properly and always give credit where it’s due.

Plagiarists need to be called out. Words are the currency of writers and, for many, how they make their living. Plagiarizing someone is no different than stealing their car or cleaning out their bank account. In fact, I would argue that it’s a lot worse.

Mother Jones Revisited

A couple of years ago, Mother Jones did a study of mass shootings which attempted to characterize these awful events. Some of their conclusions were robust — such as the finding that most mass shooters acquire their guns legally. However, their big finding — that mass shootings are on the rise — was highly suspect.

Recently, they doubled down on this, proclaiming that Harvard researchers have confirmed their analysis [1]. The researchers used an interval analysis to look at the time differences between mass shootings and claim that the recent run of short intervals proves that mass shootings have tripled since 2011. [2]

Fundamentally, there’s nothing wrong with the article. But practically, there is: they have applied a sophisticated technique to suspect data. This technique does not remove the problems of the original dataset. If anything, it exacerbates them.

As I noted before, the principal problem with Mother Jones’ claim that mass shootings were increasing was the database. It had a small number of incidents and was based on media reports rather than on a complete data set pared down to a consistent sample. Incidents were left out or included based on arbitrary criteria. As a result, there may be mass shootings missing from the data, especially in the pre-internet era. This would bias the results.

And that’s why the interval analysis is problematic. Interval analysis itself is useful. I’ve used it myself on variable stars. But it has two fundamental requirements: you have to have consistent data and you have to account for potential gaps in the data.

Let’s say, for example, that I use interval analysis on my car-manufacturing company to see if we’re slowing down in our production of cars. That’s a good way of figuring out any problems. But I have to account for the days when the plant is closed and no cars are being made. Another example: let’s say I’m measuring the intervals between brightness peaks of a variable star. It will work well … if I account for those times when the telescope isn’t pointed at the star.

Their interval analysis assumes that the data are complete. But I find that suspect given the way the data were collected and the huge gaps and massive dispersion of the early intervals. The early data are all over the place, with gaps as long as 500-800 days. Are we to believe that between 1984 and 1987, a time when violent crime was surging, there was only one mass shooting? The more recent data are far more consistent, with no gap greater than 200 days (and note how the data get really consistent when Mother Jones began tracking these events as they happened, rather than relying on archived media reports).

Note that they also compare this to the average of 172 days. This is the basis of their claim that the rate of mass shootings has “tripled”. But the distribution of gaps is very skewed with a long tail of long intervals. The median gap is 94 days. Using the median would reduce their slew of 14 straight below-average points to 11 below-median points. It would also mean that mass shootings have increased by only 50%. Since 1999, the median is 60 days (and the average 130). Using that would reduce their slew of 14 straight short intervals to four and mean that mass shootings have been basically flat.
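
Here is a toy illustration of why the mean-versus-median choice matters. The intervals below are invented, but skewed the same way as the Mother Jones gaps: mostly modest spacings plus a handful of very long ones of the sort a patchy, media-report-based database produces. The mean gets dragged up by the tail; the median barely moves, and the count of “below average” intervals changes accordingly.

```python
# Toy illustration of mean versus median for skewed intervals. The gaps are
# invented, but shaped like the pattern described above: mostly modest
# spacings plus a few very long ones.

from statistics import mean, median

gaps_in_days = [30, 45, 60, 70, 80, 90, 95, 100, 120, 150, 200, 500, 650, 800]

avg = mean(gaps_in_days)     # dragged upward by the long tail
med = median(gaps_in_days)   # barely notices the tail

print(f"mean gap:   {avg:.0f} days")
print(f"median gap: {med:.0f} days")
print("gaps below the mean:  ", sum(g < avg for g in gaps_in_days))
print("gaps below the median:", sum(g < med for g in gaps_in_days))
```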

The analysis I did two years ago was very simplistic — I looked at victims per year. That approach has its flaws but it has one big strength — it is less likely to be fooled by gaps in the data. Huge awful shootings dominate the number of victims and those are unlikely to have been missed in Mother Jones’ sample.

Here is what you should do if you want to do this study properly. Start with a uniform database of shootings such as those provided by law enforcement agencies. Then go through the incidents, one by one, to see which ones meet your criteria.

In Jesse Walker’s response to Mother Jones, in which he graciously quotes me at length, he notes that a study like this has been done:

The best alternative measurement that I’m aware of comes from Grant Duwe, a criminologist at the Minnesota Department of Corrections. His definition of mass public shootings does not make the various one-time exceptions and other jerry-riggings that Siegel criticizes in the Mother Jones list; he simply keeps track of mass shootings that took place in public and were not a byproduct of some other crime, such as a robbery. And rather than beginning with a search of news accounts, with all the gaps and distortions that entails, he starts with the FBI’s Supplementary Homicide Reports to find out when and where mass killings happened, then looks for news reports to fill in the details. According to Duwe, the annual number of mass public shootings declined from 1999 to 2011, spiked in 2012, then regressed to the mean.

(Walker’s article is one of those “you really should read the whole thing” things.)

This doesn’t really change anything I said two years ago. In 2012, we had an awful spate of mass shootings. But you can’t draw the kind of conclusions Mother Jones wants to from rare and awful incidents. And it really doesn’t matter what analysis technique you use.


1. That these researchers are from Harvard is apparently a big deal to Mother Jones. As one of my colleagues used to say, “Well, if Harvard says it, it must be true.”

2. This is less alarming than it sounds. Even if we take their analysis at face value, we’re talking about six incidents a year instead of two, for a total of about 30 extra deaths, or about 0.2% of this country’s murder victims, or about the same number of people who are crushed to death by their furniture. We’re also talking about two years of data and a dozen total incidents.

Now You See the Bias Inherent in the System

When I was a graduate student, one of the big fields of study was the temperature of the cosmic microwave background. The studies were converging on a value of 2.7 degrees with increasing precision. In fact, they were converging a little too well, according to one scientist I worked with.

If you measure something like the temperature of the cosmos, you will never get precisely the right answer. There is always some uncertainty (2.7, give or take a tenth of a degree) and some bias (2.9, give or take a tenth of a degree). So the results should span a range of values consistent with what we know about the limitations of the method and the technology. This scientist claimed that the range was too small. As he said, “You get the answer. And if it’s not the answer you wanted, you smack your grad student and tell him to do it right next time.”

It’s not that people were faking the data or tilting their analysis. It’s that knowing the answer in advance can cause subtle confirmation biases. Any scientific analysis is going to have a bias — an analytical or instrumentation effect that throws off the answer. A huge amount of work is invested in ferreting out and correcting for these biases. But there is a danger when scientists think they know the answer in advance. If they are off from the consensus, they might pore through their data looking for some effect that biased the results. But if they are close, they won’t look as carefully.
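
There is even a simple statistical smell test for the “converging a little too well” problem my colleague was describing. Here is a toy sketch, with made-up numbers: if a dozen independent measurements each quote an uncertainty of 0.1 degrees but scatter around their mean far more tightly than that, the reduced chi-square comes out well below one, which is itself a warning sign.

```python
# Toy sketch of the "converging too well" check, with made-up numbers: if
# measurements quoting a 0.1-degree uncertainty scatter much more tightly
# than that around their mean, the reduced chi-square falls well below 1.

import random
from statistics import mean

random.seed(1)
quoted_sigma = 0.10     # the error bar everyone reports
actual_scatter = 0.02   # what you get if results are nudged toward the expected 2.7
measurements = [random.gauss(2.7, actual_scatter) for _ in range(12)]

m = mean(measurements)
reduced_chi2 = sum((x - m) ** 2 for x in measurements) / ((len(measurements) - 1) * quoted_sigma ** 2)
print(f"reduced chi-square: {reduced_chi2:.2f}  (should be about 1 if the error bars are honest)")
```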

Megan McArdle flags two separate instances of this in the social sciences. The first is the long-standing claim that conservatives are authoritarian while liberals are not:

Jonathan Haidt, one of my favorite social scientists, studies morality by presenting people with scenarios and asking whether what happened was wrong. Conservatives and liberals give strikingly different answers, with extreme liberals claiming to place virtually no value at all on things like group loyalty or sexual purity.

In the ultra-liberal enclave I grew up in, the liberals were at least as fiercely tribal as any small-town Republican, though to be sure, the targets were different. Many of them knew no more about the nuts and bolts of evolution and other hot-button issues than your average creationist; they believed it on authority. And when it threatened to conflict with some sacred value, such as their beliefs about gender differences, many found evolutionary principles as easy to ignore as those creationists did. It is clearly true that liberals profess a moral code that excludes concerns about loyalty, honor, purity and obedience — but over the millennia, man has professed many ideals that are mostly honored in the breach.

[Jeremy] Frimer is a researcher at the University of Winnipeg, and he decided to investigate. What he found is that liberals are actually very comfortable with authority and obedience — as long as the authorities are liberals (“should you obey an environmentalist?”). And that conservatives then became much less willing to go along with “the man in charge.”

Frimer argues that conservatives tend to support authority because they think authority is conservative; liberals tend to oppose it for the same reason. Liberal or conservative, it seems, we’re all still human under the skin.

Exactly. The deference to authority for conservatives and liberals depends on who is wielding said authority. If it’s a cop or a religious figure, conservatives tend to trust them and liberals are skeptical. If it’s a scientist or a professor, liberals tend to trust them and conservatives are rebellious.

Let me give an example. Liberals love to cite the claim that 97% of climate scientists agree that global warming is real. In fact, this week they are running “97 Hours of Consensus”, featuring 97 quotes from scientists about global warming. But what is this but an appeal to authority? I don’t care if 100% of scientists agree on global warming: they still might be wrong. If there is something wrong with the temperature data (I don’t think there is) then they are all wrong.

The thing is, that appeal to authority does get at something useful. You should accept that global warming is very likely real. But not because 97% of scientists agree. The “consensus” supporting global warming is about as interesting as a “consensus” opposing germ theory. It’s the data supporting global warming that is convincing. And when scientists fall back on the data, not their authority, I become more convinced.

If I told liberals that we should ignore Ferguson because 97% of cops think the shooting was justified, they wouldn’t say, “Oh, well that settles it.” If I said that 97% of priests agreed that God exists, they wouldn’t say, “Oh, well that settles it.” Hell, this applies even to things that aren’t terribly controversial. Liberals are more than happy to ignore the “consensus” on the unemployment effects of minimum wage hikes or the safety of GMO crops.

I’m drifting from the point. The point is that the studies showing that conservatives are more “authoritarian” were biased. They only asked about certain authority figures, not all of them. And since this was what the mostly liberal social scientists expected, they didn’t question it. McArdle gets into this in her second article, which takes on the claim that conservative views come from “low-effort thought” based on two small studies.

In both studies, we’re talking about differences between groups of 18 to 19 students, and again, no mention of whether the issue might be disinhibition — “I’m too busy to give my professor the ‘right’ answer, rather than the one I actually believe” — rather than “low-effort thought.”

I am reluctant to make sweeping generalizations about a very large group of people based on a single study. But I am reluctant indeed when it turns out those generalizations are based on 85 drunk people and 75 psychology students.

I do not have a scientific study to back me up, but I hope that you’ll permit me a small observation anyway: We are all of us fond of low-effort thought. Just look at what people share on Facebook and Twitter. We like studies and facts that confirm what we already believe, especially when what we believe is that we are nicer, smarter and more rational than other people. We especially like to hear that when we are engaged in some sort of bruising contest with those wicked troglodytes — say, for political and cultural control of the country we both inhabit. When we are presented with what seems to be evidence for these propositions, we don’t tend to investigate it too closely. The temptation is common to all political persuasions, and it requires a constant mustering of will to resist it.

One of these studies found that drunk students were more likely to express conservative views than sober ones and concluded that this was because it was easier to think conservatively when alcohol was inhibiting their thought processes. The bias there is simply staggering. They didn’t test the students before they started drinking (heavy drinkers might skew conservative). They didn’t consider social disinhibition — which I have mentioned before in connection with studies claiming that hungry or “stupid” men like bigger breasts. This was a study designed with its conclusion in mind.

All sciences are in danger of confirmation bias. My advisor was very good about side-stepping it. When we got the answer we expected, he would say, “something is wrong here” and make us go over the data again. But the social sciences seem more subject to confirmation bias for various reasons: the answers in the social sciences are more nebulous, the biases are more subtle, the “observer effect” is more real and, frankly, some social scientists lack the statistical acumen to parse data properly (see the Hurricane study discussed earlier this year). But I also think there is an increased danger because of the immediacy of the issues. No one has a personal stake in the time-resolved behavior of an active galactic nucleus. But people have very personal stakes in politics, economics and sexism.

Megan also touches on what I’ve dubbed the Scientific Peter Principle: that a study garnering enormous amounts of attention is likely erroneous. The reason is that when you do something wrong in a study, it will usually manifest as a false result, not a null result. Null results are usually the result of doing your research right, not doing it wrong. Take the sexist hurricane study earlier this year. Had the scientists done their research correctly (limiting their data to post-1978 or doing a K-S test), they would have found no connection between the femininity of hurricane names and their deadliness. As a result, we would never have heard about it. In fact, other scientists may have already done that analysis and either not bothered to publish it or published it quietly.

But because they did their analysis wrong — assigning an index to the names, only sub-sampling the data in ways that supported the hypothesis — they got a result. And because they had a surprising result, they got publicity.

This happens quite a bit. The CDC got lots of headlines when they exaggerated the number of obesity deaths by a factor of 14. Scottish researchers got attention when they erroneously claimed that smoking bans were saving lives. The EPA got headlines when they deliberately biased their analysis to claim that second-hand smoke was killing thousands.

Cognitive bias, in combination with the Scientific Peter Principle, is incredibly dangerous.

Mathematical Malpractice Watch: Torturing the Data

There’s been a kerfuffle recently about a supposed CDC whistleblower who has revealed malfeasance in the primary CDC study that refuted the connection between vaccines and autism. Let’s put aside that the now-retracted Lancet study the anti-vaxxers tout as the smoking gun was a complete fraud. Let’s put aside that other studies have reached the same conclusion. Let’s just address the allegations at hand, which include a supposed cover-up. These allegations are in a published paper (now under further review) and a truly revolting video from Andrew Wakefield — the disgraced author of the fraudulent Lancet study that set off this mess — that compares this “cover-up” to the Tuskegee experiments.

According to the whistle-blower, his analysis shows that while most children do not have an increased risk of autism (which, incidentally, discredits Wakefield’s study), black males vaccinated before 36 months show a 240% increased risk (not 340%, as has been claimed). You can catch the latest from Orac. Here’s the most important part:

So is Hooker’s result valid? Was there really a 3.36-fold increased risk for autism in African-American males who received MMR vaccination before the age of 36 months in this dataset? Who knows? Hooker analyzed a dataset collected to be analyzed by a case-control method using a cohort design. Then he did multiple subset analyses, which, of course, are prone to false positives. As we also say, if you slice and dice the evidence more and more finely, eventually you will find apparent correlations that might or might not be real.

In other words, what he did was slice and dice the sample to see if one of those slices would show a correlation. But by pure chance, one of those slices could show a correlation even if there wasn’t one. As best illustrated in this cartoon, if you run twenty tests for something that has no correlation, the odds are that at least one of them will show a spurious correlation at the 95% confidence level. This is one of the reasons many scientists, especially geneticists, are turning to Bayesian analysis, which can account for this.
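
The arithmetic behind that cartoon is worth spelling out. With twenty independent subgroup tests at the 5 percent level on data where nothing is really going on, a spurious “significant” hit is more likely than not:

```python
# The arithmetic behind the twenty-tests point: with 20 independent tests at
# the 5% level on data with no real effect, a spurious "significant" result
# is more likely than not.

alpha = 0.05
n_tests = 20

p_at_least_one = 1 - (1 - alpha) ** n_tests   # ~0.64
expected_false_positives = alpha * n_tests    # 1.0

print(f"P(at least one spurious hit): {p_at_least_one:.2f}")
print(f"Expected spurious hits:       {expected_false_positives:.1f}")
```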

If you did a study of just a few African-American boys and found a connection between vaccination and autism, it would be the sort of preliminary shaky result you would use to justify looking at a larger sample … such as the full CDC study that the crackpot’s own analysis shows refutes such a connection. To take a large comprehensive study, narrow it down to a small sample and then claim that the results of this small sample override those of the large one is ridiculous. It’s the opposite of how epidemiology works (and there is no suggestion that there is something about African-American males that makes them more susceptible to vaccine-induced autism).

This sort of ridiculous cherry-picking happens a lot, mostly in political contexts. Education reformers will pore over test results until they find that fifth graders slightly improved their reading scores and claim their reform is working. When the scores revert back the next year, they ignore it. Drug warriors will pore over drug stats and claim that a small drop in heroin use among people in their 20’s indicates that the War on Drugs is finally working. When it reverts back to normal, they ignore it.

You can’t pick and choose little bits of data to support your theory. You have to be able to account for all of it. And you have to be aware of how often spurious results pop up even in the most objective and well-designed studies, especially when you parse the data finer and finer.

But the anti-vaxxers don’t care about that. What they care about is proving that evil vaccines and Big Pharma are poisoning us. And however they have to torture the data to get there, that’s what they’ll do.

“Five Favorites” – Review of 2014 Best Picture Nominees

Donna: Welcome to this month’s edition of “Five Favorites” with Mike Siegel! This month we’re abandoning our formula of fives to bring you a review of the nine Best Picture nominees from the 2014 Academy Awards. Now that all nine nominees are available for rental we’ve both seen them all and will be ranking them in order of how much we liked them, starting with the ones we liked least and moving up to our favorite of the nominees. Before we get going, I’d like to tip my hat to a few films that I feel deserved a place on this list, in particular “Blue Jasmine”, “August: Osage County”, “Rush”, “Kill Your Darlings” and, for my daring outsider pick, “Upstream Color”, which should have at least gotten nominations for Best Director and Best Cinematography. I’ll know Hollywood has finally caught up to the burgeoning indie scene when films like “Upstream Color” get the award nods they deserve.

Mike: So, in going over the list, I first wanted to mention a few films that got snubbed. “Rush”, “Before Midnight” and “Fruitvale Station” were all among the best films of 2013 but were not nominated. And I have yet to see “The Wind Rises” and “Blue is the Warmest Color”, which I suspect might end up on my top films of 2013 list. Still, the overall Oscar selection was not horrible. While some of the films were not my cup of tea, I can see why each was nominated and none was a horrible selection.

On to the nominees! We each rank them in reverse order of preference.

*************************************

Donna’s #9: “Nebraska” – I’m just going to start by saying I have no idea how this film made it onto the Best Picture list. Sure, it’s a good film, well acted and well scripted. But there’s nothing extraordinary about it that makes it jump out at me. I have a hard time remembering details about it, and that alone knocks it out of Best Picture contention in my mind. It’s good, but not great, and just not strong or compelling enough to be on this list.

Mike’s #9: “The Wolf of Wall Street” – I feel this film was massively over-rated, as Scorsese films tend to be when he returns to his oeuvre of awful people doing awful things. DiCaprio is great and the film certainly has a lot of energy. Matthew McConaughey has a wonderful five minutes as a guest star. But it was way, way too long, spending far too much time reveling in the supposed excesses of its main character. And as I wrote in my long-form review, I am uncomfortable with glorifying a narcissistic convicted financial criminal.

Donna’s #8: “Captain Phillips” – I seem to be in the minority of people who weren’t incredibly moved by “Captain Phillips”, but I believe I know why. You see, before I saw “Phillips” I watched “A Hijacking”, a Danish film about a strikingly similar true story of pirate capture. I was incredibly moved by “A Hijacking” – I found it poetic, heart breaking, well acted and edited to a devastating conclusion. So when I saw “Phillips” I couldn’t help but compare it to “A Hijacking”, and I found it lacking in every single aspect. Perhaps if I had seen “Phillips” before “Hijacking” I would feel differently, but as such, knowing a very similar and superior film is out there, I just can’t rank “Phillips” any higher than this.

Mike’s #8: “Nebraska” – I enjoyed this film, mainly because of the acting. It’s a solid film with good characters and some humor (although a bit of it feels forced, especially with Kate). But while I like almost everything by Alexander Payne, I didn’t see why people *loved* it. It seems like the critics read a lot more into his films than I see.

Donna’s #7: “Dallas Buyers Club” – Let me be clear – as a film, “Dallas Buyers Club” wasn’t strong enough to be nominated for Best Picture. Don’t get me wrong, it’s a solid film, but for me the plot wasn’t compelling or drawn well enough to deserve a nod for the best film of the year. The reason “Dallas Buyers Club” is here is because of the incredible acting of Jared Leto and Matthew McConaughey. Both men were absolutely marvelous in their roles, with Leto putting in one of the best performances of the year as Rayon. Both men deserved their Oscar nods for acting, but as far as best picture goes, it wasn’t enough for me. Good, but not extraordinary, and thus the low placement on my list.

Mike’s #7: “Dallas Buyers Club” – The main reason to watch this was McConaughey, who thoroughly dominated the film. It also has an appealing anti-establishment story about the buyer’s clubs and provides very strong insight into the early days of the AIDS crisis without being heavy-handed. Definitely a cut above the first two and worth the investment of time.

Donna’s #6: “Philomena” – The impact of this film didn’t quite hit me until a few days after I saw it. My initial reaction to “Philomena” was that it was good, but not good enough to make the Best Picture list. But, like all good films, this one sat with me for a long time, and I feel now upon reflection that it really was worthy of this nod. I’m a huge Coogan fan so it was lovely to see him in such fine form, and Dench is always magnificent. Frears really did himself proud with this film – a powerful story indeed.

Mike’s #6: “Philomena” – The brutal and cruel history of Ireland’s mother-child homes (and the Magdalene Laundries) cannot get enough attention. The tacked-on confrontation with the nun, which did not happen in real life, was the only real false note. I was reminded of the equally false and equally flawed scene in Schindler’s List where he breaks down. That having been said, the film builds itself around two very well-developed characters played perfectly, incorporates its low key humor well and builds its sense of outrage slowly and convincingly. This may stick with me for a while.

Donna’s #5: “Her” – Spike Jonze created something intensely beautiful with this lovely little film. It’s another simple story told well, and it’s the nuances of the script that make it such a powerful statement on love, lust, and power in relationships. I’m an enormous fan of Phoenix and it was gratifying to see him shine in this film. I was honestly quite shocked he didn’t get an Oscar nod for Best Actor for this performance. The only major flaw to this film was its length – it could have easily been about twenty minutes shorter. The story raises so many great questions about the dynamics of love – I feel this film will be talked about for quite some time.

Mike’s #5: “American Hustle” – I think the 70’s palette and styles caused this film to be a bit over-rated. I am not a huge fan of David O. Russell and don’t think Bradley Cooper is that great. That having been said, the film is very good, with solid dialogue, energy, style and some great performances, particularly the female leads. Frankly, I would watch a film about Amy Adams and Jennifer Lawrence reading the newspaper.

Donna’s #4: “The Wolf of Wall Street” – I actually debated for a while whether this film would wind up above or below “Her” as I liked them both rather equally, but “The Wolf of Wall Street” was compelling enough of a film for it to take the #4 spot. As much as I love McConaughey, I think DiCaprio should have taken the Oscar for his portrayal of the seedy Jordan Belfort, as he was quite amazing in this. I loved the direction of the film as well, although it certainly suffered from about thirty minutes of bloat. A strong film by Scorsese and a worthy contender for Best Picture.

Mike’s #4: “Her” – This is a bit long, but is quite a lovely film. The idea is intriguing even if the plot kind of fumbles around with it a bit. It takes a much more mature and realistic approach to its ideas than most sci-fi, making the world feel very real and very likely (example: almost all sci-fi films avoid the subject of sex; this one doesn’t). The two leads are excellent. Phoenix got all the attention but Johansson’s voice work anchored the emotional threads. As I’ve said before, if you look beyond the banner franchises, we are getting some very good sci-fi these days and “Her” is a perfect example.

Donna’s #3: “American Hustle” – This was easily one of my favorite films of the year for a whole host of reasons. I loved all of the acting in it – Bale, Cooper, Adams and Lawrence were all exceptional. The direction and pacing of the film was stylish and flamboyant in all the right ways. The script was quite compelling and kept my attention throughout. Even the music was note-perfect. I truly enjoyed everything about this – it’s honestly only a tick below my #2 choice on my list.

Mike’s #3: “Captain Phillips” – This had me on the edge of my seat for two hours. It features another great “everyman” performance from Hanks but also excellent performances by the Somali cast. It was so enthralling, I didn’t mind Greengrass’s ridiculous shaky-cam style.

Donna’s #2: “Gravity” – To me, there’s nothing like a simple story told well, and that’s exactly what “Gravity” is – a straightforward tale told with incredible finesse. Cuaron allowed Bullock and Clooney to simply do their jobs, and both acted quite well throughout. But it was the astonishing directing that stole the show here, with the exquisite long opening shot setting the tone for the film (a Cuaron trademark, perhaps, as he did the same in “Children of Men”, one of my favorite films of all time). To top it off, given how much bloat most films seem to carry these days, the ninety minute length of it was just perfect. A beautiful film in every way.

Mike’s #2: “Gravity” – You know the best thing about “Gravity”? It’s only an hour and a half long. That sounds like faint praise or even damnation. But in an era where seemingly every Oscar nominee could easily be trimmed by 15 minutes to an hour, this is the only major film in recent years that had no fat. It is tense from beginning to end, the performances are great (Bullock has matured into a first-rate actress) and the filming is simply gorgeous. The opening unbroken shot is one of the most spectacular sequences in recent memory and I desperately wish I had seen this on the big screen. The science is a bit questionable (orbital dynamics doesn’t work like that) but the film was so good that I didn’t care.

Donna’s #1: “12 Years a Slave” – Honestly, this wasn’t even a contest for me. In my opinion, “12 Years a Slave” was far and away the best picture of the year for a number of reasons. All of the acting was incredibly solid – not just the leads but all of the supporting actors as well. Fassbender, Dano, Giamatti and Cumberbatch were especially strong, and Chiwetel Ejiofor was a revelation in the lead. The direction by McQueen was unflinching and riveting with good editing that moved the story along. The script was very solid, believable and so gut-wrenching it was impossible not to cry. Outside of Brad Pitt’s appearance, which to me felt hammy and overwrought, I can’t think of a real flaw in this film. It utterly deserved to win Best Picture and I’m glad it took the top prize this year.

Mike’s #1: “12 Years a Slave” – When I look over an Oscar list, I like to think about which films people will be watching ten, twenty, fifty years from now. This and maybe “Gravity” are the only ones I think will really last the test of time. “12 Years a Slave” is transcendent. Many films have taken on the issue of slavery; few with as much resonance and power as this one. The performances are excellent all around — Ejiofor, Fassbender and Nyong’o especially (Fassbender is establishing an incredibly broad range; comparing this to his performance in “Prometheus”, you wouldn’t think it was the same actor). Even the supporting cast are outstanding. McQueen’s directing shows the brutality of slavery without wallowing in it or being exploitative. And it keeps the focus on the characters and the situation. I need to watch this again to confirm my initial thoughts that it might become a classic. But it was definitely my top film among the Oscar nominees.

******************************************

Thanks for joining us for another edition of “Five Favorites” and we’ll see you again next month!

The Cold Equations

“The Cold Equations” is an expression I lifted from the short story of the same name. I have not read the story, so cannot comment on its style. Frankly, the plot has always struck me as a great idea but, when you think about it, a bit stupid. No one designs a spaceship with no margin for error.

But I like the expression because I think it is a good distillation of how problems usually need to be addressed: with less emotion and more cold facts.

There are many applications of the cold equations, but I want to focus on just one: issue advocacy. In recent weeks, two very prominent victims of sex trafficking have been revealed to be frauds. Their horrifying stories — which sex workers had been poking holes in for years — turned out to be mostly or completely made up. The organizations and people who trumpeted them are now backing away and claiming that while these stories might be problematic, the issue is important. That’s debatable but I’m not going to get into that for the moment.

What struck me was this review of how the principal purveyor of tragedy porn — Nick Kristof — has built so much of his career and his advocacy around these stories:

The disconnect inspired Kristof to delve into social science studies on the psychological roots of empathy, which led him to an emerging body of work based on what inspires people to donate to charity. In one study, researchers told American participants the story of Rokia, a (fictional) 7-year-old Malian girl who is “desperately poor and faces a threat of severe hunger, even starvation.” Then, they told them that 3 million Malawian children are now facing hunger, along with 3 million Zambian people and 11 million Ethiopians. The researchers found that Americans were more likely to empty their pockets for one little girl than they were for millions of them. If they heard Rokia’s story in the context of the dire statistics of the region, they were less inclined to give her money. And if they were informed that they were being influenced by this dynamic, the “identifiable-victim effect,” they were less likely to shell out for Rokia, but no more likely to give to the greater cause. To Kristof, the experiment underscored the “limits of rationality” in reporting on human suffering: “One death is a tragedy,” he told the students, “and a million deaths are a statistic.”

In other words, people won’t donate to causes because they hear a million people are dying. But they will if they see one cute little girl suffering. Kristof has therefore built a career on finding these cute victims to bring attention to things like genocide and human trafficking.

That sounds noble. But it isn’t. Because what happens is that attention, money, volunteers and even military intervention flow not to the most important causes but to those that have the most compelling victims. So enormous amounts of attention are given to human trafficking in Cambodia, a problem which now appears to have been massively exaggerated. In the meantime, far larger ills — the lack of access to clean water for billions, the crippling micronutrient deficiency that affects billions, the indoor pollution from burning wood and dung that harms billions — go unaddressed. Because we have yet to have the photogenic victim with a horror story of how she pooped her guts out due to drinking contaminated water.

Looking for victims with compelling stories that goad your audience into an emotional reaction is the wrong way to go about healing the troubles of the world. That’s where the cold equations come in: we have to make decisions not based on emotion but on facts, data and reality.

I can’t find the article, but I remember reading a long time ago that one of the reasons the Bill and Melinda Gates Foundation has such a strong track record is because they carefully identify the most important causes and the ones where money can make the biggest difference. The author attributed this to Gates’ geekiness — i.e., he’s more comfortable with numbers and data than with pulls on the heartstrings. Maybe that’s true; it seemed a stretch to me. But the overall thrust was correct: we need to pick our issues based on data, not sensationalism.

Look at overpopulation. Forty years ago, Paul Ehrlich started his book “The Population Bomb” with the harrowing story of a trip through Delhi and used a series of emotional appeals to call for mass sterilization and abandoning foreign aid. He got lots of attention but nothing happened. In the meantime, Norman Borlaug carefully looked at the problem and went with the unsexy non-photogenic option of breeding better strains of crops. In the end, Borlaug saved a billion lives. And Ehrlich is still a raging fool known for his hilariously bad predictions and even worse policy advocacy.

(Bjorn Lomborg has talked about this in the context of the Copenhagen Consensus, arguing that we would ameliorate a lot more human suffering by focusing on larger, more solvable environmental problems than the currently sexy one of global warming.)

I don’t expect or want people to be robots. And sometimes emotional reactions are a good thing. The wave of people donating blood after 9/11 did little to help with the actual tragedy but it did help the Red Cross build up their donor base, providing a big long-term benefit. But emotion is a haphazard guide to action at best. It’s not nearly as effective as using the cold equations. It’s worth noting that Kristof’s emotion-based advocacy, in the end, accomplished nothing in the Sudan and even less on sex trafficking. Imagine if that attention had gone to something useful, like cleaning up people’s drinking water.

In the end, you have to be guided by the cold equations. In the end, you have to look at the problems of the world objectively, figure out which ones give you the most benefit for the least effort and do them. That’s the only way real progress is made.

Five Favorites – “Five Most Overrated Films of All Time”

Cross-posted from Donna’s wonderful FTRQ blog.

Donna: Welcome to another edition of “Five Favorites” with myself and Mike Siegel! This month we’re straying from our favorites format to tackle a different sort of list – what we think are the “Five Most Overrated Films of All Time”. We wanted to look at films that are revered or considered truly great cinema, and yet seem to us to be either fundamentally flawed or just plain bad. Our inclusion criterion for this list was that the films in question had to be either on the AFI Top 100 list, a best picture nominee or winner, or on the IMDB top 250. In other words, we only wanted to consider films that have been truly lauded as landmark, important, or extremely popular.

For myself this was a difficult task. Because I haven’t seen a lot of classic cinema or recent popular films, I found that I had maybe seen fifty percent or less of the movies on each inclusion list. On one hand this made my selection easier as I simply had less to work with, but on the other I feel Mike will have a far more comprehensive list than I will because of my lack of knowledge in this area. I also wanted to make sure I wasn’t simply picking movies I didn’t like. Quite honestly there are plenty of movies I just don’t like on these lists – “2001: A Space Odyssey”, “For a Few Dollars More” or “V for Vendetta” are good examples. But I can recognize that all those films have true goodness and even genius in them, even if I don’t enjoy them. It was important to me to only pick films I felt were either truly flawed or ones that defied my every attempt at understanding why they are so loved. In the end, yes, whether or not I liked a film was part of the equation, but I did my best not to make it a popularity contest.

Mike: My approach was identical, although I’m probably not as versed in classic cinema as Donna likes to think! :) I’ve already done a long series of posts on my own site where I went through the Oscar Winners one at a time to see which ones were bad. So I excluded Best Picture winners from this list. One tweak I put in was to recommend better movies, when I could think of them. I also, like Donna, left off movies that I think are over-rated but where I can see why people like them, such as “Donnie Darko” or “A Christmas Story” or “Forrest Gump”. I found that a number of my picks were stand-ins for general categories of movies I think are over-rated. You’ll see what I mean when we get there.

Donna’s #5: “The Green Mile” – At the time of writing, this film was #45 on the IMDB’s Top 250 list, and I have never ever understood why this film was so loved. I know I am treading on somewhat sacred ground for putting this on my list, but hear me out. I read “The Green Mile” books when they came out – I think most every fan of King did given that these were his first releases after a long spell of silence. I adored the books and have always thought that the detail of them was their genius – it was possible to live in and fully understand the world King created in this tale. Most importantly, there was a “why” for everything. Nothing happened in these books for no reason – King always gave us a “why” for each and every moment. When I saw the film, I was angered beyond belief at it, as was my husband, so much so that we had to keep pausing the film to yell and complain about it. Why? Because all that precious detail, all the “whys” that made the book so believable, were missing from the film. Now, I realize that most book-to-film adaptations suffer from a loss of detail, but in this case I feel that loss is egregious. The “why” for nearly everything that happened in the film was omitted, and without that “why” the film made little to no sense. In the world of the film, there was simply no reason given for most of what took place, and that, for me, destroyed the integrity of the story. I would start a list of examples but honestly I would be here all day. The only reason I felt I could even follow the film was that I was filling in the missing details from my reading of the books. I believe that is why most people don’t notice how many things have been stricken from the film – they remember the books too well. Without those books this film would have no context or rationale to it, and that is why I feel it is one of the most overrated films of all time. I could go on, but I won’t – I’ve ranted enough as it is I think.

Mike’s #5: “Rope” – Regarded as a marginal classic, rated #242 on IMDB and praised effusively for its inventive technique of using long unbroken takes, I find this film to be over-rated like a lot of Hitch’s early stuff. I haven’t seen it since college, when I reviewed it for the Carletonian. But I found the characters to be wooden, the suspense to be a bit trite and Stewart’s character to be a bit of a snotty professor type. It’s not a bad movie and I would recommend seeing it. But IMDB gives it an 8.0 and many critics give it four stars.

Donna’s #4: “Hachi: A Dog’s Tale” – At the time of writing, this film was #189 on the IMDB Top 250 list, and this entry on my list probably needs far less explanation than my last. I mean… seriously? “Hachi”? Sure, it’s a cute enough movie. It’s sweet and sappy and sentimental and based on a true story, so I get that people enjoy it. But a Top 250 movie? Not a chance. It just isn’t good enough in any way. The acting is stiff, the plot overly saccharine, the directing absolutely average. In fact, “average” is probably the best way to describe this film – there is simply nothing extraordinary about it. So why is this film so beloved? It’s honestly beyond me. If you want to watch a tearjerker animal movie, why not “Black Beauty”, “Old Yeller” or “Lassie Come Home” – they are all superior films and will certainly make you cry. I simply have never understood why this film seems to hit people as hard as it does, and I certainly cannot understand how it wound up on the IMDB Top 250 list, so I am including it as my #4 pick.

Mike’s #4: “Butch Cassidy and the Sundance Kid” – This is another pick that isn’t a bad movie, per se. It’s been 20 years since I watched it and I probably should watch it again. But its status as a classic (on the AFI and IMDB lists) is unmerited. It drags in the later parts and I didn’t care for the characters. This is representative of a class of movies from the 60’s and 70’s that are badly over-rated. Movies like “Bonnie and Clyde” and “Five Easy Pieces” and “The Graduate” are frequently over-rated because, in their day, they were revolutionary. Now that the language of cinema has evolved, they’re still good, but not amazing. “Butch Cassidy” is not bad and IMDB gives it a sterling 8.1 rating. I just don’t think it’s that good. I’d give it a 6, maybe. You’d be much better off watching The Man With No Name trilogy, which is truly great.

Donna’s #3: “Bringing Up Baby” – This film appears at #88 on the AFI’s Top 100 of all time list, and I know I’m not alone in not understanding the appeal of this movie. This film has divided audiences from the start. Some think it a hilarious and side-splitting romp, while others find it contrived, unbelievable, silly and inane. I’m solidly in the latter camp – I disliked this film immensely. I didn’t enjoy the comedy, I hated the acting, and I found the whole setup ridiculous and cringe-worthy. I do not understand what about this film is appealing or funny, I really don’t. And because I, like so many others, just cannot understand the appeal, I have to put this on my list.

Mike’s #3: “Django Unchained” – #51 on IMDB and regarded by many as the best film of 2012, this is really a stand-in for over-rated Quentin Tarantino films in general. “Reservoir Dogs” is good; it’s not a classic. “Pulp Fiction” might be great. “Kill Bill” is a great 150-minute film squeezed into four hours. “Inglourious Basterds” is a great two-hour film squeezed into 150 minutes. And “Django Unchained” is a great two-hour film squeezed into 165. It pains me to write this because Tarantino has a very real talent and an extraordinary feel for the language of film. His dialogue is fantastic, his characters memorable and the look of his films is amazing. In every film, there are at least a half dozen shots that make me say, “Wow, that’s great cinema.” But he badly needs an editor. If his last three films were each about half an hour shorter, I would regard them as classics, rather than bloated. The line between classic and over-rated can often come down to editing. (There are a lot of recent films you could throw into the pile of “awesome if half an hour shorter”, including both Hobbit movies and “The Dark Knight Rises”.)

Donna’s #2: “Duck Soup” – This film appears at #60 on the AFI Top 100 list, and again I realize I may be ruffling feathers with this pick. But, honestly, I cannot stand this film, nor can I even begin to understand the appeal of it to anyone. When I started trying to watch more classic films I saw how highly this movie was regarded. I had a vague memory of not enjoying the Marx Brothers as a child, but gladly rented this to see what it was all about. I hated it so very much I could barely finish it. It was the single most inane and insufferable film I’ve ever seen, and that’s saying a lot. I seriously do not understand how this film is funny for anyone, I really really don’t. I grant I’m not a slapstick, screwball comedy fan, but I can appreciate pretty much anything in one way or another. Not this. Never this. I don’t get it and likely never will. I grant that this film is landing at my #2 spot due to a sheer hatred of it more than a quality issue, but that’s how strong my dislike for it is.

Mike’s #2: “Birth of a Nation” – This was originally on the AFI list but was eventually removed in favor of Griffith’s “Intolerance”, likely because the voters became uncomfortable with the racism apparent in the film. It seems odd to compare this to Butch Cassidy above but it’s in a similar boat. The methods and techniques it invented were revolutionary; but they don’t stun the senses as much almost a century later. What we’re left with is a film that glorifies the antebellum South and the Klan. Defenders will tell you to put aside the racism and admire the technique. But it’s difficult to put aside the racism, especially when the technique is no longer that revolutionary. If you want a silent classic, Griffith’s “Intolerance” and “Broken Blossoms” are much better. And “Wings” has all the beauty of a silent epic and the captivating Clara Bow.

Donna’s #1: “Easy Rider” – This film appears at #84 on the AFI Top 100 list, and I have long felt this must be the most overrated film of all time. This film is loved and revered by so many and I have never understood why. What happens in this film? What plot actually exists here? If someone knows please tell me as I still have no idea. Fully half of this movie is long shots of Fonda and Hopper riding motorcycles, which simply bored me to tears. The acting was nonexistent, the directing ridiculous, the plot absent. Why is this a cinematic masterpiece? For what reasons? There is nothing here of value in my opinion, and I just don’t see how it got included in the AFI list. Considering I generally like films with very little plot you’d think I’d love this, but it just annoys me to no end. Putting this film at the top of my list was a no-brainer to me.

Mike’s #1: “Easy Rider” – Honestly, Donna and I did not coordinate our answers on this! But I agree with everything she says and then some. One of the first negative reviews I wrote back in my college days was of Easy Rider. And it has not improved with age. It barely has a plot. The symbolism, such as it is, is obvious (I could see Fonda was the Christ figure about 18 seconds in). The fates of the characters are not foreshadowed at all but just occur randomly (and I didn’t care anyway). It glorifies dim-bulb hippie “culture”. The LSD sequence set the stage for every incomprehensible drug montage to come. The film is frequently praised as “revolutionary” and “ground-breaking” – like just about all the films in my list. But the difference that elevates it to #1 is that the ground it broke was almost everything that went wrong with film for the next ten years. I really can’t understand why this movie is so well-regarded other than people’s misguided fascination with the lifestyle depicted. (Interestingly, IMDB does not regard this as a classic, giving it a 7.4 rating — good but not great. I would say even that is over-rating it. I’d give it a 4 or a 5.) The soundtrack is OK, I guess. But I mostly watched this movie with a look on my face saying, “Really?”

Thanks for joining us again for another edition of “Five Favorites” and we’ll see you all next month!

Mathematical Malpractice Watch: Hurricanes

There’s a new paper out that claims that hurricanes with female names tend to be deadlier than ones with male names, based on hurricane data going back to 1950. The authors attribute this to gender bias: the idea that people don’t take hurricanes with female names seriously.

No, this is not The Onion.

I immediately suspected a bias. For one thing, even with their database, we’re talking about 92 events, many of which killed zero people. More importantly, all hurricanes had female names until 1979. What else was true before 1979? We had a lot less advance warning of hurricanes. In fact, if you look up the deadliest hurricanes in history, they are all either from times before we named them or when hurricanes all had female names. In other words, they may just be measuring the decline in hurricane deadliness.

Now it’s possible that the authors used some sophisticated model that also accounts for hurricane strength. If so, that might undercut my analysis. But I’m dubious. I downloaded their spreadsheet, which is available from the journal website. Here is what I found:

Hurricanes before 1979 averaged 27 people killed.

Hurricanes since 1979 averaged 16 people killed.

Hurricanes since 1979 with male names averaged … 16 people killed.

Hurricanes since 1979 with female names averaged … 16 people killed.
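
For anyone who wants to check, the arithmetic is trivial to reproduce. Here’s a minimal sketch in Python, assuming the spreadsheet has been exported to CSV; the column names (year, female, deaths) are my guesses at a layout, not the paper’s actual headers, so rename them to match the real file.

```python
# Sketch: average hurricane deaths before/after 1979, and by name gender since 1979.
# Column names ("year", "female", "deaths") are assumptions about the spreadsheet
# layout, not the paper's actual headers -- rename to match the real file.
import pandas as pd

df = pd.read_csv("hurricane_data.csv")  # hypothetical CSV export of the paper's data

pre = df[df["year"] < 1979]
post = df[df["year"] >= 1979]

print("Pre-1979 mean deaths: ", round(pre["deaths"].mean(), 1))
print("Post-1979 mean deaths:", round(post["deaths"].mean(), 1))

# Post-1979, split by name gender (0 = male-named storm, 1 = female-named storm)
print(post.groupby("female")["deaths"].mean())
```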

Maybe I’m missing something. How did this get past a referee?

Update: Ed Yong raises similar points here. The authors say that cutting the sample at 1979 made the numbers too small, so they instead used an index of how feminine or masculine the names were. I find that dubious when a plain and simple average will give you an answer. Moreover, they offer this qualifier in the comments:

What’s more, looking only at severe hurricanes that hit in 1979 and afterwards (those above $1.65B median damage), 16 male-named hurricane each caused 23 deaths on average whereas 14 female-named hurricanes each caused 29 deaths on average. This is looking at male/female as a simple binary category in the years since the names started alternating. So even in that shorter time window since 1979, severe female-named storms killed more people than did severe male-named storms.

You be the judge. I averaged 54 post-1978 storms totaling 1,200 deaths and got even numbers. They narrow it to 30 storms totaling 800 deaths and claim a bias based on 84 excess deaths. That really crosses me as stretching to make a point.

Update: My friend Peter Yoachim ran a K-S test on the data and got a p-value of 0.97, meaning the death tolls of male- and female-named hurricanes are entirely consistent with being drawn from the same distribution. This is a standard test of the null hypothesis and apparently wasn’t done at all. Ridiculous.
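
For reference, the two-sample Kolmogorov–Smirnov test Peter ran is essentially a one-liner with scipy. Same caveat as the sketch above: the column names are my assumptions. A p-value near 1 doesn’t literally mean a “97% chance of the same distribution,” but it does mean the test finds no evidence that the two samples differ.

```python
# Sketch: two-sample K-S test on post-1978 storms, split by name gender.
# Column names are assumptions (see the earlier sketch).
import pandas as pd
from scipy.stats import ks_2samp

df = pd.read_csv("hurricane_data.csv")
post = df[df["year"] >= 1979]

male = post.loc[post["female"] == 0, "deaths"]
female = post.loc[post["female"] == 1, "deaths"]

stat, pvalue = ks_2samp(male, female)
print(f"K-S statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# A large p-value means the male- and female-named death counts are consistent
# with being draws from the same underlying distribution.
```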

Absolutely Nothing Happened in Sector 83 by 9 by 12 Today

Last night, the science social media sphere exploded with the news of a potential … something … in our nearest cosmic neighbor, M31. The Swift mission, which I am privileged to work for, reported the discovery of a potential bright X-ray transient in M31, a sign of a high-energy event. For a while, we had very little to go on — Goddard had an unfortunately timed power outage. Some thought (and some blogs actually reported) that we’d seen a truly extraordinary event — perhaps even a nearby gamma-ray burst. But it turned out to be something more mundane. My friend and colleague Phil Evans has a great explanation:

It started with the Burst Alert Telescope, or BAT, on board Swift. This is designed to look for GRBs, but will ‘trigger’ on any burst of high-energy radiation that comes from an area of the sky not known to emit such rays. But working out if you’ve had such a burst is not straightforward, because of noise in the detector, background radiation etc. So Swift normally only triggers if it’s really sure the burst of radiation is real; for the statisticians among you, we have a 6.5-σ threshold. Even then, we occasionally get false alarms. But we also have a program to try to spot faint GRBs in nearby galaxies. For this we accept lower significance triggers from BAT if they are near a known, nearby galaxy. But these lower significance triggers are much more likely to be spurious. Normally, we can tell that they are spurious because GRBs (almost always) have a glow of X-rays detectable for some time after the initial burst, an ‘afterglow’. The spurious triggers don’t have this, of course.

In this case, it was a bit more complicated. There was an X-ray source consistent with the BAT position. The image to the right shows the early X-ray data. The yellow circle shows the BAT error box – that is, the BAT told us it thought it had seen something in that circle. The orange box shows what the XRT could see at the time, and the grey dots are detected X-rays. The little red circle marks where the X-ray source is.

Just because the X-ray object was already known about, and was not something likely to go GRB doesn’t mean it’s boring. If the X-ray object was much brighter than normal, then it is almost certainly what triggered the BAT and is scientifically interesting. Any energetic outburst near to Earth is well worth studying. Normally when the Swift X-ray telescope observes a new source, we get various limited data products sent straight to Earth, and normally some software (written by me!) analyses those data. In this case, there was a problem analysing those data products, specifically the product from which we normally estimate the brightness. So the scientists who were online at the time were forced to use rougher data, and from those it looked like the X-ray object was much brighter than normal. And so, of course, that was announced.
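
As a rough aside on the 6.5-σ threshold Phil mentions: treated as a simple one-sided Gaussian tail (the real BAT trigger statistics are more involved, so take this only as an order-of-magnitude illustration), the false-alarm probability is vanishingly small, which is why the bar is only lowered for triggers near known nearby galaxies.

```python
# Rough illustration only: one-sided Gaussian tail probabilities for a few
# detection thresholds. The actual BAT trigger statistics (image trials,
# background modelling, etc.) are more complicated than a single Gaussian tail.
from scipy.stats import norm

for sigma in (3.0, 5.0, 6.5):
    print(f"{sigma:>4} sigma -> tail probability ~ {norm.sf(sigma):.1e}")
```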

The event occurred at about 6:15 EDT last night. I was feeding kids and putting them to bed but got to work on it after a couple of hours. At about 9:30, my wife asked what I was up to and I told her about a potential event in M31, but was cautious. I said something like: “This might be nothing; but if it is real, it would be huge.” I wish I could say I had some prescience about what the later analysis would show, but this was more my natural pessimism. That skeptical part of my mind kept going on about how unlikely a truly amazing event was (see here).

My role would turn out to be a small one. It turned out that Swift had observed the region before. And while Goddard and its HEASARC data archive were down, friend and fellow UVOT team member Caryl Gronwall reminded me that the MAST archive was not. We had not observed the suspect region of M31 in the same filters that Swift uses for its initial observations. But we knew there was a globular cluster near the position of the event and, by coincidence, I had just finished a proposal on M31’s globular clusters. I could see that the archival measures and the new measure were consistent with a typical globular cluster. Then we got a report from the GTC. Their spectrum only showed the globular cluster.

This didn’t disprove the idea of a transient, of course. Many X-ray transients don’t show a signature in the optical and it might not have been the globular cluster anyway. But it did rule out some of the more exotic explanations. Then the other shoe dropped this morning when the XRT team raced to their computers, probably still in their bathrobes. Their more detailed analysis showed that the bright X-ray source was a known source and had not brightened. So … no gamma-ray burst. No explosive event.

Phil again:

I imagine that, from the outside, this looks rather chaotic and disorganised. And the fact that this got publicity across the web and Twitter certainly adds to that! But in fact this highlights the challenges facing professional astronomers. Transient events are, by their nature, well, transient. Some are long lived, but others not. Indeed, this is why Swift exists, to enable us to respond very quickly to the detection of a GRB and gather X-ray, UV and optical data within minutes of the trigger. And Swift is programmed to send what it can of that data straight to the ground (limited bandwidth stops us from sending everything), and to alert the people on duty immediately. The whole reason for this is to allow us to quickly make some statements about the object in question so people can decide whether to observe it with other facilities. This ability has led to many fascinating discoveries, such as the fact that short GRBs are caused by two neutron stars merging, the detection of a supernova shock breaking out of a star and the most distant star ever seen by humans, to name just 3. But it’s tough. We have limited data, limited time and need to say something quick, while the object is still bright. People with access to large telescopes need to make a rapid decision, do they sink some of their limited observing time into this object? This is the challenge that we, as time-domain astronomers, face on a daily basis. Most of this is normally hidden from the world at large because of course we only publish and announce the final results from the cases where the correct decisions were made. In this case, thanks to the power of social media, one of those cases where what proved to be the wrong decision has been brought into the public eye. You’ve been given a brief insight into the decisions and challenges we have to face daily. So while it’s a bit embarrassing to have to show you one of the times where we got it wrong, it’s also good to show you the reality of science. For every exciting news-worthy discovery, there’s a lot of hard graft, effort, false alarms, mistakes, excitement and disappointment. It’s what we live off. It’s science.

Bingo.

People sometimes ask me why I get so passionate about issues like global warming or vaccination or evolution. While the political aspects of these issues are debatable, I get aggravated when people slag the science, especially when it is laced with dark implications of “follow the money” or claims that scientists are putting out “theories” without supporting evidence. Skeptics claim, for example, that scientists only support global warming theory or vaccinations because they would not get grant money for claiming otherwise.

It is true: scientists like to get paid, just like everyone else. We don’t do this for free (mostly). But money won’t drag you out of bed at 4 in the morning to discover a monster gamma-ray burst. Money doesn’t keep you up until the wee hours pounding on a keyboard to figure out what you’ve just seen. Money didn’t bounce my Leicester colleagues out of bed at the crack of dawn to figure out what we were seeing. Money doesn’t sustain you through the years of grad school and the many years of soft-money itinerancy. Hell, most scientists could make more money if they left science. One of the best comments I ever read on this was on an old Slashdot forum: “Doing science for the money is like having sex for the exercise.”

What really motivates scientists is the answer. What really motivates them is finding out something that wasn’t known before. I have been fortunate in my life to have experienced that joy of discovery a few times. There have been moments when I realized that I was literally the only person on Earth to know something, even if that something was comparatively trivial, like the properties of a new dwarf galaxy. That’s the thrill. And despite last night’s excitement being in vain, it was still thrilling to hope that we’d seen something amazing. And hell, finding out it was not an amazing event was still thrilling. It’s amazing to watch the corrective mechanisms of the scientific method in action, especially over the time span of a few hours.

Last night, science was asked a question: did something strange happen in M31? By this morning, we had the answer: no. That’s not a bad day for science. That’s a great one.

One final thought: one day, something amazing is going to happen in the Local Universe. Some star will explode, some neutron stars will collide or something we haven’t even imagined will happen. It is inevitable. The question is not whether it will happen. The question is: will we still be looking?

Long Form Review: The Wolf of Wall Street

Purely considered as a movie, The Wolf of Wall Street is another excellent film from Scorsese. Although it is too long by about an hour, it is engaging and never really boring (just repetitive — I mean, how many shots of people snorting coke off of call girls’ asses do we need?). It has a tremendous amount of energy in some sequences. It’s difficult to call the acting “good” since everyone involved gets into the spirit of things and chews the scenery with relentless abandon. DiCaprio is fine, Hill is fine and newcomer Margot Robbie is great as Naomi.

On its merits, I would probably give the movie an 8 out of 10.

But …

The Wolf of Wall Street is not a fictional tale (at least not completely). Jordan Belfort is a real-life person who went to real-life prison for bilking real-life investors out of hundreds of millions of dollars with penny stocks and pump-and-dump schemes. The movie barely touches on this. In fact, in a condescending fourth-wall scene, the movie’s Belfort simply waves off the details by saying the audience isn’t interested. The vast majority of the movie simply revels in the excesses of drugs, booze and sex that Belfort’s millions created (although I suspect some of that is exaggerated). Large parts of the movie play like a high-power rave.

DiCaprio and Scorsese, perhaps having realized the danger of glorifying the hedonistic lifestyle of a stock swindler in the current economy, have claimed it is a cautionary tale. I didn’t see any caution. I never saw that Belfort suffered for his crimes or was ever really undone by his lifestyle. The movie portrays his life as a non-stop party and even serious problems are cast in a darkly comic light. The only time the movie turns even a little bit grim is when his second marriage breaks up. I doubt even Belfort thinks his life was that awesome.

Frankly, I’m tired of movies that glorify Wall Street brokers. I’m tired of the glorification of Wall Street, full stop. I do not regard the high-powered end of the financial industry as something worth celebrating. There’s an early scene — probably the best in the movie — where Matthew McConaughey, in another great performance, explains how the stock broker industry works. The goal is not to make money for the clients. The goal is to keep them trading and paying commissions. No stock broker ever beats the markets consistently. This has been obvious for thirty years. Michael Lewis wrote a book about his time on Wall Street (Liar’s Poker) and speculated that the industry could not possibly last because people would eventually figure out that it was all a sham — that the brokers making massive commissions weren’t any more clued in than the clients. In fact, 20/20 (I think) once did a bit where they had a stock broker pick stocks, had a kindergarten class pick them and had a monkey pull cards out of a Rolodex. The broker came in last place and not by a little. Why is this an industry worth glorifying? Is it because it is a shadowy parallel of the equally empty and vainglorious entertainment industry?

There’s a tendency — and the movie encourages this — to say that the primary victims of Wall Street are rich and can afford to lose their money. There’s some truth to that. Some time ago, I got into a debate over Bernie Madoff’s victims. Some people insisted they had to know that his returns were ridiculous and there was something fishy going on. I agreed but pointed out that they probably didn’t know it was fraud. My basic take on human nature is that we are good but we are easily tempted. It was just so easy, with so much money being made, to persuade themselves that it was legit.

But the thing is, rich people aren’t the only victims of guys like Madoff and Belfort. Financial schemes like pump and dump affect an open market that is invested in by hundreds of millions of people, including mutual funds and pensions. Swindles undermine confidence in the entire system. Maybe you could argue that some of the victims deserved what they got. But they weren’t the only ones.

The movie doesn’t even hint at this. There’s a phone call, possibly fictitious, where Belfort persuades some middle class guy to sink his life savings into a penny stock, but even that is portrayed as triumphant.

No, I’m sorry. The context matters in this case. The movie itself I give an 8/10. But for glorifying a convicted financial criminal and, more importantly, the environment of recklessness that has sent our economy on a three-decade-long roller coaster ride while Wall Streeters made billions, I have to knock at least a point off.

Blocked by LOLGOP

I recently discovered that I have been blocked on Twitter by LOLGOP. I’m going to guess it’s because of one of two tweets, since I’ve only tweeted at them twice.

Once was when they were slamming home-schooling. I replied:

.@LOLGOP Half the home schoolers I know are liberal hippy types. You have no idea what you’re babbling about.

The second was when they published an article detailing a bunch of lies about Obamacare. I deconstructed their claims here and posted:

A few of these “lies” from @LOLGOP aren’t. Obamcare unfavorability ratings almost always higher than approval. http://t.co/Jc9grz1Oo6

Neither of those crosses me as a blocking offense. The first one’s a bit snippy but not anything particularly egregious. But I did a bit of googling and found that LOLGOP tends to block quite, um, liberally.

My policy on blocking people on Twitter is that I don’t unless they are spammers. Granted, I only have about 250 followers right now. But I’ve gotten some feedback a lot harsher than what I wrote above, and a couple of weeks ago an anti-vax type wouldn’t shut up on the subject. But I’ve never blocked anyone over it. On one or two occasions, I actually followed someone because we got into a debate and they made some interesting points.

Twitter doesn’t tell you who has blocked you but I know Neal Boortz did, as I mentioned before, also for very lame tweets. I discovered this one because LOLGOP actually said something funny (it does happen, occasionally). Maybe someone with a big Twitter account is going to have to block more people or be deluged but I’m extremely dubious of that. I’ve tweeted harsher things to people with more followers than LOLGOP and haven’t been blocked.

Seems someone with a satirical Twitter account also has a thin skin.

Low Class Cleavage

It’s the end of the month, so time to put up a few posts I’ve been tinkering with.

No, just give the Great Unwashed a pair of oversized breasts and a happy ending, and they’ll oink for more every time.

– Charles Montgomery Burns

A few months ago, this study was brought to my attention:

It has been suggested human female breast size may act as signal of fat reserves, which in turn indicates access to resources. Based on this perspective, two studies were conducted to test the hypothesis that men experiencing relative resource insecurity should perceive larger breast size as more physically attractive than men experiencing resource security. In Study 1, 266 men from three sites in Malaysia varying in relative socioeconomic status (high to low) rated a series of animated figures varying in breast size for physical attractiveness. Results showed that men from the low socioeconomic context rated larger breasts as more attractive than did men from the medium socioeconomic context, who in turn perceived larger breasts as attractive than men from a high socioeconomic context. Study 2 compared the breast size judgements of 66 hungry versus 58 satiated men within the same environmental context in Britain. Results showed that hungry men rated larger breasts as significantly more attractive than satiated men. Taken together, these studies provide evidence that resource security impacts upon men’s attractiveness ratings based on women’s breast size.

Sigh. It seems I am condemned to writing endlessly about mammary glands. I don’t have an objection to the subject but I do wish someone else would approach these “studies” with any degree of skepticism.

This is yet another iteration of the breast size study I lambasted last year and it runs into the same problems: the use of CG figures instead of real women, the underlying inbuilt assumptions and, most importantly, ignoring the role that social convention plays in this kind of analysis. To put it simply: men may feel a social pressure to choose less busty CG images, a point I’ll get to in a moment. I don’t see that this study sheds any new light on the subject. Men of low socioeconomic status might simply feel less pressure to conform to social expectations, something this study does not seem to address at all. Like most studies of human sexuality, it makes the fundamental mistake of assuming that what people say is necessarily reflective of what they think or do and not what is expected of them.

The authors think that men’s preference for bustier women when they are hungry supports their thesis that the breast fetish is connected to feeding young (even though there is zero evidence that large breasts nurse better than small ones). I actually think their result has no bearing on their assumption. Why would hungrier men want fatter women? Because they want to eat them? To nurse off them? I can think of good reasons why hungry men would feel less bound by social convention, invest a little less thought in a silly social experiment and just press the button for the biggest boobs. I think that hungry men are more likely to give you an honest opinion and not care that preferring the bustier woman is frowned upon. Hunger is known to significantly alter people’s behavior in many subtle ways but these authors narrow it to one dimension, a dimension that may not even exist.

And why not run a parallel test on women? If bigger breasts somehow provoke a primal hunger response, might that preference be built into anyone who nursed in the first few years of life?

No, this is another garbage study that amounts to saying that “low-class” men like big boobs while “high-class” men are more immune to the lure of the decolletage and so … something. I don’t find that to be useful or insightful or meaningful. I find that it simply reinforces an existing preconception.

There is a cultural bias in some of the upper echelons of society against large breasts and men’s attraction to them. That may sound crazy in a society that made Pamela Anderson a star. But large breasts and the breast fetish are often seen, by elites, as a “low class” thing. Busty women in high-end professions sometimes have problems being taken seriously. Many busty women, including my wife, wear minimizer bras so they’ll be taken more seriously (or look less matronly). I’ve noticed that in the teen shows my daughter sometimes watches, girls with curves are either ditzy or femme fatales. In adult comedies, busty women are frequently portrayed as ditzy airheads. Men who are attracted to buxom women are often depicted as low-class, unintelligent and uneducated. Think Al Bundy.

This is, of course, a subset of a mentality that sees physical attraction itself as a low-class animalistic thing. Being attracted to a woman because she’s a Ph.D. is obviously more cultured, sophisticated and enlightened than being attracted to a woman because she’s a DD. I don’t think attraction is monopolar like that. As I noted before, a man’s attraction to a woman is affected by many factors — her personality, her intelligence, her looks. Breast size is just one slider on the circuit board that is men’s sexuality and probably not even the most important. But it’s absurd to pretend the slider doesn’t exist or that it is somehow less legitimate than the others. We are animals, whatever our pretensions.

Last year, a story exploded on the blogosphere about a naive physics professor who was duped into becoming a drug mule by the promise that he would marry Denise Milani, an extremely buxom non-nude model. What stunned me in reading about the story was the complete lack of any sympathy for him. Granted, he is an arrogant man who isn’t particularly sympathetic. But a huge amount of abuse was heaped on him, much of it focusing on his fascination with a model and particularly a model with extremely large and likely artificial breasts. The tone was that there must be something idiotic and crude about the man to fall for such a ruse and for such a woman.

The reaction to the story not only illuminated a cultural bias but how that bias can become particularly potent when the breasts in question are implants. The expression “big fake boobs” is a pejorative that men and women love to hurl at women they consider low class or inferior. Take Jenny McCarthy. There are very good reasons to criticize McCarthy for her advocacy of anti-vaccine hysteria (although I think the McCarthy criticism is a bit overblown since most people are getting this information elsewhere and McCarthy wasn’t the one who committed research fraud). But no discussion of McCarthy is complete until someone has insulted her for having implants and the existence of those implants has been touted as a sign of her obvious stupidity and the stupidity of those who follow her.

McCarthy actually doesn’t cross me as that stupid; she crosses me as badly misinformed. And it’s not like there aren’t hordes of very smart people who have bought into the anti-vaccine nonsense even sans McCarthy. But putting that aside, I don’t know what McCarthy’s breasts have to do with anything. Do people honestly think it would make a difference if she were an A-cup?

To return to this study and the one I lambasted last year: what I see is not only bad science but a subtle attempt by science to reinforce the stereotype that large breasts and an attraction to them are animalistic, low-class and uneducated. Bullshit speculation claims that men’s attraction to breasts is some primitive instinct. And more bullshit research claims that wealthy educated men can resist this primitive instinct but poorer less-educated men wallow in their animalistic desires. And when these garbage studies come out, blogs are all too eager to hype them, saying, “See! We told you those guys who liked big boobs were ignorant brutes!”

I think this is just garbage. The most “enlightened” academic is just as likely to ogle a busty woman when she walks by. He might be better trained at not being a jerk about it because he walks in social circles where wolf-whistles and come-ons are unacceptable. And he lives in a society where, if a bunch of social scientists are leering over you, you pretend to like the less busty woman. But all men live secret erotic lives in their heads. It’s extremely difficult to tease that information out and certainly not possible with an experiment as crude and obvious as this.

Once again, we see the biggest failing in sex research: asking people what they want instead of getting some objective measure. There are better approaches, some of which I mentioned in my previous article. If I were to approach this topic, I would look at the Google search database used in A Billion Wicked Thoughts to see if areas of high education (e.g., college towns) were less likely to look at porn in general and porn involving busty women in particular. That might give you some useful information. But there’s a danger that it wouldn’t reinforce the bias we’ve built up against big breasts and the men who love them.

Five Favorites: Best Action Films Since 2000

It’s time for another Five Favorites post with Donna of From the Rental Queue!

Donna: Welcome to the newest edition of “Five Favorites” with Michael Siegel! This month we decided to take on our “Five Favorite Action Films released since 2000”. For this list we wanted to focus as much as possible on pure action films. For that reason we decided to exclude the vast majority of superhero, sci-fi, or martial arts films, as we were really trying to focus on pure action. However, if we felt that the action in an excluded movie was just too good, we agreed that we would allow its inclusion. We capped the release date for this at 2000 – anything released before that year was also excluded. We wanted to focus on what the genre looks like today and not be tempted to fill our lists with old favorites.

We pooled our thoughts and came up with a short list of 30 films. Narrowing that down to just five was tough for me and I found myself unable to not pick one sci-fi film for my final list. Honorable mentions for me go out to “Valhalla Rising”, “Machete”, “Unstoppable”, and “Kick Ass”.

Mike: This was tough for me, as most of the action movies I watch slide into science fiction or superhero categories. Maybe it’s my perception, but we don’t seem to be getting the kind of pure action movies we did twenty years ago, when Schwarzenegger and Stallone ruled the box office. Almost everything these days is part of a genre franchise.

Nevertheless, here is my list, with only a little bit of rule-bending. I do want to make an honorable mention of “Kill Bill”. Kill Bill is a great action movie. Unfortunately, that great movie is wrapped up in a bloated 2-volume package. If you edited them down to one movie and cut the total run time by about 40 minutes, it would probably be near the top of this list. Its action scenes are excellent, the acting is great and the dialogue solid. But it is a prime example of what I’ve disparaged as action movie bloat. I also decided, at the last moment, to drop “Master and Commander” from my list because it is as much drama as action, and I’ll hold it back for a post on criminally underrated films.
