All posts by Mike

More on Vaccination

Cross-posted.

One issue that I am fairly militant about is vaccination. Vaccines are arguably the greatest invention in human history. Vaccines drove smallpox, a disease that slaughtered billions, to extinction. Polio, which used to maim and kill millions, is on the brink of eradication. A couple of weeks ago, rubella was declared eliminated from the Americas:

After 15 years of a widespread vaccination campaign with the MMR (measles-mumps-rubella) vaccine, the Pan American Health Organization and the World Health Organization announced yesterday that rubella no longer circulates in the Americas. The only way a person could catch it is if they are visiting another country or if it is imported into a North, Central or South American country.

Rubella, also known as German measles, was previously among a pregnant woman’s greatest fears. Although it’s generally a mild disease in children and young adults, the virus wreaks the most damage when a pregnant woman catches it because the virus can cross the placenta to the fetus, increasing the risk for congenital rubella syndrome.

Congenital rubella syndrome can cause miscarriage or stillbirth, but even the infants who survive are likely to have birth defects, heart problems, blindness, deafness, brain damage, bone and growth problems, intellectual disability or damage to the liver and spleen.

Rubella used to cause tens of thousands of miscarriages and birth defects every year. Now it too could be pushed to extinction.

Of course, many deadly diseases are now coming back thanks to people refusing to vaccinate their kids. There is an effort to blame this on “anti-government” sentiment. But while that plays a role, the bigger one is played by liberal parents who think vaccines cause autism (you’ll notice we’re getting outbreaks in California, not Alabama). As I’ve noted before, the original research that showed a link between vaccines and autism is now known to have been a fraud. Recently, we got even more proof:

On the heels of a measles outbreak in California fueled by vaccination fears that scientists call unfounded, another large study has shown no link between the measles-mumps-rubella vaccine and autism.

The study examined insurance claims for 96,000 U.S. children born between 2001 and 2007, and found that those who received MMR vaccine didn’t develop autism at a higher rate than unvaccinated children, according to results published Tuesday by the Journal of the American Medical Association, or JAMA. Even children who had older siblings with autism—a group considered at high risk for the disorder—didn’t have increased odds of developing autism after receiving the vaccine, compared with unvaccinated children with autistic older siblings.

96,000 kids — literally 8000 times the size of the sample Wakefield had. No study has ever reproduced Wakefield’s results. That’s because no other study has been a complete fraud.

There’s something else, though. This issue became somewhat personal for me recently. My son Ben came down with a bad cough, a high fever and vomiting. He was eventually admitted to the hospital for a couple of days with pneumonia, mainly to get rehydrated. He’s fine now and playing in the next room as I write this. But it was scary.

I mention this because one of the first questions the nurses and doctors asked us was, “Has he been vaccinated?”

My father, the surgeon, likes to say that medicine is as much art as science. You can know the textbooks by heart. But the early symptoms of serious diseases and not-so-serious ones are often similar. An inflamed appendix can look like benign belly pain. Pneumonia can look like a cold. “Flu-like symptoms” can be the early phase of anything from a bad cold to Ebola. But doctors mostly get it right because experience with sick people has honed their instincts. They might not be able to tell you why they know it’s not just a cold, but they know (with Ben, the doctor’s instinct told him it wasn’t croup and he ordered a chest X-ray that spotted the pneumonia).

Most doctors today have never seen measles. Or mumps. Or rubella. Or polio. Or anything else we routinely vaccinate for. Thus, they haven’t built up the experience to recognize these conditions. Orac, the writer of the Respectful Insolence blog, told me of a sick child who had Hib. It was only recognized because an older doctor had seen it before.

When I told the doctors Ben had been vaccinated, their faces filled with relief. Because it meant that they didn’t have to think about a vast and unfamiliar terrain of diseases that are mostly eradicated. It wasn’t impossible that he would have a disease he was vaccinated against — vaccines aren’t 100%. But it was far less likely. They could narrow their focus to a much smaller array of possibilities.

Medicine is difficult. The human body doesn’t work like it does in a textbook. You don’t punch symptoms into a computer and come up with a diagnosis. Doctors and nurses are often struggling to figure out what’s wrong with a patient, let alone how to treat it. Don’t muddy the waters even further by making them worry about diseases they’ve never seen before.

Vaccinate. Take part in the greatest triumph in human history. Not just to finally rid ourselves of these hideous diseases but to make life much easier when someone does get sick.

Movie Review: Interstellar

So far, I have seen five of last year’s Best Picture nominees — Birdman, Boyhood, The Grand Budapest Hotel, The Imitation Game and Whiplash. I’ve also seen a few other 2014 films — Gone Girl, Guardians of the Galaxy and The Edge of Tomorrow — that rank well on IMDB. I’ll have a post at some point about all of them when I look at 2014 in film. But right now, they would all be running behind Interstellar, which I watched last night.

I try very hard to mute my hopes for movies but I had been anticipating Interstellar since the first teaser came out. I’m glad to report that it’s yet another triumph for Nolan. The film is simply excellent. The visuals are spectacular and clear, the characters are well-developed, and the minimalist score is one of Zimmer’s best so far. The ending and the resolution of the plot can be argued with, but it says something that I watched a three-hour movie in one sitting, which I rarely do unless it’s Lord of the Rings. I definitely recommend it, especially to those who are fans of 2001 or Tree of Life.

That’s not the reason I’m writing about it though.

One of the remarkable things about Interstellar is that it works very hard to get the science right. There are a few missteps, usually for dramatic reasons. For example, the blight affecting Earth works far faster than it would in real life. The spacecraft seem to have enormous amounts of fuel for planetary landings. The astronauts don’t use probes and unmanned landers to investigate planets before landing. And, as I mentioned, the resolution of the plot ventures well into the realm of science fiction and pretty much into fantasy.

But most of the film is beautifully accurate. The plan to save Earth (and the backup plan) is a realistic approach. Trips across planetary systems take months or years. Spacecraft have to rotate to create gravity (including a wonderful O’Neill Cylinder). Space is silent — an aesthetic I notice is catching on in sci-fi films as directors figure out how eerie silence can be. General and special relativity play huge roles in the plot. Astrophysicist Kip Thorne insisted on being as scientifically accurate as possible and it shows.

And the result is a better film. The emotional thrust of Cooper’s character arc is entirely built on the cruel tricks relativity plays on him. The resolution of Dr. Mann’s arc is built entirely on rock-solid physics, including the daring stunt Coop uses to save the day. The incredible sequences near the black hole could be taken right out of a physics textbook, including a decision that recalls The Cold Equations.

We’re seeing this idea trickle into more and more science fiction. Battlestar Galactica had muted sounds in space. Moon has reasonably accurate scientific ideas. Her had a sound approach to AI. Serenity has a silent combat scene in space, as did, for a moment, Star Trek. Gravity has some serious issues with orbital dynamics, but much of the rest is rock solid.

I’m hoping this will continue, especially if the rumors of a Forever War movie are true. A science fiction movie doesn’t need accurate science to be good. In fact, it can throw science out the window and be great (e.g., Star Wars). But I hope that Interstellar blazes a path for more science fiction movies that are grounded, however shakily at times, in real science. This could breathe new life into a genre that’s been growing staler with every passing year.

I don’t say this as an astrophysicist (one available for consultation for any aspiring filmmakers). I say this as a movie buff. I say this as someone who loves good movies and thinks great movies can be made that show science in all its beautiful, glorious and heart-stopping accuracy.

Post Scriptum: Many of my fellow astronomers disagree with me on Interstellar, both on the quality of the film and its scientific accuracy. You can check out fellow UVa alum Phil Plait here, although note that in saying it got the science wrong, he actually got the science wrong. Pro Tip: if you’re going to say Kip Thorne got the science wrong, be sure to do your homework.

Mathematical Malpractice Watch: IUDs and Teens

Right now, the liberal blogosphere is erupting over Republican plans to not fund a program to give free IUDs to low income women:

Republican legislators in Colorado will not authorize funding for a program that gives free IUDs to low-income women — an effort that many believe was responsible for hugely driving down teen births.

Colorado has recently experienced a stunning decline in its teen birth rate. Between 2007 and 2012, federal data shows that births declined 40 percent — faster than any other state in the country.

State officials attributed part of this success to the Colorado Family Planning Initiative, which provided free IUDs to low-income women seen at 68 family planning clinics across the state. Last year, state officials estimated that young women served by those family planning clinics accounted for about three-fourths of the overall decline in Colorado’s teen birth rate.

I disagree with the Republicans on this. But the idea that the free IUD program cut Colorado’s teen birth rate by 40% or 3/4 of 40% or anywhere close to 40% is high-test nonsense.

Here is the data from the CDC on teen birth rates. From the first graph, you’ll see that teen birth rates have been steadily falling for seventy years. Like most positive social trends, it has many, um, parents, each of which is flogged by whoever supports that particular issue. Availability of contraception has certainly played a role. The legalization of abortion played a role (although abortion rates peaked in the early 80’s). As social and professional barriers have fallen, many more women are delaying pregnancy for college and jobs. And there is some evidence that teenagers are waiting longer to have sex (that would be the dreaded “abstinence”).

Since 2007, however, the teen birth rate has fallen off a cliff. But not just in Colorado. It’s fallen everywhere, by an average of 30%. If anything, it’s fallen faster in red states than in blue ones (see Figure 9 of the CDC’s report). Colorado has seen the steepest decline (39%), but just behind it are the red states of Arizona (37%), Georgia (37%), North Carolina (34%), Utah (34%) and Virginia (33%).

Is Colorado’s IUD program so awesome that it dropped the teen birth rate for the entire country?

Given the extent of the program and Colorado having the largest reduction, it’s very probable that the IUD program did play a role here. But I would ballpark it at maybe 10% at the most.1 That’s not nothing and it’s probably worth continuing the program. But let’s not pretend the reduction is due only to that.

So what is causing the large reduction? Availability of contraception is playing a role, yes, but there’s something else going on. Birth rates have fallen for all women since 2007, not just teenagers. I don’t think it’s a coincidence (and neither does the CDC) that the teen birth rate plunged when we hit the worst recession since the Great Depression. If you look at historical birth rates, you’ll see a similar plunge during the 1930’s. And that was long before almost all of modern birth control existed, let alone free birth control.

I think that’s the story here. Colorado’s program was fortuitously timed in that regard and there is likely some synergy between the economic downturn and the IUD program (i.e., the program kicked in right when a bunch of women were more eager for birth control).

One of the difficult things about Mathematical Malpractice Watch is that I frequently end up attacking people I fundamentally agree with. I think Colorado should extend their IUD program (although I’m old enough to remember, in the 90’s, when Republican governors offering incentives for low-income women to use Norplant was denounced as eugenics). But the claim that it has produced a “huge” reduction in the teen birth rate is just not true.

Actually, there is a chance that the effect is 0%. Colorado had the sharpest reduction in teen pregnancy rates. It’s easy to go in, post facto, and identify a pet policy to pin it on while ignoring the thousand other factors occurring in fifty states. It’s called the Texas Sharpshooter Fallacy. Colorado might just be a statistical outlier and we’re crediting a policy for that outlierness because we like the policy. Colorado’s barely two standard deviations from the mean. I think it’s likely the IUD fund had an effect, but I’d be hard-pressed to prove it statistically.
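
To put a number on that intuition, here is a minimal Monte Carlo sketch (my own illustration, assuming state-level declines are roughly normally distributed). Even if every state's decline were pure noise, the best of fifty states would typically sit a bit more than two standard deviations above the mean:

```python
# Illustrative Monte Carlo only: if all 50 states' declines were nothing but
# noise drawn from a normal distribution, how far above the mean would the
# single best-performing state typically sit?
import random
import statistics

random.seed(42)
trials = 20_000
top_state_z = []
for _ in range(trials):
    declines = [random.gauss(0, 1) for _ in range(50)]  # 50 states, standardized
    top_state_z.append(max(declines))                   # z-score of the "best" state

print(f"Typical z-score of the top state under pure chance: {statistics.mean(top_state_z):.2f}")
# Prints roughly 2.2 -- so the best state landing about two standard deviations
# above the mean is about what you'd expect even with no policy effect at all.
```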

Disney Post

I don’t watch a lot of TV. This is not from any hippy-dippy hatred of TV; I just don’t have a lot of time for it. Apart from Game of Thrones, Doctor Who and sports, I’d rather spend my limited spare time reading or watching movies. And this is particularly true of sitcoms, which I’ve slowly grown less tolerant of.

But there is one exception: my daughter has gotten heavily into Disney channel TV shows. They are always on when she’s in the room (and we allow the TV to be on). So I’ve become very familiar with their shows and I thought I’d write a few words about them. Because reasons.

Continue reading Disney Post

The Imitation Game

When I went through my year-by-year breakdown of the Oscars, I said this:

Here’s the thing that strikes me about the last 35 years of film history. Over that span, the IMDB ratings for individual years have become populated by a much broader variety of films than ever before. Surprisingly, traditional Oscar fare does well. For all the lashing IMDB gets, great films are popular there. IMDB loves Scorsese, loves Kubrick and loves good film. The difference is the variety — IMDB also loves foreign films, action films, art films and animated films. These are things the Academy tends to ignore as they pick the same shit every year. Oscar bait has pretty much become its own genre. In other words, the problem is not so much that the Academy has regressed, it’s that they haven’t kept up.

If there is a movie from 2014 that defines “Oscar bait”, it’s The Imitation Game. It’s a good film and I recommend it, especially if you’re over 60. The directing is solid. The acting is superb (although anytime I saw Charles Dance, I thought, ‘don’t help him! It’s Tywin Lannister!’). The script is fine. But it has a flaw in that it screams “give me an Oscar!” at 100 decibels. At times, it’s like watching a movie by a precocious 16-year-old: “Look how good this movie is! Isn’t Benedict Cumberbatch awesome! Look, Keira Knightley! You love Keira Knightley! Conflict! Homophobia! Sacrifice!”

The Imitation Game doesn’t have the confidence to be its own movie. Instead, it’s a descendant of A Beautiful Mind. Instead of portraying Alan Turing as the eccentric but perfectly sociable person he was, they make him an autistic savant, not far removed from Russell Crowe’s John Nash. Instead of showing the years-long struggle to break Enigma, we get the sudden breakthrough from picking up girls in a bar, a scene so similar to the breakthrough scene of A Beautiful Mind, it’s almost insulting (and stupid; the technique that “breaks” Enigma in the movie is cryptology 101). It’s got plenty of good dialogue and the characters are well-defined. But the plot is surprisingly weak.

(Spoiler Warning: The worst part of the film, for me, is when Turing’s team starts holding back information so that the Nazis won’t know the code has been broken. That decision was real but it was made way higher up the chain of command. Portraying it as Turing’s decision was so unrealistic it jarred me out of the film. I think it would have been better for him to see how little of his intelligence was used and get frustrated.)

As I said, the movie’s fine. I give it a 7/10. It’s worth a rental. I was happy for Graham Moore when he won the Oscar and his acceptance speech was amazing. But like a lot of movies, The Imitation Game is a shadowy reflection of an even better movie: one that tries less hard to win an Oscar and, ironically, would have been a better candidate for one. I hate to go all nerdboy, but I think sticking closer to history (where Enigma was a long struggle, where Turing was the eccentric but brilliant leader of a massive team, and where the decision of when to use Enigma decrypts was made higher up) would have made for a better and more satisfying film.

As it is, this one will likely be forgotten in a few years. Just another piece of Oscar bait. That’s a pity. Cumberbatch, Tyldum and, to some extent, Moore, deserve better.

Exploring What Now?

This is kind of … odd.

At a recent conference on sex trafficking in Orlando, Florida, members of a panel warned attendees about the dangers of space exploration, saying it would be a gold mine for future sex traffickers.

“Space is going to be like a frontier town,” said Nicholas Kristoff, who chaired the panel on Sex Trafficking: the Long Term View. “There will be no law enforcement in space, which means that girls, some as young as 11, could be easily trafficked to colonies on Mars or in the asteroid belt where they would have to service up to 50 men a day in zero gravity.”

Asked for comment by e-mail, Julie Bindel expressed support for a ban on space exploration, noting that science fiction films have frequently depicted prostitution in space. She derided series like Firefly for presenting an unrealistic and unrepresentative model of future sex work. She noted that the film Total Recall featured numerous prostitutes including one with three breasts. “No little girl grows up wanting to be a triple-breasted Martian prostitute.” She further attributed the recent push for space exploration, particularly the Mars One TV show, as being due to efforts by the “pimp lobby” to create a completely new market for sex outside of the bounds of law enforcement.

After the panel, the organizers pointed to a recent study by Dominique Roe Sepowitz and the Office of Sex Trafficking Intervention Research which claimed that ads for interplanetary prostitution have increased 200% in the last year alone and presented evidence that shuttle launches are associated with major increases in sex trafficking. “We have good evidence that women and girls were trafficked into Cape Kennedy during the Apollo program as well,” she said. “Everywhere there is a rocket launch, there is sex trafficking.”

Such opposition is not new. In her seminal book Intercourse, feminist icon Andrea Dworkin noted that rockets have a phallic shape. “The push for more space exploration is clearly an effort to thrust these phallic rockets into the universe’s unconsenting vagina.” She advocated for an “enthusiastic consent” standard from other planets before further human exploration.

I don’t even know what to say about this. We haven’t sent a man to the moon in over 40 years and we have people worried about the future of sex in space. Takes all types, I guess.

Update: This post was part of Maggie’s April Fool.

The Princess Bride, at 28

One of the great pleasures of being a dad is introducing my kids to the things I like, especially movies. About a year ago, I showed Abby Star Wars for the first time.1 And just recently I showed her The Princess Bride.

I hadn’t seen Bride for a very long time because I got kind of sick of it for a while. Seeing it after so long, I had a few thoughts on why I would now regard it as a clear classic and easily the best film of 1987.

  • The action scenes in The Princess Bride are few, but they are remarkably well done. There is a clarity and a flow that is missing from a lot of modern action scenes. It’s obvious what’s going on and what’s at stake. This really jumped out at me when I was watching it with Abby. During the duel between Inigo and Westley, she actually gasped when Inigo was backed up toward the cliff. It hit me that she understood the terrain and the danger Inigo was being forced into. Many modern action films have no clarity like that. You wouldn’t see the cliff until he fell over it in slow-motion CGI and then turned around in midair to jump over Westley.
  • The moral difference between Westley and Humperdinck is critical to the resolution of the plot. Westley wins only because he showed mercy in sparing Fezzik and Inigo. By contrast, Humperdinck’s cruelty and cowardice drive Miracle Max to enable his defeat, leave him with few loyal subjects except the unreliable Rugen, and turn Buttercup against him.

    I once saw an interview with Trey Parker and Matt Stone where they talked about plot. They said that a bad plot is just a series of events — X happens, then Y happens, then Z happens. A good plot flows from what has happened before: X happens because Y happened, which causes Z to happen. The fates of the characters in The Princess Bride do not turn on strange coincidences and kick-ass karate moves; they turn on their character and the decisions they make.

  • It has since disappeared from the internet, but Joe Posnanski once wrote a great piece about the decline of Rob Reiner’s directorial career. Here are the movies Reiner has directed, with IMDB ratings and my comments:

    This is Spinal Tap (8.0) – Regarded as a classic comedy. And is.

    The Sure Thing (7.0) – I haven’t seen this but have heard good things.

    Stand By Me (8.1) – Good adaptation of King novella.

    The Princess Bride (8.2, #183 on top movies of all time) – Recognized as a classic.

    When Harry Met Sally … (7.6) – Very good romantic comedy.

    Misery (7.8) – Excellent thriller. Made Kathy Bates a household name.

    A Few Good Men (7.6) – Very good film. One of my dad’s favorites.

    North (4.4) – Ouch. This is where it all seemed to go wrong. Roger Ebert famously said he “hated, hated, hated” this movie and said that Reiner would recover from it faster than Ebert would. He was wrong. After making seven straight good to great movies, Reiner would never make another great movie.

    The American President (6.8) – Haven’t seen it; never will.

    Ghosts of Mississippi (6.6) – How do you put James Woods in a movie about civil rights and come out mediocre?

    The Story of Us (5.9) – Haven’t seen it; never will. This was the impetus behind Posnanski’s post: he hated The Story of Us, especially as he went in anticipating a good movie.

    Alex and Emma (5.5) – Haven’t seen it; never will.

    Rumor Has It … (5.5) – There was some noise at the time that it represented a return to form for Reiner. It didn’t.

    The Bucket List (7.4) – This IMDB rating seems weird to me. It was savaged by critics. But it did make some money and people seemed to like it, despite its bullshit.

    Flipped (7.7) – This must be IMDB’s recency bias.

    The Magic of Belle Isle (7.0) – This is a very low rating for a recent movie starring Morgan Freeman. That’s three straight 7’s. We’re not back to the days when Reiner was producing minor classics. And given IMDB’s bias on recent movies, I would take this with a grain of salt. Still, it suggested he might be recovering.

    And So It Goes (5.5) – Or maybe not.

    No film has returned to Reiner’s early form. No film has even gotten close. No one goes online and says, “Hey, there’s a new Rob Reiner film coming out!” Posnanski likened Reiner’s decline to a great young baseball player who seemed headed for the Hall of Fame based on his first few years but suddenly forgot how to play at age 26. Reiner has rebounded a bit but he’s now a utility player and pinch hitter. I don’t think he’ll ever recover the form he had in his first few films. And that’s a pity, because his early films were great.

  • I’m not overly fond of the Bechdel test, but Robin Wright is almost the only woman in the cast. The Ancient Booer, however, was pretty awesome (the actress died last year at the ripe age of 91). As was Carol Kane.
  • Bride is one of the first movies I can remember that became a hit on home video. It got good reviews. It got a few token award nominations (Oscar for Best Song; WGA for Best Screenplay). But it only did OK box office business. I didn’t see it in the theaters. But then it became a big hit on home video and became a cult classic and then a classic, full stop.

    Looking back on it, the lack of attention the film received was kind of embarrassing and a big demonstration of the blind spot award givers have for both comedy and fantasy. Baby Boom, Broadcast News, Dirty Dancing and Moonstruck were the Golden Globe nominees for Best Comedy. Bride was probably better than all four (although News and Moonstruck were and are well-regarded). Best Picture nominees were The Last Emperor, Fatal Attraction, Broadcast News, Hope and Glory and Moonstruck. That’s not an unreasonable slate but Bride has outlasted all of them. Most silly is the lack of a nomination for Best Screenplay: Bride is generally and correctly regarded as having a classic screenplay, certainly better than the five nominees from that year.

  • Roger Ebert defined a “family film” as one that appeals to both kids and adults. Bride is definitely that. It worked for Abby as a straight-up adventure tale with a beautiful princess, a handsome rogue and an evil prince. But for me (and her, as she gets older), the sly comedy is the selling point. It has affection for the material it mocks. That affection is the key difference between making a funny send-up of fairy tales and an unfunny one.
  • It’s been 28 years and I still get goosebumps when Inigo confronts Count Rugen.
  • Anyway, in a few years I’ll get to introduce Ben to the movie, which will allow me to experience it for the first time all over again.

    1. In keeping with an earlier post, I showed her the movies in the order of IV, V, I, II, III, VI. I was stunned at how well this worked. It massively improves the prequel trilogy, making the parallels to the original trilogy stronger. And it moves the reveal of Leia to Episode III, where it is much better done than in Episode VI. I hesitated on showing Episode III to Abby because of the violence, but she bore it well. The violence didn’t bother her as much as the psychological trauma of seeing Anakin fall to evil. Anyway, I highly recommend this order if you have any good opinion of the prequel trilogies (and maybe even if you don’t).

    Whither the Heavyweights?

    Joe Posnanski has a great post up on the subject of Mike Tyson and Tiger Woods. His argument, as far as Tyson goes, is that Tyson was over-rated as a fighter. Tyson could beat the hell out of lesser opponents and make it look absurdly easy. But against better opponents, he was frequently not only beaten but beaten badly. I’ll let you read Joe instead of excerpting because it’s one of those “you should read the whole thing” deals.

    Here’s the thing though. Maybe I’m out of touch, but it seems to me that Tyson was the last heavyweight champion that really captured the public imagination. Oh, there have been popular heavyweights since — Holyfield, Lewis, Jones. But they weren’t Iron Mike. They weren’t household names. They weren’t the subject of landmark video games. And I doubt they’ll be making cameo appearances in movies 20 years from now.

    For a while, Tyson was beloved. He had a great story and a winning smile and just destroyed people in the ring. I think Will Smith put it best: people didn’t just want Tyson to win at boxing; they wanted him to win at life. And when he got into trouble — when he created trouble for himself — it was heart-breaking.

    But Tyson was the last in a string of boxing champions that had captured the public’s imagination, from Sullivan to Braddock to Marciano to Ali (especially Ali) to Frazier to Foreman. These men defined the sport. The current champion — whom I had to look up — isn’t in that class. I don’t think anyone really has been since Tyson.

    Maybe we’re in an interim, waiting for the next fighter who will grab the American people’s attention. But I actually think that boxing’s day has simply passed. It’s a bit too violent, a bit too sensational, a bit too shaky for modern America. Team sports have taken over. It still makes money and has some cachet. But I don’t see it ever returning to its glory days.

    Parity Returns to College Football?

    So another College Football Season is done. Time to revisit my Bowl Championship System:

    A few years ago, I invented my own Bowl Championship Points system in response to the Bowl Championship Cup. You can read all about it here, including my now hilarious prediction that the 2013 national title game would be a close matchup. The basic idea is that the Championship Cup was silly, as evidenced by ESPN abandoning it. It decides which conference “won” the bowl season by straight win percentage among conferences playing three or more bowls. So it is almost always won by a mid-major conference that wins three or four bowls. The Mountain West has claimed five of them, usually on the back of a 4-2 or 3-1 record.

    My system awards points to conferences that play in a lot of bowls and a lot of BCS bowls. As such, it is possible for a mid-major to win, but they have to have a great year. The Mountain West won in 2010-2011, when they won four bowls including a BCS game. But it will usually go to a major conference.

    Here are the winners of the Bowl Championship Points system for the time I’ve been keeping it.

    1998-1999: Big Ten (12 points, 5-0, 2 BCS wins)
    1999-2000: Big Ten (10 points, 5-2, 2 BCS wins)
    2000-2001: Big East (8 points, 4-1, 1 BCS win)
    2001-2002: SEC (9 points, 5-3, 2 BCS wins)
    2002-2003: Big Ten (9 points, 5-2, 1 BCS win)
    2003-2004: ACC/SEC (9 points each)
    2004-2005: Big 12 (6 points, 4-3, 1 BCS win)
    2005-2006: Big 12 (8 points, 5-3, 1 BCS win)
    2006-2007: Big East/SEC (11 points each)*
    2007-2008: SEC (14 points, 7-2, 2 BCS wins)
    2008-2009: SEC/Pac 12 (11 points each)**
    2009-2010: SEC (10 points, 6-4, 2 BCS wins)
    2010-2011: Mountain West (8 points, 4-1, 1 BCS win)
    2011-2012: Big 12 (11 points, 6-2, 1 BCS Win)
    2012-2013: SEC (10 points, 6-3, 1 BCS win)
    2013-2014: SEC (11 points, 7-3, 0 BCS wins)

    (*In 2006-7, the Big East went 5-0 in bowls. But the SEC went 6-3, with two BCS wins and a national title. To my mind, that was equally impressive.)

    (**In 2008-9, the Pac 12 went 5-0 in bowls. But the SEC went 6-2, with a BCS win and a national title. Again, depth is important to winning the points system.)
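
    The exact scoring isn't spelled out here, but every total in the table above is consistent with a simple rule: two points for each bowl win, minus one for each bowl loss, plus a bonus point for each BCS (now playoff) win. A minimal sketch of that inferred rule:

```python
# Scoring rule inferred from the table above (not an official spec):
# +2 per bowl win, -1 per bowl loss, +1 bonus per BCS/playoff win.
def bowl_points(wins: int, losses: int, bcs_wins: int) -> int:
    return 2 * wins - losses + bcs_wins

# Spot checks against the table:
assert bowl_points(5, 0, 2) == 12   # 1998-1999 Big Ten
assert bowl_points(7, 2, 2) == 14   # 2007-2008 SEC
assert bowl_points(4, 1, 1) == 8    # 2010-2011 Mountain West
assert bowl_points(7, 3, 0) == 11   # 2013-2014 SEC

# 2014-2015: the Big Ten (6-5, three playoff-level wins) and the
# Pac 12 (6-3, one playoff win) both come out to 10 points.
print(bowl_points(6, 5, 3), bowl_points(6, 3, 1))
```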

    I have long been saying that the SEC’s dominance was waning, based on the points system. They had a good year last year, but their performance had slowly been declining from its 2008 peak. And to the extent that the SEC did dominate, it was a result of being one of the only conferences that played defense, not “SEC speed”. Last year, I saw the Pac 12 rising and predicted we were moving toward two super-conferences — the SEC and the Pac 12 — dominating the college football scene. But this year, the Big Ten moved into the discussion. In retrospect, that’s not surprising given that two of their best Bowl teams were able to play again.

    So who wins for 2014? Based on the points, the title is split between the Big 10 and the Pac 12. The Pac 12 went 6-3 with one playoff win. The Big 10 went 6-5 with three playoff wins. As a tie-breaker, I’m perfectly willing to give the title to the Big 10 based on Ohio State winning the championship. While they were barely above .500, I think the outstanding performance of their top teams is more impressive than Conference USA’s 4-1 performance in lesser bowls, which would have won the Bowl Championship Cup.

    But what really jumps out this year is the parity. The SEC went 7-5 for nine points of its own. Conference USA went 4-1 for seven points. For the first time since 2010-11, no conference had negative points. I think it’s safe to say that the Big Ten is back and can now claim, along with the SEC and Pac 12, to be one of the best conferences in the country. That’s good for the Big Ten. But I also think it’s good for college football. We’re better off when the game is competitive.

    Toys, Kids and the Crisis of Abundance

    One of the reasons I like having kids is the toys. Not because I like to play with them (although I do), but because there is nothing in the world quite like the look a child gets in their eyes when they get a toy, especially an unexpected and delightful one. When Abby was about three months old, I brought home a teething ring and a rattle. She was sitting in her car seat in the kitchen and saw me and her little eyes lit up. She just knew it was something for her. And every now and then, I’ll see that same delight.

    This week, however, I’ve been in one of my moods. Not a bad mood but a mood that makes me clean up the entire house from top to bottom. In doing so, I filled two huge garbage bags with nothing but crap. Papers with drawings on them, little trinkets and toys from kids’ parties, Happy Meals, giveaways and $1 trinkets that she simply had to have. And I don’t think my child is that unusual in that regard. It seems that every parent’s house is filling with these little pieces of crap. You have to dump it regularly or you’ll be overwhelmed.

    For children in the US — at least in the middle class and above — toys are no longer this rare and wonderful treat. They’re something they get on a regular basis, something they expect to see. Oh, they’ll still have delight when a really good one comes along. But it makes me indescribably sad to see these dozens of little toys, unwanted and unloved, to see the few minutes of happiness she got out of them before casting them aside. And I know that the same thing will happen with my son.

    (Interestingly, I think she feels the same way. There are toys she hasn’t played with in ages but I have to sneak them out of the house because she doesn’t want to part with them.)

    I also can’t help but think of the long-term impact. I’m no radical environmentalist, but it pains me to think of the resources and energy spent making millions of McDonald’s Teenage Mutant Ninja Turtle toys that will just end up in landfills, that will bring very little real joy to the world.

    It’s just another aspect of our crisis of abundance. We’re so rich and things are so cheap that they no longer have any value.

    The 2015 HOF Class

    Baseball Think Factory is compiling publicly released Hall of Fame ballots to get an idea of how this year’s balloting will go. You can check here to see how well their Ballot Collecting Gizmo did last year when compared to the final vote.

    Just to get this out of the way, I think publicly releasing Hall of Fame votes is a great idea and should be actively encouraged by the BBWAA and the Hall. When writers have to publicly defend their votes, you get much more thoughtful results (the odd Murray Chass aside — and at least he provides exercise for your neck muscles). Look at that second link and compare the public and private ballots. The difference is quite noticeable. For example, 99.5% of those who publicly released their ballots voted for Greg Maddux. This makes sense, since he was one of the greatest pitchers of all time. But only 95.9% of the private ballots did. It’s a small difference, but it shows the effect of accountability. It is much easier to vote against Maddux because of his era or some dim-witted “no one should get in on the first ballot” logic when you don’t have to defend that attitude in public.

    Note that almost every player did better on the public ballots than the private ones except a few like Don Mattingly and Lee Smith. I think this is actually a generational thing: older writers not wanting to throw their ballots out to the internet wolves and also favoring older players.

    Looking at BBTF, it looks like Johnson, Martinez and Smoltz will get in this year. Biggio is doing even better than last year, when he fell two votes shy, but I would still hesitate to say he’ll make it. Piazza is currently polling at 77.8%, which means he will likely not make it, as the gap between his public and private numbers was very large last year, probably due to unsubstantiated PED rumors. Bagwell, Raines, Schilling and Mussina look likely to take small steps forward.

    What’s interesting, however, is that this looks to be the year we will see the big purge of the ballot that the HOF has clearly been wanting. One problem the HOF ballot has had in recent years is a super-abundance of candidates. Joe Posnanski recently commented that he regarded Fred McGriff as a marginal HOFer and had him 17th on his ballot. There are ways to improve the process, including Bill James’s recent suggestion. But I think we’re going to see the glut of candidates finally shrink this year. Why?

    At least three and possibly four men will get inducted. Don Mattingly will drop off the ballot as his time expires. And looking at the votes and considering how the private balloting has gone, it is quite possible that Sammy Sosa, Mark McGwire and Gary Sheffield will also drop off the ballot. In fact, of the new arrivals, it’s possible that none will be on the ballot again next year. That’s a big reduction in the backlog.

    Next year will see Ken Griffey Jr. and Trevor Hoffman probably voted in on the first ballot. It’s possible Jim Edmonds or Billy Wagner will linger around. But that will crack the door open for Bagwell, Raines, Schilling and/or Mussina. Then in 2017, we’ll see Pudge Rodriguez (in on the first ballot), Vladimir Guerrero (in after a few years) and Manny Ramirez (excluded by steroid allegations). That will keep the door open. Then things get interesting again in 2018.

    In short, the storm has passed and the Hall has apparently passed its judgement on the PED era. Pitchers are in. Great players without specific allegations are in. Palmeiro, McGwire, Sosa and Sheffield are out. Bonds and Clemens are in limbo but almost certainly will not make it before their eligibility expires.

    I think the Hall will have to go back and address the steroid era again, especially once they find out that one (or likely several) current HOFers used steroids. It’s going to be difficult to have a Hall without Bonds, Clemens, Sosa, McGwire, A. Rodriguez, Palmeiro, Sheffield and Manny Ramirez. But I think it will be at least a decade before we get there. The hysteria over PED’s is waning. But it’s not over yet.

    The End of the Era

    It’s my blog. I can vent if I feel the need.

    On October 21, 1983, the Atlanta Braves’ effort to become a serious team ended for almost a decade. On that day, the Braves completed a trade made two months earlier for Cleveland Indians pitcher Len Barker. Going to the Indians was Brook Jacoby, a young third baseman who would nail down the hot corner in Cleveland for a decade, go to a couple of All-Star Games and tally over a thousand hits and a hundred home runs. In their defense, the Braves thought they had third base nailed down in Bob Horner, who had already smashed 158 home runs through age 25 and looked like a future Hall of Famer. There was no way to know that Horner would be out of baseball by 30 due to injuries.

    But the real prize for the Indians was Brett Butler, Atlanta’s excellent and popular center fielder. Butler was a strong leadoff man who put up a .344 OBP and swiped 39 bases. He would go on to become one of the best leadoff men in history, a borderline HOF candidate who smashed 2375 hits, stole 558 bases and had a lifetime .377 OBP. He was a great player. It was obvious to everyone that he would at least become a good player and score a ton of runs hitting in front of Dale Murphy and Bob Horner. But the Braves traded him for Len Barker because … I guess … Barker had thrown a perfect game. Barker would go 10-20 in 232.1 innings with a 4.64 ERA. That was over three years, not one. He would be out of baseball within four years.

    The Barker-Butler trade is well-known as one of the worst in history. But it was more than just a bad trade. For the Braves, it was the end of an era. In 1982, the Braves had one of their best seasons, winning 89 games to take the division, then losing the NLCS to the Cardinals. In 1983, they won 88 games but a late-season collapse let the Dodgers win the division. With Joe Torre at the helm and a team that included Dale Murphy, Bob Horner, Glenn Hubbard, Phil Niekro — all great players — and some young pitching, they looked poised to turn around “Loserville”, as Atlanta was known (and, to some extent, still is). They looked like they would become the first team from Atlanta, in any sport, to be a serious presence.

    But the next year, they fell to 80 wins. Then Horner got hurt and went to Japan. Torre got fired. Niekro got traded. Brad Komminsk flopped. The farm system imploded. And the Braves returned to being one of the worst teams in baseball.

    This was why 1991 was not only a miracle year, it was one of the great miracle years in sports. The Braves didn’t just go worst-to-first and come within a Lonnie Smith hesitation of a championship. They went from a truly terrible team, a nothing on the sports radar, to a dynasty. They were good, they were young and they were run by two great men who knew what they were doing.

    And the result was one of the great runs in sports history: 14 straight division titles, five pennants and a championship. An average of 98 wins per season. Four players — Chipper Jones, Greg Maddux, Tom Glavine and John Smoltz — are in the Hall of Fame or soon will be. A few more — Fred McGriff, Javy Lopez — have borderline cases. Still more were just great damned players. Their manager is in the Hall of Fame and you could make an argument for their General Manager and their Pitching Coach. It was an amazing time to be a Braves fan. You turned on the TV and knew you were watching a great team that would usually win. If they fell behind in the standings, you knew it was only a matter of time until they would catch up. It was a joy to turn on TBS and watch them dominate. The “Braves Way” was a real thing: great pitching, great defense, timely hitting.

    The thing is that the Braves weren’t just a great team, they were a smart team. They developed great prospects (Lopez, Ryan Klesko, Marcus Giles, Rafael Furcal, Chipper Jones, Andruw Jones, David Justice), they traded for great players (Fred McGriff especially), they signed impact free agents (Greg Maddux, Andres Galarraga). They had a great major league team and a great farm system. If someone got injured or left via free agency, they had the depth to replace them. Year after year, everything they touched was gold.

    That era has long been over, as exemplified by this summer’s capstone — the induction of Maddux, Glavine and Cox into the Hall of Fame. But now we see we are back to the bad old days. It turns out that capstone was also a gravestone:

    Here Lieth the Braves Dynasty: 1991-2005

    Last year, I thought maybe the good days were back after almost a decade of middling shuffling semi-contention. They won 96 games, took the division and looked like a team poised for a multi-year run. True, they had albatross contracts in Dan Uggla and BJ Upton. But they had a slew of great young players — Freddie Freeman, Evan Gattis, Jason Heyward, Justin Upton, Andrelton Simmons, Julio Teheran, Craig Kimbrel, Mike Minor, Kris Medlen. They’d signed a number of them to long-term contracts.

    But it was more than just that. The Braves were fun to watch again. I looked forward to every game and would watch them on mlb.tv while messaging my brother. It felt like 1991 all over again, like we were returning to the good old days.

    What a difference a year makes. The Braves had a lousy 2014 season, with the bats completely collapsing and several of their young pitchers getting hurt. They finished under .500 and looked terrible the last few months. I couldn’t watch them, it was so maddening.

    But as disappointing as the season was, there were still reasons for optimism. They had one of the best pitching staffs in the league. Their defense was very good. They still had the young core that had looked so promising a year earlier. A change of hitting coach (or maybe manager) and they looked good to bounce back in 2015 and fulfill their destiny as the next Braves dynasty.

    Well, that apparently wasn’t good enough. A month ago, they traded away Jason Heyward — a 24 y/o Atlanta native and one of the best players on the team — for a disappointing pitcher from the Cardinals. They traded Tommy La Stella, one of their few prospects who could get on base, for an oft-injured former pitching prospect. Yesterday, they traded Justin Upton, their second best player, for some minor league prospects, the best of which is a disappointing first-round pitcher coming off arm surgery. The rumor is that they’re accumulating capital to make some major plays in the international market. I’m dubious. I don’t see Liberty Media — the cheapskate owners who wrecked the dynasty — shelling out for top-tier talent.

    The Heyward trade was the watershed for me — an awful echo of the Len Barker trade. The Braves traded away their most popular player — a young talent who is still years away from his prime — for the ultimate bag of magic beans: a young pitcher. And the language surrounding the trade was even more disheartening. The Braves talked about “years of control” — i.e., how many more years they have before the players reach free agency. They talked about how they’re building for 2017, when their stadium opens. They talked about how they were trying to get out from under some bad contracts.

    I understand the theory behind all that. The problem is that these are the things said by loser organizations. Loser organizations are always rebuilding, always aiming to contend a few years from now, always worried about years-of-control and payroll implications. Smart teams worry about those things too but they also know how to hold onto their best players and how to build a team that will contend, full stop, not just in some nebulous future window. They don’t trade away almost all of their on-base skills for minor league scraps and pitchers with injury risks. They don’t trade away talented young players and sign older less-talented players to replace them. They don’t look at the team that kept runs off the board better than almost anyone last year but couldn’t string three hits together and think their real need is mawr pitching.

    The Braves aren’t some ancient team at the end of a great run trading away their aging stars. They were one of the youngest teams in the majors with some of their best players locked up long term. This isn’t the Red Sox rebuilding when their stars all aged overnight. This is like the Royals tearing up their young team two years before those players took them to the World Series.

    (And it’s made worse by the signing of Nick Markakis to a 4-year deal. Markakis is six years older than Heyward. He’s four years older than Upton. And he’s not nearly as good as either of them. The Braves outfield has gotten older while shedding all of its on-base skills, all of its power and all of its defense. This is not how you build a team that will contend three years from now. This is how you become the Marlins.)

    Looking at the destruction of a good team, the trading away of good young players for scraps, the obsession over payroll (for an organization awash in money), I can’t help but think of the bad old days when the Braves would trade away Brett Butler and sign Ken Oberkfell, when they’d break Pascual Perez and trade for Danny Heep or Ozzie Virgil, when they talked excitedly about potentially signing the remnants of an aging Jim Rice. Yesterday’s Upton trade simply confirmed my suspicions. The Braves are no longer a serious organization. They had a team that could have contended when they opened their new stadium. Now they don’t.

    I’m probably being overly bitter and pessimistic. But I’m dubious that this team will contend anytime in the next five years and I’m certain they will not approach anything like a dynasty as long as Liberty Media are in charge. They’re simply too cheap and too stupid to build the kind of powerhouse they used to be known for.

    No, we’re heading back to the bad old days when the Braves were the joke of the National League. And with the Hawks still unserious and the Falcons “contending” at 5-9, I fear that the days of Loserville have returned.

    Addendum: Braves’ apologists are saying this team couldn’t afford to keep Upton and Heyward. This is garbage. Uggla’s contract comes off the books next year. And the Braves’ organization has a revenue stream of $253 million. They could easily pay those two outfielders $40 million a year and not break a sweat. This is just an excuse from a cheapskate owner.

    How Many Women?

    Campus sexual violence continues to be a topic of discussion, as it should be. I have a post going up on the other site about the kangaroo court system that calls itself campus justice.

    But in the course of this discussion, a bunch of statistical BS has emerged. This centers on just how common sexual violence is on college campuses, with estimates ranging from the one-in-five stat that has been touted, in various forms, since the 1980’s, to a 0.2 percent rate claimed in a recent op-ed.

    Let’s tackle that last one first.

    According to the FBI “[t]he rate of forcible rapes in 2012 was estimated at 52.9 per 100,000 female inhabitants.”

    Assuming that all American women are uniformly at risk, this means the average American woman has a 0.0529 percent chance of being raped each year, or a 99.9471 percent chance of not being raped each year. That means the probability the average American woman is never raped over a 50-year period is 97.4 percent (0.999471 raised to the power 50). Over 4 years of college, it is 99.8 percent.

    Thus the probability that an American woman is raped in her lifetime is 2.6 percent and in college 0.2 percent — 5 to 100 times less than the estimates broadcast by the media and public officials.

    This estimate is way too low. It is based on taking one number and applying high school math to it. It misses the mark because it uses the wrong numbers and some poor assumptions.

    First of all, the FBI’s stats cover only documented forcible rapes; they do not account for under-reporting and do not include sexual assault. The better comparison is the National Crime Victimization Survey, which estimates about 300,000 rapes or sexual assaults in 2013 for an incidence rate of 1.1 per thousand. But even that number needs some correction because about 2/3 of sexual violence is visited upon women between the ages of 12 and 30 and about a third upon college-age women. The NCVS rate indicates about a 10% lifetime risk or about a 3% college-age risk for American women. This is lower than the 1-in-5 stat but much higher than 1-in-500.

    (*The NCVS survey shows a jump in sexual violence in the 2000’s. That’s not because sexual violence surged; it’s because they changed their methodology, which increased their estimates by about 20%.)
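
    To make that arithmetic concrete, here is a rough back-of-the-envelope sketch. The roughly 300,000 annual incidents and the one-third share among college-age women come from the numbers above; the population denominator is my own approximation:

```python
# Back-of-the-envelope sketch. The ~300,000 annual incidents and the "about a
# third among college-age women" share come from the text above; the assumed
# number of college-age women in the U.S. (~10.5 million) is an approximation.
def cumulative_risk(annual_rate: float, years: int) -> float:
    """Chance of at least one incident over `years`, given a constant annual rate."""
    return 1 - (1 - annual_rate) ** years

incidents_per_year = 300_000
college_share = 1 / 3                 # share of incidents among college-age women
college_age_women = 10_500_000        # assumed denominator

annual_rate = incidents_per_year * college_share / college_age_women
print(f"Annual rate for college-age women: {annual_rate:.2%}")
print(f"Cumulative risk over 4 years:      {cumulative_risk(annual_rate, 4):.1%}")
# Roughly 1% per year and 3-4% over four years -- in line with the ~3% figure
# above, and far from both 1-in-5 and 1-in-500.
```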

    So what about 1-in-5? I’ve talked about this before, but it’s worth going over again: the one-in-five stat is almost certainly a wild overestimate:

    The statistic comes from a 2007 Campus Sexual Assault study conducted by the National Institute of Justice, a division of the Justice Department. The researchers made clear that the study consisted of students from just two universities, but some politicians ignored that for their talking point, choosing instead to apply the small sample across all U.S. college campuses.

    The CSA study was actually an online survey that took 15 minutes to complete, and the 5,446 undergraduate women who participated were provided a $10 Amazon gift card. Men participated too, but their answers weren’t included in the one-in-five statistic.

    If 5,446 sounds like a high number, it’s not — the researchers acknowledged that it was actually a low response rate.

    But a lot of those responses have to do with how the questions were worded. For example, the CSA study asked women whether they had sexual contact with someone while they were “unable to provide consent or stop what was happening because you were passed out, drugged, drunk, incapacitated or asleep?”

    The survey also asked the same question “about events that you think (but are not certain) happened.”

    That’s open to a lot of interpretation, as exemplified by a 2010 survey conducted by the U.S. Centers for Disease Control and Prevention, which found similar results.

    I’ve talked about the CDC study before and its deep flaws. Schow points out that the victimization rate they are claiming is way higher than the National Crime Victimization Survey (NCVS), FBI and Rape, Abuse and Incest National Network (RAINN) estimates. All three of those agencies use much more rigorous data collection methods. NCVS does interviews and asks the question straight up: have you been raped or sexually assaulted? I would trust the research methods of these agencies, who have been doing this for decades, over a web survey of two colleges.

    Another survey recently emerged from MIT which claimed 1-in-6 women are sexually assaulted. But not only does this suffer from the same flaws as the CSA study (a web survey with voluntary participation), its own numbers don’t support the headline claim:

    When it comes to experiences of sexual assault since starting at MIT:

  • 1 in 20 female undergraduates, 1 in 100 female graduate students, and zero male students reported being the victim of forced sexual penetration
  • 3 percent of female undergraduates, 1 percent of male undergraduates, and 1 percent of female grad students reported being forced to perform oral sex
  • 15 percent of female undergraduates, 4 percent of male undergraduates, 4 percent of female graduate students, and 1 percent of male graduate students reported having experienced “unwanted sexual touching or kissing”
  • All of these experiences are lumped together under the school’s definition of sexual assault.

    When students were asked to define their own experiences, 10 percent of female undergraduates, 2 percent of male undergraduates, three percent of female graduate students, and 1 percent of male graduate students said they had been sexually assaulted since coming to MIT. One percent of female graduate students, one percent of male undergraduates, and 5 percent of female undergraduates said they had been raped.

    Note that even with a biased study, the result is 1-in-10, not 1-in-5 or 1-in-6.

    OK, so web surveys are a bad way to do this. What is a good way? Mark Perry points out that the one-in-five stat is inconsistent with another number claimed by advocates of new policies: a reporting rate of 12%. If you assume a reporting rate near that and use the actual number of reported assaults on major campuses, you get a rate of around 3%.

    Hmmm.

    Further research is consistent with this rate. For example, here, we see that UT Austin has 21 reported incidents of sexual violence. That’s one in a thousand enrolled women. Texas A&M reported nine, one in three thousand women. Houston reported 11, one in 2000 women. If we are to believe the 1-in-5 stat, that’s a reporting rate of half a percent. A reporting rate of 10%, which is what most people accept, would mean … a 3-5% risk for five years of enrollment.
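
    The same back-out can be done directly from those campus figures. The 21 reports and the one-in-a-thousand ratio come from the paragraph above (implying roughly 21,000 enrolled women); the 10% reporting rate is the commonly accepted figure mentioned there:

```python
# Implied risk from the UT Austin figures in the text: 21 reports for roughly
# 21,000 enrolled women (one in a thousand), assuming a ~10% reporting rate.
reports, enrolled_women, reporting_rate = 21, 21_000, 0.10
annual_rate = reports / reporting_rate / enrolled_women          # ~1% per year
risk_over_enrollment = 1 - (1 - annual_rate) ** 5                # five years of enrollment
print(f"{risk_over_enrollment:.1%}")                             # about 4.9%
```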

    So … Mark Perry finds 3%. Texas schools show 3-5%. NCVS and RAINN stats indicate 2-5%. Basically, any time we use actual numbers based on objective surveys, we find the number of women who are in danger of sexual violence during their time on campus is 1-in-20, not 1-in-5.

    One other reason to disbelieve the 1-in-5 stat. Sexual violence in our society is down — way down. According to the Bureau of Justice Statistics, rape has fallen from 2.5 per 1000 to 0.5 per thousand, an 80% decline. The FBI’s data show a decline from 40 to about 25 per hundred thousand, a 40% decline (they don’t account for reporting rate, which is likely to have risen). RAINN estimates that the rate has fallen 50% in just the last twenty years. That means 10 million fewer sexual assaults.

    Yet, for some reason, sexual assault rates on campus have not fallen, at least according to the favored research. They were claiming 1-in-5 in the 80’s and they are claiming 1-in-5 now. The sexual violence rate on campus might fall a little more slowly than the overall society because campus populations aren’t aging the way the general population is and sexual violence victims are mostly under 30. But it defies belief that the huge dramatic drops in violence and sexual violence everywhere in the world would somehow not be reflected on college campuses.

    Interestingly, the decline in sexual violence does appear if you polish the wax fruit a bit. The seminal Koss study of the 1980’s claimed that one-in-four women were assaulted or raped on college campuses. As Christina Hoff Sommers and Maggie McNeill pointed out, the actual rate was something like 8%. A current rate of 3-5% would indicate that sexual violence on campus has dropped in proportion to that of sexual violence in the broader society.

    It goes without saying, of course, that 3-5% of women experiencing sexual violence during their time at college is 3-5% too many. As institutions of enlightenment (supposedly), our college campuses should be safer than the rest of society. I support efforts to clamp down on campus sexual violence, although not in the form that it is currently taking, which I will address on the other site.

    But the 1-in-5 stat isn’t reality. It’s a poll-test number. It’s a number picked to be large enough to be scary but not so large as to be unbelievable. It is being used to advance an agenda that I believe will not really address the problem of sexual violence.

    Numbers mean things. As I’ve argued before, if one in five women on college campuses are being sexually assaulted, this suggests a much more radical course of action than one-in-twenty. It would suggest that we should shut down every college in the country since they are the most dangerous places for women in the entire United States. But 1-in-20 suggests that an overhaul of campus judiciary systems, better support for victims and expulsion of serial predators would do a lot to help.

    In other words, let’s keep on with the policies that have dropped sexual violence 50-80% in the last few decades.

    A Fishy Story

    Clearing out some old posts.

    A while ago, I encountered a story on Amy Alkon’s site about a man fooled into fathering a child:

    Here’s how it happened, according to Houston Press. Joe Pressil began dating his girlfriend, Anetria, in 2005. They broke up in 2007 and, three months later, she told him she was pregnant with his child. Pressil was confused, since the couple had used birth control, but a paternity test proved that he was indeed the father. So Pressil let Anetria and the boys stay at his home and he agreed to pay child support.
    Fast forward to February of this year, when 36-year-old Pressil found a receipt – from a Houston sperm bank called Omni-Med Laboratories – for “cryopreservation of a sperm sample” (Pressil was listed as the patient although he had never been there). He called Omni-Med, which passed him along to its affiliated clinic Advanced Fertility. The clinic told Pressil that his “wife” had come into the clinic with his semen and they performed IVF with it, which is how Anetria got pregnant.

    The big question, of course, is how exactly did Anetria obtain Pressil’s sperm without him knowing about it? Simple. She apparently saved their used condoms. Gag. (Anetria denies these claims.)

    “I couldn’t believe it could be done. I was very, very devastated. I couldn’t believe that this fertility clinic could actually do this without my consent, or without my even being there,” Pressil said, adding that artificial insemination is against his religious beliefs. “That’s a violation of myself, to what I believe in, to my religion, and just to my manhood,” Pressil said.

    I’ve now seen this story show up on a couple of other sites. The only links in Google are for the original claim and her denial. I can’t find out how it was resolved. But I suspect his claim was dismissed. The reason I suspect this is because his story is total bullshit.

    Here’s a conversation that has never happened:

    Patient: “Hi, I have this condom full of sperm. God knows how I got it or who it belongs to. Can you harvest my eggs and inject this into them?”

    Doctor: “No problem!”

    I’ve been through IVF (Ben was conceived naturally after two failed cycles). It is a very involved process. We had to have interviews, then get tests for venereal diseases and genetic conditions. I then had to show up and make my donation either on site or in a nearby hotel. And no, I was not allowed to bring in a condom. Condoms contain spermicides and lubricants that murder sperm, and latex is not sperm’s friend. Even in a sterile container, sperm cells don’t last very long unless they are placed in a special refrigerator. Freezing sperm is a slow process that takes place in a solution that keeps the cells from shattering from ice crystal formation.

    And that’s only the technical side of the story. There’s also the legal issue that no clinic is going to expose themselves to a potential multi-million dollar lawsuit by using the sperm of a man they don’t have a consent form from.

    So, no, you can’t just have a man fill a condom, throw it in your freezer and get it injected into your eggs. It doesn’t work that way. This is why I believe the woman’s lawyer, who claims Pressil agreed to IVF and signed consent forms.

    I’ve seen the frozen sperm canard come up on TV shows and movies from time to time. It annoys me. This is something conjured up by people who haven’t done their research.