I just noticed I have about five Linkoramas lingering in my queue. So I’ll clear out a whole bunch here.
Shortly after I graduated from college, I was on a kick to try to get healthy and lose weight. It still hasn’t worked, 19 years la- … holy crap, 19 years?! … let me see … 1994 … good God, I’m old … anyway, it still hasn’t worked 19 years later.
At one point, I tried Herbalife. This was not something I came to of my own accord. A friend’s husband was into it on the lowest tier of their multi-level marketing. I was too young and stupid to know just how idiotic herbal supplements were, so I figured “what the hell” and jumped.
It was a big mistake. The compound contained ephedra, and it caused my first incident of Premature Ventricular Contractions — a common, harmless arrhythmia that is nevertheless scary as hell. Some time later, I tried a prescription weight-loss pill that also caused PVCs. They mostly went away after I stopped. I still get them occasionally, most notably right after my wedding and when I haven’t been getting enough sleep. I had a full cardio workup six years ago and everything looked fine. But I still wonder if the ephedra did any permanent damage.
The thing is that I’m not normally into that sort of thing. But it was sold to me because I wasn’t terribly familiar with high-pressure marketing techniques and certainly didn’t expect them from a friend’s husband. I’ve since … well not wised up, exactly. I’ve gotten confident enough to tell people to fuck off. In fact, high pressure sales pitches are the surest way to drive me away. When we bought our first home, I literally walked out on people who tried to get me to buy right then with “if you buy right now” incentives. The home we bought was sold in a low-pressure way. We felt — correctly as it happened — that this reflected the salesman’s confidence in his product.
I’m rambling. Let me get to the point. Part of the sales pitch I got for Herbalife went like this:
Mike, I’m telling you, I’ve investigated all kinds of supplements. I’ve looked into everything. And I’ve researched this product really thoroughly. I wouldn’t take anything I didn’t know everything about. So trust me: this is the real deal.
Standard stuff, right? But hidden within that is something I’ve come to recognize as the mark of a shyster. If someone spends an inordinate amount of time telling you, in a vague sense, how much experience they have and how much expertise they have and how they’ve really researched this and they’ve looked at everything out there, they are, to be blunt, full of shit.
This instinct has served me well. When Neal Boortz began flogging the Fair Tax, he talked about how much research had been done and how he’d looked at every plan out there (really? every plan?). That pinged my radar and I did some research and found out that the Fair Tax had giant gaping problems (documented here). When a contractor came by and gave me a pitch about how he’d tried everything and he was the best expert, I went with someone else.
And you see this constantly in the alternative medicine crowd. Sellers and promoters will constantly tell you how extensively they’ve surveyed things, how much research they’ve done, how much experience they have and, inevitably, it turns out not to be the case.
So, in my roundabout way, here is Mike’s Rule of Expertise: Experts don’t constantly reassure you of their expertise; they simply dole out facts and data.
Let me cite some good examples from my blogroll: Radley Balko doesn’t talk about what an expert he is on criminal justice matters; he tells you specifically what he’s learned, seen and read. The Bad Astronomer doesn’t talk about how much experience he has in astrophysics; he points you at research and researchers who’ve done the work. Maggie McNeill doesn’t pontificate about her extensive background in the sex industry; she links every study and opinion piece she can find. Joe Posnanski doesn’t talk about how many athletes he’s interviewed or how much Bill James likes him; he crunches the numbers, gets the quotes and presents the facts.
I’ve been to hundreds of science talks. Not one has centered around the speaker’s credentials and how they’ve explored every alternate theory. They present hypothesis, data and conclusion. The best ones acknowledge their limitations and possible alternate theories. The kind of dead certainty you will encounter in, say, a homeopathy practitioner, is minimal in any good scientist and absent in the best ones.
This is how real experts do it. Experts want you to trust the facts; con men want you to trust them.
(On a related note: I, like most astronomers, rarely affix “Ph.D.” to the end of my name unless I’m applying for a grant where the credential is required. I also only refer to myself as “Dr. Siegel” when yelling at the cable company. And the only time I’m called that at work is either as part of a running gag or when being addressed formally (in grant correspondence, for example; and I usually encourage them to call me Mike). This is partially because astronomers are an informal bunch. It is also related to my time at UVa, where everyone except Ed School professors and medical doctors takes the moniker of “Mr.” or “Ms.” as a sign of respect to Mr. Jefferson.
But I also think this flows from the same skepticism of over-credentialing. A real scientist wants you to trust the data, not them. The only academics I know who use the Ph.D. suffix or the Doctor prefix are either a) pretentious, b) medical doctors, for whom I think it’s appropriate, or c) women or minorities in disciplines where they have trouble being taken seriously, and it’s hard to begrudge them. And anyone who refers to themselves as “Dr. Smith, Ph.D.” is almost certainly full of it.)
What brought this to my frontal lobe was a re-eruption (a few months ago now) of controversy over Sex at Dawn. I find the premise of Sex at Dawn — that humans are naturally polyamorous — interesting if flawed. But what has long bothered me is the certainty with which this supposedly scientific premise is discussed. Every time I hear Christopher Ryan speak, I feel like he’s about to sell me herbal supplements. He’s not quite as bad as my friend’s now ex-husband. He actually does know some stuff. But he seems stunningly unaware of what he doesn’t know or of what facts are inconsistent with his thesis. Is he right? Dammit, if this comes down to me reading his book, I give up.
Anyway, there is some controversy over Christopher Ryan’s credentials. I took a look at his wikipedia page and this is what I found:
He received a BA in English and American literature in 1984 and an MA and Ph.D. in psychology from Saybrook University, in San Francisco, CA twenty years later. He spent the intervening decades traveling around the world, living in unexpected places working odd jobs (e.g., gutting salmon in Alaska, teaching English to prostitutes in Bangkok and self-defense to land-reform activists in Mexico, managing commercial real-estate in New York’s Diamond District, helping Spanish physicians publish their research). Drawing upon his multi-cultural experience, Ryan’s academic research focused on trying to distinguish the human from the cultural. His doctoral dissertation analyzes the prehistoric roots of human sexuality, and was guided by the psychologist, Stanley Krippner.
Ryan has guest lectured at the University of Barcelona Medical School, consulted at various hospitals, contributed to publications ranging from Behavioral and Brain Sciences (Cambridge University Press) to a textbook used in medical schools and teaching hospitals throughout Spain and Latin America and makes frequent mass media appearances. Ryan contributes to both Psychology Today and Huffington Post.
I read that and I heard, “I’m telling you, I’ve been all over the world and met all kinds of people and read all the papers. And this polyamory thing; this is the real deal.” Maybe Ryan is right. I’ve got an 80-book backlog right now, but I’m hoping to get to his at some point. But a Wikipedia entry filled with such a wide array of credentials, combined with his “I’m such an expert” public statements, makes me suspect the work has flaws. And what I’ve read indicates this perception is correct. If and when I get to his book, I’ll know for sure.
(I wrote the above a couple of months ago. When I went to it today, I was reminded of a recent post at Popehat that mocked a legal spammer for doing the same thing: talking himself up as some modern-day renaissance man. Ken has a lot more experience in dealing with shyster lawyers, obviously. His approach to this is different because he gets a lot of legal spamming. But the basic tenet is the same: a real hot shot lawyer doesn’t try to wow you with his credentials.)
I think I’ve spent the entirety of this week either on the phone or having a meeting or curled up in bed with a migraine. Sigh. Some weeks are like that.
Earlier this week, the Journal of the American Medical Association came out with a huge study of obesity that concludes that the obesity hysterics are, indeed, hysterical. Their results indicate that being moderately overweight or even very mildly obese doesn’t make you more likely to die than a thin person. In fact, it may make you less likely to die, to the tune of 6%. (Severe obesity, however, did show a strong connection to higher death rates).
Now you would think that this would be greeted with some skeptical enthusiasm. If the results are borne out by further study, it would mean we do not have a massive pending public health crisis on our hands. It means that instead of using cattle prods to get moderately overweight people into the gym, we can concentrate on really obese people.
So is the health community greeting this with relief? Not exactly:
That’s the wrong conclusion, according to epidemiologists. They insist that, in general, excess weight is dangerous. But then they have to explain why the mortality-to-weight correlation runs the wrong way. The result is a messy, collective scramble for excuses and explanations that can make the new data fit the old ideas.
William Saletan at Slate lists a dozen different explanations for why this study is wrong, definitely wrong, absolutely wrong, no sir. Most of them strike him (and me) as attempts to rationalize away an inconvenient scientific result.
Not a bad bunch of pictures, if I do say so myself.
As I often say about innovation, the technical problems are nothing compared to the pinhead legal problems. The Verge has a good article up sorting through some of the legal and treaty issues (yes, treaty issues) involved in automated robotic cars. It’s definitely worth your time.
The article seems unduly pessimistic to me. These are things that can be worked out — we have entire armies of lawyers in this country who stand to make millions getting everything sorted into legal precedent. And if these things prove to be safe — and I think they will — the economic pressure to work out the legal issues will be fierce.
The one thing that bothered me about the article was this:
The Geneva Convention on Road Traffic (1949) requires that drivers “shall at all times be able to control their vehicles,” and provisions against reckless driving usually require “the conscious and intentional operation of a motor vehicle.” Some of that is simple semantics, but other concerns are harder to dismiss. After a crash, drivers are legally obligated to stop and help the injured — a difficult task if there’s no one in the car.
As a result, most experts predict drivers will be legally required to have a person in the car at all times, ready to take over if the automatic system fails. If they’re right, the self-parking car may never be legal.
Did you see the subtext? The subtext is that if I’m in a crash with an automated car, there is no one around to render assistance to me.
Well, maybe. Bleeding out while unconscious or seriously injured would be a risk (although it’s not like pedestrians and bystanders are going to disappear). But being in a collision with a robot would still have some advantages over being in one with a human.
Robot cars are coming, one way or another. As powerful as the legal pinheads are, the force of progress is simply too strong.
This came to my attention a month ago. I drafted a post, forgot about it in the election/migraine event horizon, but now want to get it out of my drafts section. I think it’s worth posting because we are likely to hear more of this from the more hysterical environmental wing.
The chart, from Ezra Klein’s usually excellent Wonkblog, purports to show a steep rise in weather-related fatalities in recent years.
It doesn’t show anything of the kind.
First of all, what it shows is a slight decline or flat trend, with a few recent spikes caused by a ’90s heat wave, Hurricane Katrina and last year’s tornadoes. Now maybe you can argue that we should pay more attention to these in the era of global warming because they may be related (or may not). I agree. However, the long-term trend in almost all categories is down — way down. Deaths from lightning strikes are down by over two-thirds over the last 70 years. That’s real progress.
But the progress is even better than the graph shows. The graph makes a huge, blindingly obvious error, one that Klein’s readers jumped on immediately: it does not account for population growth. The first data point is from a sample of 140 million people while the last is from a sample of 310 million. To compare raw figures is simply ridiculous (and, indeed, Klein’s co-blogger later tweeted a version with death rates that was far less dire and showed dramatic declines in weather-related fatalities).
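The readers’ point is easy to put in numbers. A back-of-the-envelope sketch — the 140 million and 310 million populations are the figures cited above, but the 500-death annual toll is invented purely for illustration:

```python
# Illustrative only: the death count is made up; the populations are the
# rough start- and end-of-sample U.S. figures from the chart's range.
POP_START = 140e6   # ~U.S. population at the first data point
POP_END = 310e6     # ~U.S. population at the last data point

deaths = 500  # hypothetical annual weather-death toll, held flat

rate_start = deaths / POP_START * 1e6  # deaths per million people
rate_end = deaths / POP_END * 1e6

print(f"start of sample: {rate_start:.2f} deaths per million")  # 3.57
print(f"end of sample:   {rate_end:.2f} deaths per million")    # 1.61
# A flat raw count actually hides a risk decline of more than half:
print(f"implied decline in risk: {1 - rate_end / rate_start:.0%}")  # 55%
```

In other words, even a perfectly flat line of raw deaths would represent the per-person risk falling by more than half over the period.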
The third problem is less obvious but potentially the worst one. The plot includes deaths from heat, cold, “winter fatalities”, rip currents and wind. Heat deaths are particularly important to the point Wonkblog is making since, presumably, global warming will result in more deaths from heat waves and drought.
The problem is that the NOAA, from whose data the graph is taken, did not track heat deaths until 1986. The same goes for many deaths in the “other” category. Cold fatalities were not tracked until 1988. Winter fatalities until 1986. Rip currents until 2002. Wind deaths until 1995. No correction, none whatsoever, is made for the incomplete data that spans the first five or six decades of NOAA’s sample.
It is simply not sensible to treat the data as though there were zero deaths from heat and other categories before the mid-1980s. In fact, there are many reasons — the spread of air conditioning, for example — to suspect that heat-related deaths were much, much higher in the past. It would defy common sense for the sharp reductions in fatalities from tornadoes, hurricanes and lightning (not to mention earthquakes) not to be reflected in the statistics for other weather-related deaths.
But let’s not assume. Let’s go to the record. The data start in 1940, which usefully omits one of the greatest environmental calamities in American history: the Dust Bowl. Thousands died; at least 5000 in one 1936 heat wave alone. Another massive drought hit in the 1950’s. A 1972 heat wave killed 900 people. A 1980 heat wave killed 1700 people. All of those happened before the NOAA tracked the number of heat-related deaths. None are in the sample.
To be completely honest, the NOAA data seem a poor resource for this kind of study. They apparently do not include the 1988-89 drought, recording only 47 heat-related deaths in that two-year period. But they do include the 1995 and 1999 heat waves. I have no idea what their criteria are. I suspect they are counting deaths from specific short-term heat waves rather than broad massive events like the 88-89 drought. That’s fine as far as it goes. But if your attempt to quantify long-term trends in weather-related deaths ignores droughts; if it ignores the God-damned Dust Bowl, I would submit that you are looking at the wrong data.
So, in the end, the claim that we are getting more weather-related fatalities than ever is, at least in this case, based on a heavily biased, poorly understood sample that barely supports the conclusion.
CNN has an article up that is … kinda dumb:
While the campaigns eagerly pursue female voters, there’s something that may raise the chances for both presidential candidates that’s totally out of their control: women’s ovulation cycles.
You read that right. New research suggests that hormones may influence female voting choices differently, depending on whether a woman is single or in a committed relationship.
Please continue reading with caution. Although the study will be published in the peer-reviewed journal Psychological Science, several political scientists who read the study have expressed skepticism about its conclusions.
Basically, this new study claims — actually, rediscovers — that women in relationships favor Romney by 19 points and single women favor Obama by 33. Their new claim is that when those women are ovulating, those percentages jump by as many as 20 points.
This has, for obvious reasons, caused quite a stir in the blogosphere and Twitter. Unfortunately, the primary reaction is for people to clutch their copies of McKinnon and scream at some Texas professor for daring to suggest that women are nothing but hormone-addled idiots, even though the professor in question says nothing of the kind. And that reaction is kind of unfortunate. Because in their zeal to proclaim that women are completely unaffected by their hormones, people are missing the real reason why the article is dumb and should just be snickered at and then ignored.
First, the number of women we are dealing with is small. I don’t have access to the study and their exact numbers, but they studied 502 women total. If by “change of 20 points*” they mean that women in relationships went from 59-41 Romney to 69-31 Romney, that’s a total of about 25 women changing their minds. And a similar number among single women. That … really doesn’t strike me as a statistically significant result, especially given how volatile polls are known to be anyway and how uncertain the date of ovulation can be.
(*A critical point that is missing from the article is whether that jump is 20 points in differential or absolute (i.e., from 59-41 to 69-31 or 79-21). It’s the difference between 25 women changing their minds — a small number — and 50, a more interesting number. I also note the phrase “as much as 20 points”, which suggests that 20 points is at the outer edge of a very large statistical uncertainty and the actual difference is much smaller. This is why I would like to see the actual study.)
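To make the arithmetic behind those two readings explicit — with the caveat that the even split between partnered and single respondents is my assumption, since the article doesn’t give the actual breakdown:

```python
# Rough arithmetic for the two readings of a "20-point jump". The even
# split between partnered and single respondents is an assumption.
total_women = 502
partnered = total_women // 2  # assumed subsample of 251

# "20 points" as a jump in the *margin* (59-41 -> 69-31): the Romney
# share moves 10 points, so about 10% of the subsample switches.
margin_reading = round(partnered * 0.10)   # ~25 women

# "20 points" as a jump in the *share* (59-41 -> 79-21): the share
# moves 20 points, so about 20% switches.
share_reading = round(partnered * 0.20)    # ~50 women

print(margin_reading, share_reading)  # 25 50
```

Either way, the headline effect rests on a few dozen respondents at most.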
Second, it’s difficult to pin down an a priori reason why a woman’s menstrual cycle might affect her voting. In the absence of clear information, we can only speculate. And this is where CNN and the researchers really flounder badly:
Here’s how Durante explains this: When women are ovulating, they “feel sexier,” and therefore lean more toward liberal attitudes on abortion and marriage equality. Married women have the same hormones firing, but tend to take the opposite viewpoint on these issues, she says.
“I think they’re overcompensating for the increase of the hormones motivating them to have sex with other men,” she said. It’s a way of convincing themselves that they’re not the type to give in to such sexual urges, she said.
It’s true enough that women feel “sexier” when ovulating and are known to change their behavior (more likely to have sex, more likely to wear skimpy clothing, etc.). That’s all well-established biology. How this translates into political behavior isn’t clear at all. It seems that the researchers came up with one half of a dubious idea (“women feel sexier so they want abortion to be legal”) and then had to scramble to find the other half (“um, so married women are … repressing?”). That’s nice spit-balling but it’s no more valid than saying that when women are menstruating, they get mad and say, “Screw that guy, I ain’t voting for him any more!” You can basically shove anything you want into that information vacuum and call it “science”.
Something important jumped out at me on a second reading: no one quoted in the article is a biologist or any other kind of scientist. The study author is a Professor of Marketing. They also quote Professors of Political “Science” and Women’s and Gender Studies. I would hazard that maybe the Professor of Marketing knows something about statistics. But this whole thing reeks of the Scientific Peter Principle: poorly done studies are the ones most likely to get attention because their flaws produced amazing results.
Here’s my $0.02, from someone just as unqualified to look into this as anyone quoted in the article. I suspect this effect, such as it is, is small, even smaller than the 10% they are claiming. I also suspect that this study was conducted some time ago, when a lot of the voters were undecided and might have been a little torn between the two candidates. Undecided voters have a tendency to sway with every breeze that blows. Under those circumstances, it’s possible that the hormone kick at ovulation and the resulting surge in self-confidence might make women a little firmer in their political convictions one way or the other. Or, conversely, that the effects of PMS and/or menstruation make women a little less confident in their choices. One test you could do? See if the “ovulation effect” diminishes as we get closer to the election and more people learn about the candidates and make up their minds.
The gripping hand here is that this entire thing is pointless trivia as far as elections go. You see, women’s menstrual cycles are effectively random with respect to one another, so the percentage of women who are ovulating at any one moment is roughly constant. So the net effect of this on the vote? Essentially zero.
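A toy simulation makes the constant-fraction point concrete. The 28-day cycle and 3-day window here are my own illustrative assumptions, not anything from the study:

```python
import random

# If cycle phases are uniformly random across women, the share who are
# in the fertile window on any given day barely moves from day to day.
random.seed(0)
CYCLE, WINDOW = 28, 3   # assumed: 28-day cycle, ~3-day fertile window
N = 100_000             # simulated population of women

# Each woman gets a random cycle phase (which day of her cycle "day 0" is).
phases = [random.randrange(CYCLE) for _ in range(N)]

def frac_ovulating(day):
    """Fraction of the population in the fertile window on a given day."""
    return sum((day - p) % CYCLE < WINDOW for p in phases) / N

fractions = [frac_ovulating(d) for d in range(CYCLE)]
print(min(fractions), max(fractions))  # both hover near 3/28 ~ 0.107
```

Day to day, the ovulating share stays pinned near WINDOW/CYCLE, so even a real individual-level effect washes out in the aggregate vote.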
Update: I just slapped myself in the head for not saying this in the main text: where the hell was the group of menopausal women used as a control?
I love this:
The coolness and wow factors are there, yes. So is the idea that this sort of thing can be done so cheaply and easily. But the real gem is the look on that kid’s face. This is something he will never forget. I have to steal this idea for my kid one day.