More Bat Research, or When Not to Trust the Experts
Scientists have a lot more knowledge than the rest of us, but they don't always use that knowledge for good.
Gain-of-function research should continue. Such is the opinion of the Biden White House, as explained in this press conference at the end of February. It is “important to help prevent future pandemics,” says communications man John Kirby.
Last fall, researchers in Boston went public about creating new variants of covid-19, while in a recent letter, over 150 prominent virologists joined together to air their concerns that new regulations might “overly restrict the ability of scientists to generate the knowledge needed to protect ourselves from these pathogens.”
By now, most serious people are in agreement that the 2019 coronavirus originated in a laboratory in Wuhan. Even federal agencies like the FBI are falling into line. The evidence for what happened is overwhelming.
To begin with, the bats that host the virus’ nearest relatives don’t live near Wuhan itself, but 1100 miles away, along the border between China and Laos, in just the place where the Wuhan researchers repeatedly went to collect wild bat viruses for their gain-of-function experiments. Then include the fact that, shortly after the first outbreak, the Chinese authorities deleted their viral genome archives – why do that if you don’t have something to hide? And of course we shouldn’t forget about China’s dismal lab safety protocols. And so forth.
For the first year after covid appeared, mainstream outlets like CNN and Twitter censored these things for political reasons. After President Trump left office, the taboo was relaxed, and now, even mainstream opinion is coalescing around the lab leak theory.
China, obviously, doesn’t come away looking good, but the United States isn’t off the hook, either. It was the US government that funded this research, and the only reason that Chinese scientists were doing it in the first place was because so many American scientists (whom the Chinese see as their superiors) had devoted their careers to making gain-of-function research look necessary, important, and safe.
How do people in the pure and applied sciences advance their careers? By doing the things that the higher-ups consider necessary, important, and safe.
And when a lot of high-status people have clustered around a set of prestigious ideas, it’s rare for anything as mundane as evidence or results to break the prestige of those ideas. Hence the fact that, even after three years of covid, nearly all of the leading authorities on gain-of-function research are still in favor of gain-of-function research.
In the kind of country that America once was, it would be a surprise if a man like Anthony Fauci were able to escape impeachment for repeatedly lying to Congress about this research, and about his own role in promoting and funding it. But in our own day, Fauci not only got to spend the remainder of his career as the No. 1 expert authority on the crisis he helped create, he also retired as perhaps the most admired medical professional in the country.
In most circumstances, one shouldn’t be suspicious of expertise. After all, if you get your arm or your leg shattered in a road accident, you want an expert trauma surgeon to repair it. If you get accused of a crime you didn’t commit, you want to be defended by an expert lawyer. And so forth.
But the problem appears when you’re dealing with a field where the professionals have a strong incentive to exaggerate the usefulness of what they do, and to understate its potential for harm. Unfortunately, most branches of pharmacology and virology fit snugly into this category.
And that “incentive to exaggerate” can cause a lot of trouble in a world where being a scientific expert means being a self-watching watchman, in a world where way too many people will keep on trusting you no matter how badly you abuse their trust, and in a world where, for most doctors and scientists, there is no accountability.
Because that is what happens when hardly anyone thinks about the fact that, once people make careers out of a particular field of research, they are often emotionally incapable of processing evidence that what they’re doing isn’t necessary, important, and safe – and that it might actually be useless or even harmful.
Obviously, the problem of biased expertise is not new. It existed in the 1950s when lobotomies were common, and in the late 1800s when it was normal for women to be institutionalized for “nymphomania,” and even all the way back in the late Middle Ages when the default, expert-endorsed response to unexplained illnesses (whether physical or mental) was to find some evil-looking person in the village and try her for witchcraft.
The people who earned a living by performing lobotomies had a much higher opinion of lobotomy than the general population. The people who ran late-1800s asylums soberly insisted that all those women who were being locked up for experiencing sexual desire to the same degree that men did really had a mental disorder and really needed treatment.
And of course Heinrich Kramer, author of the Malleus Maleficarum (by far the number-one witch-hunting guide of all time) could give you all kinds of evidence, collected over the course of a long and illustrious career, to back up his claims that witchcraft was the top threat to the peace and order of Europe, and also his claims that if judges at witch trials carefully followed the handbook he had written for them, there would be hardly any risk of innocent people being executed by mistake.
In the end, these people’s life’s work ended up in history’s dustbin because a lot of influential people chose not to trust the experts.
By now we know that you shouldn’t listen uncritically when a man who’s spent his life hunting witches tells you how to hunt witches, or when a doctor who’s earned his way to fame by performing lobotomies tells you about how lobotomies are usually beneficial and only rarely harmful. And so forth.
One can only wish for this kind of levelheadedness in the medical controversies of the present day.
Consider the debate over childhood ADHD, and the drugs (mainly stimulants like Ritalin, Adderall, etc.) that are used to treat it. The overwhelming opinion within the psychiatric industry is that the ADHD diagnosis is scientifically sound, and that the drugs (which have of course been very well-studied) are effective and safe.
Outside the industry, there’s a lot more suspicion. People with common sense know that children are, by nature, more rambunctious and distractible than adults, and that half of them are more so than the average child. They know that these traits (despite being burdensome for children in the present-day school system) are not a mental disorder. And they suspect that it might be a bad idea to start a child on a lifetime of hard drug dependency in order to improve his performance in grade school.
Curiously, there is a lot of scientific research that supports this point of view. We know, for instance, that the academic benefits of ADHD treatment usually only last for a year or two (unlike the ill effects of drug dependency, which last a lifetime). We know that stimulants suppress children’s physical growth. And we know that prolonged drug dependency during childhood leads to anomalies in brain development, including permanent deficiencies of the same neurotransmitters that the drug is boosting in the short term.
We also know that the drugs damage the dopaminergic system, which regulates reward and pleasure, so that drugged children can grow up to suffer from low motivation, erratic moods, and depression.
Also, we know that the diagnostic criteria for ADHD are very sketchy. (For instance, one Canadian study found that children born in December, who are constantly being compared to classmates a little older than themselves, are 47 percent more likely to be medicated for ADHD than children born in January.)
Do the majority of psychiatrists (and pediatricians, etc.) make serious attempts to confront these findings?
No. They just produce more handbooks about how to use the drugs “properly,” and more studies showing that ADHD drugs achieve their short-term goal of producing a quiet, well-behaved child. But they avoid putting serious thought into the ethical question of whether all this damage is an acceptable price to pay.
It isn’t all that different from how, even after three years of covid, almost everybody involved in gain-of-function research has steadfastly refused to change his or her mind about gain-of-function research.
And then of course there is the lying. It is well-known that Dr. Fauci lied to Congress in an attempt to cover up US funding of gain-of-function research in China. Likewise, many of the key researchers involved in promoting ADHD medication have lied about taking several million dollars in under-the-table “consulting fees” from drug manufacturers.
The guilty parties include Dr. Joseph Biederman, the Chair of Pediatric Psychopharmacology at Harvard, whose prestige probably did more than anyone else’s to normalize Ritalin prescriptions for small children. Even after he was exposed for taking $1.6 million in undisclosed payments, nothing ever happened to him. He was not disciplined by his institution, and his professional honors continued to pile up.
Yet there are still a lot of ordinary Americans who, when presented with all this information – about either ADHD or covid – will just hem and haw and finally say something like “I trust the experts.”
Perhaps, in the case of ADHD, they will then declare their faith that a treatment that has been studied and used by so many thousands of well-credentialed people for such a long time is very unlikely to be harmful.
My own point of view is somewhat different. I think that, when so many thousands of well-credentialed people devote their careers to something, it becomes very unlikely that any amount of evidence will convince them that the thing in question is harmful.
And when I see a long string of news stories about prominent people saying that gain-of-function research should continue, it only confirms my hypothesis.
It’s too late to save the seven million or so people who have died worldwide from covid-19. And it’s too late to save the roughly equal number of American children who will spend the rest of their lives dealing with drug-induced mental disfigurement because they squirmed too much or had bad handwriting in elementary school.
But if Americans don’t want these kinds of things to keep happening, then now is the time to wake up, and to stop “trusting the experts” in the same naïve way that they have hitherto done.
This essay was originally published at The American Thinker.
To believe that seven million people died from CoViD-19 is to fall into the trap of trusting the experts. To do so is to take much for granted: that there was a unique lab-manipulated virus that could infect people and spread throughout the world; that there is a test that can prove causality rather than merely suggest an association based on non-specific symptoms; and that it was a "virus" causing an increase in morbidity and mortality rather than undetermined, isolated events, thus allowing measures known to adversely impact health to be rolled out until the effects of a poorly or purposefully designed toxin could kick in, acting as cover for something that may not exist as portrayed. To ignore all the above is to "trust the experts" and ignore any and all alternative agendas.
In his Memoirs of a Revolutionist, the critic Dwight Macdonald [https://en.wikipedia.org/wiki/Dwight_Macdonald] said that for American Marxists (of whom he was one, before the war), the New York Times was what Aristotle was for the Church fathers -- a pagan source, but revered.
It used to be the case that Americans could differ widely on many issues -- but we all took for granted the reliability of some institutions that were above politics. The far Left had a built-in distrust of any endeavor associated with big business, but even they didn't question, for example, the efficacy of the polio vaccine.
When a group of psychiatrists were enticed into saying that the very conservative Republican candidate for President in 1964 was psychologically unfit for the office, there was a sharp reaction within their profession, leading to something called 'the Goldwater Rule', prohibiting them from commenting on the mental fitness of candidates. They realized that their credibility with the public (or half of it) would be undermined if they were seen to be partisan.
All that is gone now.
If you read the supposedly neutral fact-checker Snopes on the origins of the virus, you will find it confidently proclaiming that "overwhelming evidence" shows that the virus came from a bat. (Incidentally, it might be an interesting exercise for the Twilight Patriot to see how Snopes is able to exercise a leftist bias using various editorial tricks: Did Kamala Harris contribute money to the bail fund for the BLM/AntiFa rioters? FALSE. But they should have asked, did she solicit others to do so?)
It would be nice if a modern Diogenes found some honest people, drawn from all political quarters, and persuaded them to run a genuine fact-checking site that we could all trust. But as America unravels, things are falling apart, and the center cannot hold.