A very good blog post on what to expect from a PhD program (and especially what the university itself won’t tell you): So You Want To Go To Grad School (in the Academic Humanities)?
Two paragraphs which are especially informative for people who don’t have recent personal experience in a PhD program:
The most important person in the process is your advisor, who is generally a senior member of the faculty in your department who shares your specialization. I struggle to find words to communicate how important this person will be during your graduate experience. Graduate study at this level is effectively an apprenticeship system; the advisor is the master and the graduate student is the apprentice, and so in theory at least the advisor is going to help guide the student through each stage of this process. To give a sense of the importance of this relationship, it is fairly common to talk about other academics’ advisors as forming a sort of ‘family tree’ (sometimes over multiple ‘generations’). Indeed, the German term for an advisor is a doktorvater, your ‘doctor-father’ (or doktormutter, of course); this is in common use among English-language academics as well, and the notion it suggests, that your advisor is a sort of third parent, isn’t so far from the truth.
If you are considering graduate school with an eye towards continuing in academia, who you choose as your advisor will be very important: academia is a snooty, prestige-conscious place and your advisor’s name and prestige will travel with you. But there’s more than that: your advisor, because they need to sign off on every step of your journey and you will need their effusive letter of recommendation to pursue any kind of academic job, has tremendous power over you as a graduate student. You, by contrast, have functionally no power in that relationship; you are reliant on the good graces of your advisor.
I have heard the theory that every time we remember something, the memory is influenced by our thoughts, feelings, and beliefs at the time of remembering. That implies that the memories we think about most are the ones that have been most distorted from their original form.
An exaggerated version of this effect applies to stories recounted to others. They must be selective in detail to keep the account manageable in length, and the small tweaks that make a story more comprehensible and straightforward tend to persist in later retellings. In particular, I notice in myself a tendency to combine the most memorable features of several events into a single recollection/story — not, for example, two or more different parties at distinct, half-remembered places, but one party which sets up a subsequent part of the story.
I suppose the phenomenon demonstrates the value of contemporaneous records and accounts like journaling. Doubtless our interpretations of those records are influenced by subsequent context, but at least the record itself is immutable.
On a global level, bin Laden’s 9/11 attacks set the course of U.S. foreign policy for the first two decades of the twenty-first century and reshaped the Muslim world in ways that bin Laden certainly didn’t intend and that few could have predicted in the immediate aftermath. The Authorization for Use of Military Force, which Congress passed days after 9/11, allowed President Bush to “use all necessary and appropriate force against those nations, organizations, or persons he determines planned, authorized, committed, or aided terrorist attacks that occurred on September 11, or harbored such organizations or persons.”
This authorization sanctioned “forever wars” that lasted for two decades after 9/11. Three presidents as different from each other as Bush, Obama, and Trump used this same authorization to carry out hundreds of drone strikes against groups such as ISIS, al-Qaeda in the Arabian Peninsula, al-Shabaab, and the Pakistani Taliban. Few of these strikes had any connection to the perpetrators of 9/11. The authorization was also used to justify various types of U.S. military operations in countries around the world, in Afghanistan, Ethiopia, Kenya, Libya, Mali, Nigeria, Pakistan, the Philippines, Somalia, Syria, and Yemen. And, of course, 9/11 provided much of the rationale for George W. Bush to invade and occupy Iraq two years later.
This was exactly the opposite of bin Laden’s aim with the 9/11 attacks, which was to push the United States out of the greater Middle East, so its client regimes in the region would fall. Instead, new American bases proliferated throughout the region, in Afghanistan, Djibouti, Iraq, Kuwait, Qatar, Syria, and the United Arab Emirates. Meanwhile, al-Qaeda—”the Base” in Arabic—lost the best base it ever had in Afghanistan. Rather than ending American influence in the Muslim world, the 9/11 attacks greatly amplified it.
Bin Laden later put a post facto gloss on the strategic failure of 9/11 by dressing it up as a great success and claiming the attacks were a fiendishly clever plot to embroil the U.S. in costly wars in the Middle East. Three years after 9/11, bin Laden released a videotape in which he asserted, “We are continuing with this policy of bleeding America to the point of bankruptcy.” There was no evidence that this was really bin Laden’s plan in the run-up to the 9/11 attacks. 9/11 was a great tactical victory for al-Qaeda—the group inflicted more direct damage on the United States in one morning than the Soviet Union had during the Cold War—but ultimately it was a strategic failure for the organization, just as Pearl Harbor was for Imperial Japan.
Bergen, Peter. The Rise and Fall of Osama bin Laden. Simon & Schuster, 2021. p. 242-3
Related: The success of bin Laden’s strategy
It’s worth mentioning here that there is simply no evidence for the common myth that bin Laden and his Afghan Arabs were supported by the CIA financially. Nor is there any evidence that CIA officials at any level met with bin Laden or anyone in his circle. Yet the notion that bin Laden was a creation of the CIA is widespread. For instance, the American film-maker Michael Moore has written, “WE created the monster known as Osama bin Laden! Where did he go to terrorist school? At the CIA!” The real problem is not that the CIA helped bin Laden during the 1980s, but that the U.S. government had no idea about his possible significance until 1993, when he first started to appear in internal U.S. intelligence analyses describing him as a financier of Islamic extremist groups.
The notion that the CIA aided the rise of the Afghan Arabs is based on a fundamental misunderstanding of how the agency supported the Afghan war effort. First, it was overseen by a tiny group of CIA officers in Pakistan. Vincent Cannistraro, who helped coordinate CIA support to the Afghans during the mid-1980s, explained there were only six CIA officials in Pakistan at any given time, and they were simply “administrators.” Secondly, CIA officers in Pakistan seldom left the embassy in Islamabad, and rarely even met with the leaders of the Afghan resistance, let alone Arab militants. That’s because the CIA officers provided American funding to Pakistan’s Inter-Services Intelligence (ISI) agency, which, in turn, decided which among the Afghan mujahadeen groups would receive this funding.
Brigadier Mohammad Yousaf, who ran the ISI’s Afghan operations, explained that it was “a cardinal rule of Pakistan’s policy that no Americans ever became involved with the distribution of funds or arms once they arrived in the country. No Americans ever trained or had direct contact with the mujahadeen, and no American official ever went inside Afghanistan.” Mark Sageman, a CIA officer who worked on the Afghan “account” in Pakistan during the mid-1980s, recalls “we were totally banned” from going into Afghanistan, for fear it would hand the Soviets a great propaganda victory if a CIA officer was captured there. The CIA’s Milt Bearden says the agency “never recruited, trained or otherwise used Arab volunteers. The Afghans were more than happy to do their own fighting—we saw no reason not to satisfy them on this point.” No independent evidence of the CIA supporting al-Qaeda has emerged in the four decades since the end of the anti-Soviet war in Afghanistan.
In short, the CIA had very limited dealings with the Afghans, let alone the Afghan Arabs. There was simply no point for the CIA and the Afghan Arabs to be in contact with each other, since the agency worked through Pakistan’s military intelligence agency during the Afghan War, while the Afghan Arabs had their own sources of funding. The CIA did not need the Afghan Arabs and the Afghan Arabs did not need the CIA.
Bergen, Peter. The Rise and Fall of Osama bin Laden. Simon & Schuster, 2021. p. 42-3
There’s a voguish argument that in an era of easy information availability there is less cause to have any substantial body of knowledge memorized. I have seen articles arguing that the crucial cognitive skill for young people today is the ability to find what they are looking for, given access to the internet.
I think there is a huge and obvious shortcoming to this perspective. Knowing that I can look up the Wikipedia article on the Protestant Reformation, for example, is just a one-way mental link that stops there. If you know nothing about the history of Catholicism, or of religious conflict in Europe, or of the precepts of Christianity, then knowing where to find someone else’s writing about the Reformation doesn’t give you any meaningful understanding of what it was or why it mattered. Someone who is asked a narrow question about the event will be able to find the answer through an online search, but without internalized knowledge they won’t be able to see the implications and connections to other phenomena. Knowing that you can look up thermodynamics or Carnot efficiency doesn’t give you the ability to apply those concepts when thinking about an application like heating or cooling or the efficiency of an engine.
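To make the Carnot example concrete (a hypothetical sketch of my own, not drawn from any source discussed here): the efficiency limit falls directly out of two reservoir temperatures, but recognizing that a real engine or heat pump is bounded by it requires internalized understanding, not a search result.

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum theoretical efficiency of a heat engine operating
    between a hot and a cold reservoir (temperatures in kelvin)."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("need t_hot_k > t_cold_k > 0")
    return 1 - t_cold_k / t_hot_k

# An engine running between a 500 K heat source and a 300 K
# environment can convert at most 40% of the heat it draws into work:
print(carnot_efficiency(500, 300))  # → 0.4
```

The point is that the two-line formula is trivial to look up; knowing that it applies to the question you are actually facing is the part that depends on background knowledge.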
The ongoing COVID pandemic is demonstrating the extent of scientific and medical ignorance even within rich industrialized societies. That manifests in people falsely believing that they can make health choices for themselves with no consideration of others, and of course in the enormous amount of nonsense circulating about vaccines. It’s strange to observe how society has become technological to an unprecedented degree — with technology literally making life as we now live it possible — and yet culturally an interest in and knowledge about science is treated as an optional personal curiosity, like fly fishing or following a soccer team. Broadly speaking, I hold the view that understanding anything well requires knowing at least the basics of many other subjects (nobody can sensibly evaluate public health policy without the rudiments of medicine, statistics, and epidemiology, for example). That concept of knowledge as an interconnected web shows why the ability to pluck out a narrow fact with the help of technology may not translate into much real understanding.
It’s overly simplistic to apply a ‘deficit model’ to what people know about an issue like COVID or climate change, assuming that there is an empty void where knowledge ought to be and that filling it is the solution. For issues tied up in politics, and thus in questions about what people will be free to do, the desire to undertake particular behaviours can create the motivation to believe whatever is necessary to keep doing them. Since someone operating under motivated reasoning cannot be swayed by facts or arguments alone, more education by itself won’t solve the problem of people choosing factual beliefs that support their behaviours or ideological positions.
A hundred years ago, someone could appropriately have been laughed at for claiming to know about Pitt the Elder or the Peloponnesian War because they could go to a library and find books about them. The instant availability of information online doesn’t really change that.
When researching social movements — where relevant information is often on social media, or on the websites of NGOs, universities, or corporations that reorganize their sites frequently — link rot is an acute problem. Increasingly, the default way to let a reader see the source you’re referencing is to provide an internet hyperlink, and yet there is no assurance that a link on a site which you don’t control will continue to work.
Jonathan Zittrain has an instructive article in The Atlantic about many of the dimensions of the problem. Strikingly, he cites a study by Kendra Albert and Larry Lessig finding that half the links cited in court opinions since 1996 no longer work, along with 75% of the links in the Harvard Law Review.
Beyond the Wayback Machine, which I already use extensively both to find material which is no longer online and to preserve links to live content that may be useful in the future, Zittrain suggests several other initiatives to help with the problem, including Perma and Robustify.
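As a minimal sketch of how one might triage a list of references for dead links, using only the Python standard library (the crude alive/suspect classification rule here is my own assumption, not part of any of the projects mentioned above):

```python
from urllib import request, error

def check_link(url, timeout=10):
    """Return (url, status), where status is an HTTP status code
    or an error description if the request failed entirely."""
    req = request.Request(url, method="HEAD",
                          headers={"User-Agent": "link-checker"})
    try:
        with request.urlopen(req, timeout=timeout) as resp:
            return url, resp.status
    except error.HTTPError as e:
        return url, e.code          # server answered with an error code
    except error.URLError as e:
        return url, str(e.reason)   # DNS failure, timeout, etc.

def classify(status):
    """Crude triage: treat 2xx/3xx responses as alive,
    everything else (4xx, 5xx, connection errors) as suspect."""
    return "ok" if isinstance(status, int) and status < 400 else "suspect"
```

A real workflow would go further — some dead pages return 200 with a “not found” body (so-called soft 404s), which is one reason archiving services like the Wayback Machine and Perma are more reliable than spot checks.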
In Rhodes’ energy history I came across an interesting parallel with the 1988 STS-27 and 2003 STS-107 space shuttle missions, in which the national security payload and secrecy surrounding the first mission may have prevented lessons from being learned that might have helped avert the later disaster. STS-27 was launching a classified satellite for the US National Reconnaissance Office (NRO), so the crew could only send low-quality encrypted images of the damage which the shuttle’s thermal protective tiles had sustained on launch. Since the seven crew members of STS-107 died when the shuttle broke up during re-entry because of a debris impact on its protective surfaces during launch, a fuller reckoning with STS-27 might conceivably have led to better procedures for identifying and assessing damage, and to alternatives for shuttle crews in orbit in a vehicle too damaged to re-enter safely.
Rhodes describes Belorussian leader and nuclear physicist Stanislav Shushkevich’s analysis of the Chernobyl disaster:
By Shushkevich’s reckoning, the Chernobyl accident was a failure of governance, not of technology. Had the Soviet Union’s nuclear power plants not been dual use, designed for producing military plutonium as well as civilian power and therefore secret, problems with one reactor might have been shared with managers at other reactor stations, leading to safety improvements such as those introduced into US reactors after the accident at Three Mile Island and the Japanese reactors after Fukushima.
Rhodes, Richard. Energy: A Human History. Simon & Schuster, 2018. p. 335
This seems like a promising parallel to draw in a screenplay about the STS-27 and STS-107 missions.
By [President Jimmy] Carter’s own account, his poor opinion of nuclear power originated in personal experience. In 1952 the future president was a US Navy lieutenant with submarine experience stationed at General Electric in Schenectady, New York, training in nuclear engineering under Hyman Rickover. That December, an experimental Canadian 30-megawatt heavy-water moderated, light-water cooled reactor at Chalk River, Ontario, experienced a runaway reaction, surging to 100 megawatts, exploding and partly melting down. It was the world’s first reactor accident, a consequence of a fundamental design flaw of the kind that would destroy a Soviet reactor at Chernobyl three decades later. Since Carter had clearance to work with nuclear reactors, which were still classified as military secrets, he and twenty-two other cleared navy personnel went to Ontario early in 1953 to help dismantle the ruined machine. Because it was radioactive, the calculated maximum exposure time around the damaged structure itself was only ninety seconds. That exposure would be the equivalent of a worker’s defined annual maximum dose of radiation—in those days, 15 rem (roentgen equivalent man). More than a thousand men and two women, most of them Chalk River staff, would participate in the cleanup.
Had he known the long-term outcome of the Chalk River radiation exposures, Carter might have felt friendlier to nuclear power. A thirty-year outcome study, published in 1982, found that lab personnel exposed during the reactor cleanup were “on average living a year or so longer than expected by comparison with the general population of Ontario.” None died of leukemia, a classic disease of serious radiation overexposure. Cancer deaths were below comparable averages among the general population.
Rhodes, Richard. Energy: A Human History. Simon & Schuster, 2018. p. 316, 317
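The ninety-second limit in the passage above implies a startling radiation field around the wrecked reactor. As a quick back-of-the-envelope check (my arithmetic, not Rhodes’):

```python
annual_max_rem = 15   # the era's defined annual maximum dose, in rem
max_exposure_s = 90   # permitted seconds near the damaged structure

# If a worker absorbs the full annual allowance in 90 seconds,
# the implied dose rate near the structure is:
rem_per_hour = annual_max_rem * 3600 / max_exposure_s
print(rem_per_hour)  # → 600.0 rem per hour
```

A field of hundreds of rem per hour is firmly in acute-radiation-sickness territory for unprotected, prolonged exposure, which is why the cleanup had to be split among more than a thousand people taking brief turns.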
I’ve noted before the exceptional and enduring influence Hyman Rickover (‘father of the nuclear navy’) has had over the subsequent use of nuclear technology. Richard Rhodes’ energy history provides another example:
At the same time, Rickover made a crucial decision to change the form of the fuel from uranium metal to uranium dioxide, a ceramic. “This was a totally different design concept from the naval reactors,” writes Theodore Rockwell, “and required the development of an entirely new technology on a crash basis.” Rockwell told me that Rickover made the decision, despite the fact that it complicated their work, to reduce the risk of nuclear proliferation: it’s straightforward to turn highly enriched uranium metal into a bomb, while uranium dioxide, which has a melting point of 5,189 ˚F (2,865 ˚C), requires technically difficult reprocessing to convert it back into metal.
Rhodes, Richard. Energy: A Human History. Simon & Schuster, 2018. p. 286
Examples like this illustrate the phenomenon of path dependence, where at a certain juncture things could easily go one way or the other, but once the choice has been made it forecloses later reversal. Examples abound in public policy. For instance, probably nobody creating a system from scratch would have designed the US model of employer-provided health insurance coupled with the right to refuse coverage to those with pre-existing conditions, yet once the system was in place powerful lobbies existed to keep it that way. The same could be said about many complexities and inefficiencies in nations’ tax codes, which distort economic activity and waste resources on compliance and monitoring but which are now defended by specialists whose role is to manage the system on behalf of others.
See also: Zircaloy is a problem
After finding Richard Rhodes’ quartet of books about the global history of nuclear weapons so valuable and intriguing, when I saw that a used book shop had his recent history of energy I picked it up the next day.
It includes some nice little historical parallels and illustrations. One that I found striking illustrates how recent the oil-fired world which we now take for granted really is. Rhodes describes how “big-inch” pipeline technology was developed in America in the 1930s, permitting pipes of greater than 8″ diameter, which would have split if made with earlier manufacturing techniques; the technology initially saw relatively little use due to the Great Depression. In 1942, the first “Big Inch” pipeline was built from east Texas refineries to the northeast (p. 286), partly to avoid the risk of U-boat attack when shipping oil up the east coast.
In addition to illustrating how America’s mass-scale oil infrastructure is mostly less than one human lifetime old, the Big Inch example demonstrates how long-lasting such infrastructure is once installed. Rhodes points out that Big Inch and its near-parallel companion Little Inch (constructed after February 1943) are still operating today (p. 271).