On December 7, 1941, an unprepared United States was attacked at Pearl Harbor, crippling much of the US fleet. Less than four years later, the US deployed a new and unimaginably destructive weapon—a weapon that, just a few years before, had been considered science fiction. On May 25, 1961, John F. Kennedy promised that a man would walk on the Moon by the end of the decade. In July 1969, it happened. Many who witnessed this giant leap for mankind had been born before the invention of airplanes. These two events were outliers, but suggestive ones. They represent a century during which humans routinely invented and deployed new and transformative technologies at breakneck speed. (Page 7)

From where we stand today, the period from the mid-19th to the mid-20th century seems like a heroic age. By comparison, ours is an age of stagnation. Median wage growth has slowed, inequality and income concentration are on the rise, political polarization has intensified, rates of debt and leverage have exploded, startup formation has decelerated, and geographic mobility has declined. (Page 7)

There are also worrying symptoms of scientific stagnation. Fields ranging from physics and neurobiology to cognitive psychology and quantitative finance have recently experienced a reproducibility crisis, wherein their findings have been impossible to replicate. Worse, multiple studies have shown that despite exponential increases in funding, nearly every scientific field is simply producing fewer breakthroughs. (Page 8)

Against the standard view in economics and finance, which holds that speculative financial bubbles are intrinsically negative phenomena, we develop a model of bubbles as innovation accelerators. (Page 10)

Technological innovation is more driven by excess, exuberance, and irrationality than by cost-benefit analyses, rational calculation, and careful and deliberate planning. Reality-bending delusions are underrated drivers of techno-economic progress. In other words, a necessary enabling condition for technological progress, which ultimately fuels human flourishing, is what the Ancient Greeks called thymos, which often gets translated as “spiritedness”—a relentless drive to transcend the limitations of a listless present. (Page 10)

Across nearly every aspect of society, we have an obsessive desire to eliminate risk and an irrepressible, almost romantic longing for safety. At universities, elites are selected into a series of schools and careers that promise them virtually unlimited choice. Go to Harvard, and you can pick any career; choose consulting, law, or banking, and you can work in any industry. In science, institutional incentives, such as citation-driven metrics and a funding process that rewards incremental science, reduce risk-taking. In culture more broadly, real-world risk-taking is substituted with virtual surrogates such as VR, video games, and the metaverse. (Page 11)

The contemporary “cult of innovation,” as philosopher René Girard called it, 9 seems to distract us from the fact that true progress has become an increasingly rare occurrence. Against the self-aggrandizing claims of “innovation”—often simply employed to sell a slightly improved beauty care formula, the latest software-as-a-service product, or any startup pitched as “X for Y”—we need to reclaim the meaning of innovation as a truly singular inflection point, a development that reorders the present and impacts the trajectory of the future. (Page 14)

A bubble is therefore not simply a collective delusion but an expression of a future that is radically different from now. This is what the current moment is missing, and what we hope to reestablish with this book. (Page 15)

The West has also become more nihilistic. Based on an analysis of the Google Books Ngram corpus, the use of terms related to progress and the future has decreased by about 25 percent since the 1960s, while those related to threats, risks, and worries have become several times more common. (Page 21)

First are the dynamics of demographics: A combination of aging and low birth rates tends to result in slowed growth, since older people usually spend less than younger people and are less productive. (Page 23)

A higher-order effect of these trends, as we’ll detail later in this chapter, is a rising societal aversion to risk and a culture that is increasingly cautious and complacent. (Page 23)

Of course, the causes of stagnation are complex. But what these symptoms of stagnation and decline have in common is that they result from a societal aversion to risk, which has been on the rise for the past few decades. Societal risk intolerance expresses itself almost everywhere—in finance, culture, politics, education, science, and technology. Broadly, there seems to be a collective desire to suppress and control all risks and conserve what is at the expense of breaking the terminal horizon of the present and accelerating toward what could be. (Page 25)

Someone who makes a billion dollars in a single year and spends $50 million of it on a new mansion is still saving 95 percent of their resources. Population aging thus contributes to concentrations of wealth. As a result, capital is less likely to fund novel ideas simply because so much of it lies dormant. (Page 29)

The third factor driving risk intolerance is the tyranny of bureaucratization. Over the past few decades, bureaucratization has penetrated all social domains, from governments to universities to firms. Synchronous with the abandonment of Bretton Woods, the increase in bureaucratization—reflected in our ever-expanding corpus of tax and legal codes—coincides with the invention of word processing in the 1970s, which dissolved the human limitations of copying long-form texts. (Page 29)

It took Nintendo almost a century to invent the iconic Super Mario Brothers; YouTube was initially envisioned as a video dating site; Slack started as an internal communication tool for programmers developing an online game. Greatness often doesn’t follow predefined milestones. (Page 30)

The managerial drive to quantify, control, and eliminate risk has had a disastrous impact on science. In academia, citations have become the dominant metric for evaluating the quality of a scientist’s research. This has incentivized researchers to pursue incremental work that leads to numerous published papers rather than pursuing riskier exploratory projects that might result in no publications, no citations, and, therefore, no funding. (Page 31)

There seems to be no issue here given that the past five decades have seen enormous gains in stock values. But these stock gains haven’t translated into productivity gains. After 1971, the divergence between hyper-financialized markets and the real economy accelerated, ushering in an age of artificial wealth and value fueled by exploding debt and leverage. (Page 33)

Whereas microprocessor performance has doubled roughly every two years since 1975, and computation density has experienced sustained geometric growth of approximately 35 percent per year for the past 50 years, crop yield, energy density, and transportation speed have more or less stagnated—or worse, declined. Since the late 1950s, transportation speeds have flatlined at around Mach 0.85. Compared to lead-acid batteries, the best battery technology available at the beginning of the 1900s, which had an energy density of 25 watt-hours per kilogram, the best lithium-ion batteries in 2022 had an energy density just 12 times higher, which translates to exponential growth of only about 2 percent a year. (Page 34)
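
As a rough check on that figure, here is the arithmetic as a minimal sketch, assuming the 25 watt-hour starting point around 1900 and the 12-fold improvement by 2022 stated above (the exact year count is approximate):

```python
# Annualized (compound) growth rate implied by a 12x improvement in battery
# energy density between roughly 1900 and 2022.
start_density = 25.0        # Wh/kg, lead-acid, ~1900
end_density = start_density * 12
years = 2022 - 1900

battery_cagr = (end_density / start_density) ** (1 / years) - 1
print(f"Battery energy density: ~{battery_cagr:.1%} per year")  # about 2%

# For contrast, performance that doubles every two years compounds at ~41% per year.
moore_cagr = 2 ** (1 / 2) - 1
print(f"Doubling every two years: ~{moore_cagr:.1%} per year")
```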

Today, AI-powered chatbots can help you book a flight to London while riding the New York City subway. But the subway infrastructure itself was built at the beginning of the last century, and the flight will take longer than it would have in the 1970s. The infrastructure of the world beyond our smartphones is deteriorating, if not collapsing. (Page 35)

The abandonment of Bretton Woods means monetary value can no longer be stored over time. Instead of taking calculated risks with savings that have future purchasing power, the post-Bretton Woods system incentivizes a distorted approach to risk-taking. Under this model, you must chase almost every bubble and momentum trade just to preserve the present value of your money—which, consequently, exposes you to the existential risk of losing everything. (Page 37)

The massive cash holdings of some of the largest tech companies—Apple, for example, had around $112 billion worth of cash and securities on hand in early 2023—attest to the lack of ideas regarding how to invest in the future. Microsoft, Apple, and Alphabet, it seems, don’t know what to do with their money anymore. (Page 38)

The atomic bomb preceded the “defense space”; the “space industry” emerged decades after Apollo; “crypto” followed Bitcoin. What does accelerate progress is a concentration of effective people working on adjacent problems. (Page 39)

Whereas the last century brought radically different and almost incommensurate phases in art, architecture, literature, and film, the past three decades have been characterized by a “recession of novelty” 58 and a period of increasing homogeneity. 59 When there are attempts at creativity, they are often trapped in recursive loops of self-referential recycling and nostalgia. (Page 42)

Virtual surrogates of risk-taking have replaced actual risk-taking. Empire-building is restricted to strategy games, romantic conquests are substituted with virtual-reality porn, and the drive for greatness and heroism is passively sublimated into the latest Marvel movie. (Page 43)

Similarly, our conception of the future has become pessimistic and unambitious. Whereas the Star Trek series, which first aired in 1966, envisioned a techno-optimistic future of space colonization, warp drives, transporters, molecular replicators, and time travel, today’s simulations are darker and more dystopian. Space, famously dubbed “the final frontier” in Star Trek, turns into an abyssal source of horror in films like Alien. Instead of boldly going where no one has gone before, our current spacefarers desire nothing more than to return from the darkness of space into the comfort zone of Earth—think of 1995’s Apollo 13, 2013’s Gravity, and 2015’s The Martian. (Page 43)

A large-scale study of 14 million works of literature published over the past 125 years in English, Spanish, and German found that over the last two decades, textual analogs of cognitive distortions, including disorders such as depression and anxiety, have surged well above historical levels—including during World Wars I and II—after declining or stabilizing for most of the 20th century. 67 It is perhaps unsurprising, then, that the use of antidepressants and the practice of self-medicating via hallucinogens and pacifying drugs like cannabis are steadily on the rise. (Page 45)

Attention, one of today’s scarcest resources, gets routed toward likes, comments, and shares to create a constant stream of dopamine-triggering stimulation that translates into quantifiable “engagement.” Beneath the surface of the slick user interfaces, the hard-coded flows of attention and information are controlled by the logic of preferential attachment. What already has a lot of likes will get even more, reinforcing the homogeneity of thought and content. (Page 48)
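
Preferential attachment is straightforward to simulate. The sketch below is a hypothetical illustration (the post counts and like totals are made up, not figures from the text): each new like goes to a post with probability proportional to the likes it already has, so early leads compound.

```python
import random

# Toy preferential-attachment model: every new like is assigned to a post with
# probability proportional to the likes that post has already accumulated.
def simulate(posts=20, likes=10_000, seed=0):
    rng = random.Random(seed)
    counts = [1] * posts              # each post starts with a single like
    for _ in range(likes):
        winner = rng.choices(range(posts), weights=counts)[0]
        counts[winner] += 1
    return sorted(counts, reverse=True)

print(simulate())
# The top few posts end up with a large share of all likes, while the rest lag
# far behind -- the rich-get-richer dynamic described above.
```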

A visit to a modern college campus might produce a similar observation. Higher education is now the site of an irresistible drive to moralize and politicize everything, which in turn imposes self-censorship and a risk-averse culture. (Page 49)

Later in a student’s academic career, risk aversion becomes a response to the economics of higher education. Since the financial crisis in 2007, student debt in the US has increased by 144 percent to around $1.7 trillion. 96 It’s not surprising, then, that students have become more risk averse in selecting their majors and subsequent career paths. Education has become a portfolio-optimization problem. (Page 50)

Sadly, the push toward risk aversion, homogeneity, and conformity—supercharged by hyper-financialization, bureaucratization, and accelerating demographic shifts—afflicts it, too. (Page 50)

Drug discovery in biotech, for example, is becoming slower and more expensive over time. The cost of developing novel drugs doubles every nine years, an observation referred to as Eroom’s law (“Moore” spelled backward, since the trend runs in the opposite direction of Moore’s law). (Page 51)
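
To get a feel for how quickly a nine-year doubling compounds, here is a minimal sketch (assuming a clean nine-year doubling period; the time spans are illustrative, not figures from the text):

```python
# Cost multiplier implied by a doubling every nine years (Eroom's law).
DOUBLING_PERIOD_YEARS = 9

def cost_multiplier(years):
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for span in (9, 18, 27, 45):
    print(f"After {span} years: ~{cost_multiplier(span):.0f}x the original cost")
# Over a 45-year research career, development costs rise roughly 32-fold.
```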

The current institutional system that organizes scientific research is structured in a way that rewards and instills orthodoxy. Second is the ever-expanding bureaucratization of science, which has resulted in the disturbing finding that researchers spend more than 40 percent of their time compiling and submitting grant proposals. 101 These two trends are accompanied by an increasing drive toward the third: hyper-specialization. Researchers and academics have to become ever more specialized to make progress in an ever-narrowing field of study or research. (Page 51)

Scholarly journal publications and citation measures, which Google Scholar has made easier than ever to track, have become the dominant factors in publication, grant-making, tenure, and promotion decisions. An inevitable consequence is a bias toward incrementalism, as crowded scientific fields attract the most citations. High-risk, exploratory science gets less attention and less funding because it is less certain to lead to publishable results. Reducing science to a popularity contest is a good way to ensure that breakthroughs never happen. (Page 53)

It took more than 20 years for the world to recognize CRISPR’s promise. For a long time, research on the subject didn’t attract many citations. As recently as 10 years ago, leading scientific journals rejected papers on CRISPR that would ultimately help win its discoverers the 2020 Nobel Prize in Chemistry. A major scientific breakthrough essentially occurred while no one was looking. (Page 53)

Collectively, funders ignore high-risk research projects and proposals, a major reason why breakthrough science is becoming more rare. What gets funded are conservative research projects that primarily build on established science. (Page 54)

As the number of scientific publications explodes, cognitively overloaded researchers and reviewers need to resort to citations to assess the constant flow of information and data. The result is that the papers that get published are the ones that cite more existing research, particularly research that forms a canonical point of view that a reader can easily recognize. Novel ideas, which don’t fit within a well-established canon, are significantly less likely to be produced, published, and widely read. (Page 55)

A large study analyzing more than 244 million scholars who contributed to 241 million articles over the last two centuries found that as scientists age, they are less likely to disrupt the state of science. Instead, they become more resistant to novel ideas and are more likely to criticize emerging research. 108 Moreover, increasing age correlates with a decrease in risk tolerance, which reinforces scientists’ tendency to work within a well-defined scientific paradigm and contribute to an established canon. 109 In sum, the shifting age dynamics of researchers contribute significantly to scientific stagnation. Older, more established scientists also tend to control not only more resources, such as lab equipment and funds, but also access to prestigious academic positions. (Page 56)

Most studies in these fields cannot be reproduced or replicated. In 2015, an attempt to reproduce 100 psychology studies was able to replicate only 39 of them; 120 a large 2018 study that aimed to reproduce prominent studies in psychology found that only half of the 28 could be replicated. 121 An attempt to reproduce peer-reviewed and widely cited research published in the leading journals Nature and Science concluded that only 13 of the 21 results could be reproduced. 122 Meanwhile, studies conducted by pharmaceutical companies such as Bayer could not reproduce more than 80 percent of selected experiments published in prestigious journals. 123 (Page 60)

All of the symptoms of stagnation, stasis, and decline we’ve discussed in this chapter are caused by a rise in societal risk intolerance, which has resulted not only in an increase in homogeneity and uniformity but also in a crisis of meaning and a failure to build a definitive, optimistic vision of the future. (Page 63)

When you’ve had success for too long, you lose the desire to take risks. (Page 63)

Speculation provides the massive financing needed to fund highly risky and exploratory projects; what appears in the short term to be excessive enthusiasm or just bad investing turns out to be essential for bootstrapping social and technological innovations. If we examine how progress has unfolded over the course of the past century, we find that in both hardware and software, from the Apollo program to Bitcoin, some of our greatest technological achievements are the product of bubbles. (Page 85)

The Manhattan Project, the Apollo program, and the development of Covid-19 vaccines were all bubbles. They attracted floods of capital and talent toward specific, futuristic aims. (Page 90)

An important differentiator is that participants in good bubbles tend to talk about the ways in which everyone’s behavior will be transformed in the future. In contrast, participants in bad bubbles often talk about how a new version of an existing product will be more widely available and slightly better than the previous one. (Page 99)

That FOMO arises during bubbles is an acknowledgment that bubbles are one-off events. For someone with the right skills living at the right time, there was one opportunity to build a railroad network, an airline network, the internet, or a new monetary system. Once these feats were accomplished, the same opportunity would likely never arise again. Often it’s only possible to participate in a bubble once, because bubbles are self-reinforcing and path-dependent. Their end state is highly dependent on their initial state, and small changes tend to be magnified over time. Why does the PC industry today have open hardware standards and relatively closed software standards? In large part because the arrangement was good for Microsoft, which boosted it in the 1980s. (Page 102)

Since bubbles are temporary phenomena with powerful long-term effects, you should fear missing out. Financial FOMO at its worst is the desire to make easy money at the expense of less well-informed people. But at its best, FOMO is the nagging suspicion that someone is building the future, and it could be you. (Page 103)

Bubbles are not only mechanisms for coordinating parallel innovation in emerging technologies. Scientific megaprojects also often follow a bubble-like dynamic. (Page 107)

More broadly, atomic theory and the reality of nuclear weapons were linked by a long chain of technical uncertainties. Resolving these uncertainties serially would have taken too long, so everything was built at once based on rough estimates that were rapidly and continuously refined. Any researcher interested in nuclear weapons in, say, 1935 could have looked at the available information and concluded that such weapons were possible, or at least weren’t demonstrably impossible. But building any individual part in isolation would be worthless except for demonstration purposes or to confirm theories. It took a megaproject to make nuclear weapons a reality. As a species of inflection bubble, a megaproject accomplishes a set of tasks in parallel that would never be accomplished serially. (Page 107)

Every financial mania requires some suspension of disbelief and unshakeable faith that the idea at its core will pan out. More often, these delusions are more rational than they appear, if only in hindsight. Early 20th-century progress in cars and late 20th-century progress in computers both seemed unbelievable to those who watched them happen at the time. But sometimes, at the intersection of finance and technology, when two industries producing complementary products embrace a shared delusion, the delusion becomes true. (Page 108)

To make the envisioned future a reality, delusional ideas, ambitious people, companies, labs, hardware, and computer code all need to collide. Innovation clusters involve many people in the same industry working in close proximity—bouncing around ideas, attracting and poaching talent, raising capital, and figuring out management best practices. (Page 109)

Innovation clusters have existed in many times and places. Shipbuilding and finance were colocated in Renaissance Venice, while Florence specialized in textiles and finance. Coal in Germany’s Ruhr Valley first led to the formation of a cluster of steel companies in the late 19th century, and later to a cluster of chemicals companies. Detroit comprised a cluster of auto companies, as well as the suppliers, bankers, and advertisers who serviced them. Houston has, off and on, been a one-stop shop for acquiring promising oil leases, the financing to drill, and the people and equipment necessary to carry out the project. And Shenzhen is a wonder of modern manufacturing, where the incredible density of people, equipment, and relationships makes it possible to acquire basically any component for manufactured goods. (Page 109)

Clusters are effective because they increase the bandwidth at which participants share information. Even though financial transactions can be carried out entirely by computer and over the phone, economic clusters around finance remain in places like New York and London. The software industry is also highly intangible, yet Silicon Valley has been the best place to open a software company in the past few decades. In fact, the availability of cheap, low-bandwidth communication can increase the value of higher-bandwidth communication. When it’s easy to interact with people digitally, there are more loose social ties that can benefit from physical proximity. (Page 110)

The existence of the bubble convinces people to work on specific projects right now rather than risk missing out. And, like traditional innovation clusters, bubbles encourage deep specialization. You don’t have to build the full stack if you know that every layer is being worked on by someone else; instead, you can focus on the layer you know best. Finally, the bubble’s combination of long-term certainty (“This will happen”) and short-term radical uncertainty (“I have no idea how to make this work right now”) creates an environment that incentivizes information sharing. (Page 110)

Financial bubbles emerge when perception and reality diverge. When this happens, one of two things eventually occurs: either perception moves closer to reality or reality bends in the direction of perception. In the former case, bubbles can result in spectacular crashes that annihilate value and wealth. In the latter, they serve as a necessary catalyst for massive technological acceleration, as some of the bubbles we document in this book demonstrate. Nevertheless, given the potential for ruin that inevitably accompanies a bubble, some readers will remain skeptical of their value. Why not merely pursue safe, incremental progress? To address this alternative, it’s worth considering a world without financial bubbles—a world where no one gets too excited about the possibilities of the future because no one is excited about the future at all. Such a situation is not merely depressing but one that inescapably leads to conflict. (Page 111)

The more adverse the economic environment, the less positive-sum the thinking becomes. (Page 113)

But whether they occur in technology or in scientific megaprojects, the bubbles we document are driven by definite visions of and optimism about the future—building a nuclear weapon, landing a man on the Moon, developing a novel decentralized monetary architecture—that border, in almost all cases, on the delusional. Yet, like self-fulfilling prophecies, these collective delusions and wild speculations materialized into the realities they envisioned. (Page 115)

True, the government today routinely spends far more on Medicare, but that’s a relatively sure thing—we know what we’re getting. The same was not true of the atomic bomb. Building it required an immense investment in untested, specialized physical infrastructure, and the request for that investment came in the middle of an economic depression and a world war. (Page 121)

That risk was magnified by the nature of the Manhattan Project’s participants. The single most important secret in the United States was left in the hands of people whom the national elite were not particularly inclined to trust: academics who had flirted with the far left, assorted European refugees whose allegiances were unclear, and pure oddballs who thumbed their noses at authority, a trait that was tolerated at MIT and Berkeley but not exactly celebrated by the US Army. How did it all happen? How did the Manhattan Project succeed? The answer lies in two themes we have already emphasized: risk tolerance and bubbles. (Page 122)

The Manhattan Project was also bubble-like in that it featured high uncertainty, plenty of waste, many mistakes, and divergent concerns. Some participants were motivated by scientific curiosity, others by geopolitics, but the project focused all of them on the same endpoint. And finally, like many other bubbles, the Manhattan Project had significant spillover effects, enabling new technologies quite apart from the project’s goal and leading to innovations upon innovations. (Page 124)

Like a twitchy day trader, FDR had to ask himself, “What do they know that I’ll wish I’d known?” In any case, if an atom bomb was physically possible, a Nazi bomb was theoretically possible. The US decision to develop nuclear weapons to preempt German nuclear attempts was enormously risky, but what other choice would even a risk-averse government make? As it turns out, Germany’s ban on uranium ore exports from the mines of occupied Czechoslovakia was the result of an April 1939 conference of German nuclear physicists, which had led Hans Geiger to propose starting research for a German nuclear weapons program. The Manhattan Project may be history’s most extreme example of a justified fear of missing out. (Page 128)

Feynman was not the only risk-taker at Los Alamos. A notable factor on this score is the youth of the project participants—the average age of experimenters and engineers at Los Alamos was just 25. 164 Youth, as most of us recall from our teenage years, breeds a risk-taking mentality. The same holds true when applied to organizations, academic fields, and industries. When an organization or field is growing fast, young people join, the average age remains low, and the pace of advancement is high. When organizations and fields peak in importance and popularity, recruiting slows substantially, pushing the average age—as well as organizational risk aversion—upward. (Page 130)

Instead of ranking options and implementing them in order, the Manhattan Project pursued several at once and ended up using three of them together. This approach also demonstrates the healthy side of the government’s fear of missing out. The best way to ensure the project would succeed was to overinvest despite redundancy, knowing it was better to have some wasted effort from trying multiple methods at once than to bet on a single method that might not pan out. (Page 133)

The parallel approach to uranium enrichment increased the risk that the amount of money, time, and material wasted on fruitless efforts would be significant. But it lowered the more important risk that the bomb would not be completed at all. (Page 134)

Nuclear competition also prompted the development of game theory, an important branch of math and economics also invented by von Neumann. In a world of mutually assured destruction, in which a state could retaliate against a devastating nuclear attack but could not stop an assault already underway, academics developed more rigorous decision-making models that incorporated known incentives and uncertain information. Today, game theory underpins the ad pricing models of major internet companies like Alphabet, Amazon, and Facebook. (Page 137)
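
A canonical example of that lineage is the sealed-bid second-price (Vickrey) auction, a game-theoretic mechanism closely related to the formats long used for online ad pricing. The sketch below is a minimal illustration, not any company’s actual implementation; the bidder names and values are hypothetical.

```python
# Sealed-bid second-price (Vickrey) auction: the highest bidder wins but pays
# the second-highest bid, which makes bidding one's true value a dominant strategy.
def second_price_auction(bids):
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]          # the winner pays the runner-up's bid
    return winner, price

bids = {"advertiser_a": 2.50, "advertiser_b": 1.75, "advertiser_c": 0.90}
winner, price = second_price_auction(bids)
print(winner, price)              # advertiser_a wins and pays 1.75
```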

But it was Sputnik II, launched a month later, that struck real fear into the hearts of America’s technocratic elite. Its 1,121-pound launch mass was close to that of the most recent generation of nuclear warheads. More than a research satellite, Sputnik II was a demonstration of military capability. Any major US city could now be targeted by Soviet nuclear warheads. (Page 143)

As with other bubbles we cover, the Apollo program was able to synthesize multiple goals: pure scientific curiosity, a novel engineering challenge, and a desire to demonstrate military superiority over an adversary. (Page 144)

Johnson made a powerful appeal, evoking America’s “race for survival.” As he put it, “control of space means control of the world.” 179 But he also emphasized the more prosaic gains to be made. The Moon mission was, he argued, “a solid investment, which will give ample returns in security, prestige, knowledge, and material benefits.” (Page 145)

Neil Armstrong would later cite the commitment and enthusiasm of Apollo’s developers as the source of the program’s success. Their optimism also played a key role. They weren’t just driven—they truly believed that with enough hard work and ingenuity, failure was impossible. Like the Manhattan Project, the Apollo program succeeded in large part because of the visionary determination of what Peter Thiel has called “extreme founder figures”—people like von Braun, NASA’s George Mueller, and President Johnson—all of whom demonstrated a relentless drive that bordered on the kind of craziness observed during stock market manias. (Page 147)

Apollo illustrates that it’s never a good idea to bet against human ingenuity, even if this innate drive occasionally ends in failures, crashes, and catastrophes. (Page 151)

Apollo’s management structure was decentralized, flexible, and often informal, which enabled scientific and technical risk-taking, rapid cycles of testing and feedback, and a relentless focus on problem-solving. (Page 152)

Given the complex timing, math, and mechanics of lunar-orbit rendezvous (LOR), Houbolt had to work hard to convince program managers and engineers of its merits. Ultimately, his monomaniacal commitment to the idea convinced not only high-ranking NASA bureaucrats but also fellow engineers like von Braun, who previously had been vehemently opposed. (Page 156)

For the Apollo mission, NASA bought massive numbers of integrated circuits, which were used in the onboard computers of the Apollo command module and the Apollo lunar module. Indeed, NASA needed so many integrated circuits that, by the end of 1963, it had purchased 60 percent of all integrated circuits manufactured in the US. (Page 160)

Apollo’s backers won support by emphasizing geopolitical competition and useful spillover technologies. But these factors do not represent the mission’s essence. Apollo built a monument to the human drive for greatness. Its end, therefore, has an enduring civilizational significance. (Page 164)

Over that period, chip fabrication has become intensely specialized. A single fabrication plant takes years to build and costs upward of $15 billion, more than an aircraft carrier or a nuclear power plant. (Page 169)

Moore’s law was a retrospective observation when it was made, but because of the power of bubbles to coordinate behavior, it became a self-fulfilling prophecy that guided the actions of both chip designers and their customers. (Page 170)

Some batches, Brattain noticed, smelled like acetylene lights, which had been used for car headlights until the 1910s. Another Bell Labs researcher, Henry Theurer, realized that the unusual smell of acetylene lights was caused by small impurities of phosphorus. It was a very lucky break. At the time, spectrometers were not sensitive enough to detect impurities that small; the only scientific instrument capable of identifying the accidental dopant was the human nose. (Page 173)

In 1954, for instance, Bell Labs built TRADIC (Transistor Digital Computer), the first entirely solid-state computer, using transistors rather than vacuum tubes. It was an impressive proof of concept but an expensive one, costing 20 times as much as a traditional computer. (Page 175)

The earliest buyer was the military, which was initially interested in transistor-based radios because they consumed less power and weighed 40 percent less than vacuum-tube-based models. (Page 175)

From 1953 through 1955, almost half of the R&D funding for the transistor business came directly from the government, as more communication and navigation systems were transistorized. (Page 176)

In 1957, seven of them—Gordon Moore, Eugene Kleiner, Jean Hoerni, Julius Blank, Sheldon Roberts, Victor Grinich, and Jay Last—decided to leave together. (Page 177)

The proximate cause of this demand was the Soviet launch of the Sputnik 1 satellite in October 1957 and Sputnik 2 a month later. As we’ve already observed, these events immediately prompted an increase in defense spending, with a focus on precision rockets. The goal was to maximize the odds that America’s nuclear missiles would reach their intended targets in the event of a nuclear war. This placed a premium on reliable guidance systems. (Page 178)

He hit upon his solution while reading about a failed dental appliance that used sand to blast away decayed teeth. The dental sandblaster didn’t catch on, but the basic design—start with a piece of material and systematically etch parts of it away—provided Kilby with an important insight that remains a fundamental step in semiconductor production to this day. (Page 179)

Fairchild had firsthand experience with the tyranny of numbers problem that was troubling the rest of the industry. It had signed a contract to sell transistors for the US government’s Minuteman missile program, but its products didn’t meet the government’s reliability standards. For a small firm dependent on federal money—the government was responsible for a third of the industry’s sales from the 1950s through the early ’60s—this was a potential disaster. (Page 180)

But computer manufacturers adopted the transistor relatively late. The first major transistor-based consumer product was the transistor radio. (Page 182)

This is critical, because large corporations have the scale to create bubbles that influence social progress. What was unique about the middle of the 20th century was not that people invented new things but that so many large institutions devoted significant financial resources to that effort. In doing so, they generated both transformative products and knowledge that changed the course of the future. (Page 205)

Du Pont’s best option was to invent something new and try to sell it. In 1903, Du Pont opened its first R&D lab, the Experimental Station. In the decades to come, the Experimental Station and the company’s other labs created novel substances with seemingly magical traits. (Page 205)

In addition, cars required new structural and decorative materials. Du Pont made money from synthetic leather and the fuel additive tetraethyl lead, which no one had previously been able to make safely. (Page 207)

The products Du Pont launched from its well-capitalized, talent-rich R&D labs were revolutionary. In addition to nylon, cellophane, and Teflon, there were also neoprene, PVC, Kevlar, spandex, and many others. These products may sound prosaic today, but that’s a testament to how deeply embedded they are in the texture of everyday life. (Page 207)

In essence, AT&T became a quasi-nationalized public utility that had many good reasons to invest heavily in R&D and allow others to take advantage of its findings. (Page 211)

Meanwhile, the world at large benefited from important inventions produced by R&D centers like AT&T’s Bell Labs, which gave us the transistor, the C programming language, the Unix operating system, lasers, solar cells, and information theory. (Page 211)

AT&T’s history demonstrates that a regulated monopoly, freed from some of the day-to-day concerns about profitability and strictly limited in its ability to pursue new lines of business, could still produce a series of exceptionally valuable discoveries. AT&T was essentially a government-backed theoretical research program with a slight tilt toward telecom applications, wrapped in a nominally private-sector business. (Page 212)

Many Skunk Works projects were reactive, intended to outmaneuver Soviet defenses. For example, in the 1950s the Soviets had excellent air defense systems, which prevented US planes from flying over the country at precisely the moment when the Soviets were also ramping up their nuclear weapons tests. Enter Skunk Works’ U-2. (Page 214)

Skunk Works was a success on multiple fronts. It avoided bureaucracy by working directly with military buyers. It used shrewd management to get the best out of talented workers whose enthusiasm couldn’t be dampened by officials who were not themselves in the know. And it harnessed an overarching motivation to achieve results: the existential threat of annihilation in the geopolitical context of the Cold War. (Page 215)

Critically, Lockheed was backed by a huge pile of government money. The government’s willingness to put funding on the line meant that Lockheed could sustain its investments. After all, what was money compared with the prospect of saving millions of lives and preserving the country? With this trade-off in mind, the government justified investing heavily in initiatives that might or might not have an immediate payoff. (Page 215)

Meanwhile, a premium is placed on immediate earnings. In the 1960s, a research-intensive company might have traded at 30 times its earnings or higher, meaning investors were primarily betting on future growth. By the late 1970s, the average large stock in the US traded at just seven times earnings, implying a very low value for distant, hypothetical profits and rewarding short-term earnings improvements over research breakthroughs. (Page 223)
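
One way to see what a collapsing earnings multiple implies for long-range bets is a simple perpetual-growth (Gordon) approximation, in which P/E is roughly 1 / (r − g), where r is the discount rate and g is expected earnings growth. The sketch below is back-of-the-envelope intuition under that assumption, not a claim about how investors actually priced stocks in either decade.

```python
# Back-of-the-envelope: under the Gordon growth model, P/E ~= 1 / (r - g).
# A lower multiple implies a larger spread r - g, so distant earnings are
# discounted much more heavily.
def year_n_weight(pe_ratio, n=20):
    spread = 1 / pe_ratio                 # implied r - g
    return 1 / (1 + spread) ** n          # present weight of year-n earnings

for pe in (30, 7):
    print(f"P/E {pe}: year-20 earnings carry ~{year_n_weight(pe):.0%} of face value")
# At 30x earnings, profits 20 years out still carry roughly half their weight;
# at 7x, they are worth only about 7 percent as much as today's earnings.
```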

For instance, in the aftermath of Facebook’s IPO many investors were outraged by the company’s focus on mobile, an area that, at the time, appeared to risk cannibalizing its then-lucrative desktop ads business. But while investors could make their views known, they couldn’t act on them—Facebook founder Mark Zuckerberg controlled the board. That turned out to be good for Facebook as well as for its investors. Meta now derives the majority of its revenue from the same mobile advertising that investors once shunned. (Page 224)

Large tech companies with vast resources are spending on R&D. They might not always spend wisely, and investing in incremental improvements to existing products does not have the same impact as funding new ideas. But overall, it’s a promising trend. (Page 225)

Whereas R&D spending by Meta, Alphabet (Google’s parent), Microsoft, and Amazon totaled $109 billion in 2019, these companies channeled $223 billion into R&D amid the AI boom in 2022. (Page 226)

Since 1859, when a down-on-his-luck railroad worker first decided to drill for oil in Titusville, Pennsylvania, the oil and gas industry has continually reinvented itself to increase production. (Page 231)

The race to discover and exploit new sources of oil was perhaps the last gasp of the Victorian colonial adventure story. Small teams of geologists and explorers trekked across desert wastelands, dense jungles, and other difficult terrain, cutting deals with tribal chiefs, kings, and tsars. One explorer in what is now Iran realized that the best places to look for oil were in locations where locals worshiped fire, as a spiritually significant eternal flame might indicate a geologically significant deposit of natural gas. (Page 233)

It was also a contributing factor in the First Gulf War, in which Saddam Hussein invaded Kuwait to gain control of its oil fields. In the 2000s, as oil became increasingly important to the functioning of society, the widespread scramble for oil and other sources of energy represented a scramble for independence and national security. (Page 236)

The basic idea behind fracking is simple: Blast a mixture of fluids into a well, which widens cracks in rocks and releases oil and gas. These fluids contain proppants, or tiny grains of sand that hold cracks open and allow oil and gas to continue flowing. It’s a tricky balancing act—the fluid must hit the rocks, which are under immense pressure, hard enough for them to shatter, but not so hard that it dislodges the proppants, which must remain behind to prop open the cracks. (Page 237)

Then, nearly 20 years later, in 1996, a Mitchell employee attended a baseball game with engineers from a competing oil and gas company, Union Pacific Resources. During the game, the men shared some promising results from a new fracking technique called a slick-water frack, which halved costs. (Page 239)

But fracking has had another beneficial impact that’s worth considering: The US and its allies are no longer dependent on the Middle East and can therefore refrain from intervening in the region to the same extent. (Page 247)

A need for oil has repeatedly required the US government and American companies to deal with governments they’d prefer to avoid. Because of fracking, the best opportunity for energy switched from oil and gas that was inconveniently placed around the globe to oil that was inconveniently placed in the US’s own backyard. (Page 247)

Growing fear of nuclear energy—which has, paradoxically, resulted in policies that have hindered the adoption of a lower-emission energy technology—illustrates how a shared narrative has the power to influence technological adoption and diffusion. Similarly, even as the fracking industry has grown, it has faced pushback from voters because of a bleak narrative involving the violent penetration of the Earth’s surface to extract an alien dark matter that is inextricably linked with environmental destruction, pollution, and war. (Page 248)

Unintentionally or not, the fracking bubble helped the US achieve energy independence. This may not sound as impressive as sending a man to the Moon, but energy independence has similarly large-scale geopolitical effects. As recent trends toward deglobalization, industrial onshoring, and geopolitical fragmentation have made clear, oil is more than a barbarous relic of a fossil-powered past. “Black gold” remains a strategic geopolitical reserve that continues to dictate the rise and fall of the wealth of nations. (Page 251)

How could a bit of code released on obscure online platforms by a pseudonymous developer who later completely disappeared disrupt the nature of money? How could a decentralized alternative to both the Bretton Woods and fiat systems emerge from an open-source protocol with no outside investments, no corporate headquarters, no legal structure, and no roadmap? Answers can be found in Bitcoin’s bubble-like nature. (Page 259)

Bitcoin achieves digital scarcity through a payment architecture in which the creation of new money requires solving computationally expensive problems. This is known as a proof-of-work system. (Page 262)
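
The core mechanic is easy to sketch. The toy example below is a simplification and not Bitcoin’s actual implementation (real mining double-hashes block headers with SHA-256 and adjusts difficulty automatically), but it shows what “computationally expensive” means here: searching for a nonce that pushes a hash below a target.

```python
import hashlib

# Toy proof of work: find a nonce such that SHA-256(data + nonce) starts with a
# given number of zero hex digits. Each extra zero makes the search ~16x harder.
def mine(block_data, difficulty=4):
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("prev_hash|transactions|timestamp")
print(nonce, digest)
# Finding a valid nonce takes many hash attempts; verifying it takes one.
# That asymmetry is what makes newly minted coins scarce and the ledger
# expensive to rewrite.
```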

Until the bubble bursts, these self-fulfilling dynamics are constantly reinforced. Participants discard traditional risk-benefit analyses and valuation models because the risks of not participating in the bubble are perceived as greater than the bubble’s identifiable risks. Whether they form in technology, in markets, or in scientific megaprojects, bubbles channel our thymotic energies, ambitious visions, irrational exuberance, and economic desires toward the realization of a future that is radically different from the present. (Page 287)

The assumption is that we’ve moved past the old rituals, but Girardians view this development as a regression. Instead of eliminating the need for ritual, we’ve re-created the primitive, ad-hoc practice of scapegoating from which these rituals evolved. Think of online cancellation or the symbolic scapegoating of CEOs and politicians. (Page 290)

Bubbles, then, fuse agency with destiny. They synthesize deterministic collectivism—the infamous wisdom or madness of crowds—and the hyper-individualism that manifests, for example, in the cult of founders in Silicon Valley. (Page 294)

From the outside, bubbles seem to instantiate a repeatable and predictable pattern: They start with a novel idea or core technology that attracts extreme commitment and excessive investment from early adopters, which results in a speculative mania that is often followed by a crash. (Page 296)