Buying Back Our Slack
AI and the case for rebuilding the firm
In August 2025, Coinbase CEO Brian Armstrong posted a directive in the engineering Slack channel to onboard AI tools that week. He held a meeting that Saturday and fired everyone who had not complied. Absent from this story is what Coinbase did to help its engineers adapt, because the company did nothing. Nothing was the only option: the capacity to help didn’t exist.
This isn’t a story about one company’s ruthlessness. It’s the endpoint of a fifty-year transformation that stripped corporations of their ability to absorb shocks on behalf of their workforce. Understanding how that capacity was destroyed is the first step toward rebuilding it and, for the first time in two generations, the means to rebuild it exist.
Unraveling the Grand Bargain
For most of the postwar era, the corporation stood between its workers and the volatility of the market. Capital provided stable employment and benefits to workers and their families. Workers, in turn, provided employers with their full productive effort.
In 1950, the United Auto Workers and General Motors signed the Treaty of Detroit. GM guaranteed long-term contracts, pensions, health insurance, and wages tied to inflation; in exchange, the UAW gave up the right to strike over certain workplace controls and guaranteed uninterrupted production. This became the blueprint for the American manifestation of postwar stakeholder capitalism.
This arrangement went largely unchallenged until 1970, when economist Milton Friedman published “The Social Responsibility of Business Is to Increase Its Profits” in the New York Times Magazine. The article, now better known as the Friedman doctrine, gave intellectual legitimacy to the idea that firms exist for the sole purpose of maximizing shareholder value. Friedman explicitly rejected the idea that firms owe their employees or the public anything beyond operating within the bounds of the law.
Friedman argued that corporate obligations to workers and society were not just economically inefficient but an immoral theft of shareholder wealth. The shock-absorbing role the firm had played for its workforce was redefined as waste. As the doctrine took hold, companies began to strip out that capacity in the name of higher efficiency and higher profits.
What followed was five decades of rising profitability through higher efficiency, but the efficiency came at a cost. One of the most fundamental principles in systems theory, engineering, and economics is the efficiency-resilience trade-off: the mechanisms that make a system efficient are the very mechanisms that make it fragile.
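The trade-off shows up even in the simplest formal model of a loaded system. As a purely illustrative sketch, not something from the essay, the textbook M/M/1 queue stands in for any resource run close to capacity: the slack you remove in the name of efficiency is exactly the buffer that was absorbing variance, and waiting time explodes as utilization approaches 100%.

```python
# Illustrative only: the efficiency-resilience trade-off in a textbook M/M/1 queue.
# "Utilization" is how efficiently the resource is used; "wait" is how badly the
# system responds to variance. The same knob drives both.

def mm1_average_wait(utilization: float) -> float:
    """Average queueing delay, in units of mean service time, for an M/M/1 queue.

    utilization: fraction of capacity in use; must be below 1.0, or the queue
    grows without bound.
    """
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (1.0 - utilization)

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {rho:.0%}: average wait = {mm1_average_wait(rho):5.1f}x mean service time")

# At 50% utilization the average wait is 1x the mean service time;
# at 99% utilization it is 99x. Doubling "efficiency" multiplies fragility
# roughly a hundredfold.
```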
The first wave was breaking the taboo around mass layoffs. Until the 1970s, mass layoffs were seen as a failure of management and a last resort to save a dying company. Wall Street’s newfound fixation on quarterly profits and the takeover threats of corporate raiders turned downsizing into a routine strategic tool. Jack Welch popularized the use of mass layoffs to boost stock prices even at companies that were highly profitable. The workforce became the first place firms looked to absorb economic pain, rather than the last.
Once the taboo of firing direct employees was broken, the next logical step to maximize shareholder value was to stop hiring them in the first place. The 1990s marked the start of a wave of outsourcing, offshoring, and subcontracting as firms focused more and more on their “core competencies”. Turning fixed costs into variable contracts shielded firms from long-term obligations. When a macroeconomic shock hits, the firm doesn’t even have to announce layoffs; it simply declines to renew vendor contracts, instantly transferring the economic pain down the supply chain.
From the 2010s onward, algorithmic management and the gig economy achieved the ultimate, frictionless realization of the Friedman doctrine. Zero-hours contracts and just-in-time scheduling software pushed toward paying workers only for the exact minutes they generate revenue. This marked the total removal of the firm from the labor equation. There is zero slack or paid downtime, so the system is perfectly efficient for capital. The worker absorbs 100% of the friction, idle time, and economic variance.
Decades of research on occupational health, most notably the Whitehall studies tracking British civil servants, have shown that one of the strongest predictors of health and mortality is how much control someone has over their work. Case and Deaton’s studies on deaths of despair, the spike in mortality from suicide, drug overdoses, and alcohol-related liver disease among working-class Americans, documented what happens when the Whitehall finding scales to an entire class. The people most affected had lost control not just over their daily tasks but over their life trajectory. The economic variance that corporations shed had to go somewhere. It went into their bodies.
Colliding with a Vacuum
The fifty-year trend was abstract for white-collar workers because the hollowing out was happening around them, not to them. Their institutions were degraded, but their work itself was safe. They could feel the loss of stability, the thinning of management, and the erosion of tenure, but the fundamental threat that the work might not need a human belonged to someone else. Factory workers, call center employees, retail clerks. Not them.
AI changes that. For the first time, the hollowing out meets an existential threat to the work itself, and it hits the population that’s been living inside degraded institutions without realizing how degraded they’ve become. They’re about to find out, because the institution that’s supposed to help them navigate this transition has already been stripped of the capacity to do so.
A friend recently worked on an AI rollout with a client, and two things stood out. First, AI anxiety was real across the client’s teams. People were experiencing real disruption to the way they worked and felt uncertain about their futures. Second, the corporation was cutting costs to pay for more AI, pulling even more adaptive capacity out of the system at a time when its employees were asking, even begging, for help adopting these tools.
Even more extreme versions of this are already playing out. Eric Vaughan, the CEO of IgniteTech, laid off nearly 80% of his staff when they did not adopt AI on his desired timeline. When asked about it, Vaughan said he’d do it again given the chance.
This is the Friedman doctrine reaching the perfect moral inversion of stakeholder capitalism. Absent from all of these stories is any account of what the companies did to help people adapt. Were the tools ready? Was the transition resourced? Was anxiety addressed? Was the pace of change survivable? The capacity to help people adapt, to resource transitions, and to manage the rate of change was gutted long before the arrival of AI.
When you strip an organization of its middle management, its slack, and its forward-planning capacity, the organization loses the ability to adapt to major changes in its environment.
Because the organization cannot adapt, it demands that the individual adapt instantly and perfectly. IgniteTech and Coinbase blamed their workers for a problem created by their lack of institutional capacity. This is the system operating exactly as designed.
The catastrophic strategic error these companies and others like them are making is confusing a chronic environmental shift with an acute operational problem. Demanding immediate adoption “or else” is a crisis playbook. This playbook worked in the past because disruptions were discrete events: market crashes, technological breakthroughs, new competitive threats. But AI is not an acute crisis; it is a continuous acceleration of environmental change.
Emptying the Well
The model Coinbase and IgniteTech have adopted assumes that chewing through employees via burnout and forced attrition is sustainable. That is only true if there is a sufficient supply of qualified replacements, if the cost of turnover is low relative to the cost of retention, and if the knowledge required to do the work is transferable enough that new hires reach productivity quickly.
AI breaks at least two of those conditions and adds a problem of its own. First, the knowledge required to be effective is increasingly context-dependent. The developer who’s been working with your codebase and your AI toolchain for a year isn’t interchangeable with one you hire off the street, because effectiveness with AI tools compounds with familiarity with the system, the domain, and the specific ways the tools interact with your architecture. The replacement hire, burned out from their last job and behind on your specific stack, starts the adaptation cycle over from zero while the technology continues to change underneath them. Progress resets while the company eats the onboarding cost.
Second, the rate of change means the adaptation cycle never completes. In the old model, you could chew through people because each new hire was adapting to a relatively stable environment. Learn the job, do the job, get burned out, get replaced. The replacement walks into roughly the same job. When the job itself is being redefined continuously, each replacement walks into a different job than the one the previous person burned out doing. The onboarding cost rises with every cycle. At some point the math breaks.
Third, the chew-through model destroys institutional learning. Every person who leaves takes with them whatever they figured out about how to work effectively in the new environment. The organization never accumulates knowledge about its own adaptation process. It’s perpetually starting over. The firms that retain people through the transition will compound their learning. The firms that churn will keep paying the same tuition over and over without ever graduating.
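To make the earlier claim that “at some point the math breaks” concrete, here is a deliberately crude back-of-envelope model. Every number in it (the salary, the ramp-up time, the growth rate) is hypothetical, not a figure from the essay; the only structural assumption is the one the essay makes, that each replacement faces a longer ramp-up than the last while the cost of retaining and continuously retraining an incumbent stays roughly flat.

```python
# Back-of-envelope sketch with made-up numbers: churn vs. retention when the
# job keeps being redefined. Onboarding time grows each cycle because every
# replacement walks into a different job than the one their predecessor left.

def cost_of_churning(years: int,
                     base_ramp_months: float = 3.0,
                     ramp_growth_per_year: float = 1.3,
                     monthly_cost: float = 20_000.0) -> float:
    """Total cost of unproductive ramp-up when a role turns over every year."""
    total, ramp = 0.0, base_ramp_months
    for _ in range(years):
        total += ramp * monthly_cost                    # months paid before full productivity
        ramp = min(ramp * ramp_growth_per_year, 12.0)   # the job drifts further each cycle
    return total

def cost_of_retaining(years: int,
                      learning_fraction: float = 0.10,
                      monthly_cost: float = 20_000.0) -> float:
    """Cost of protecting ~10% of an incumbent's time for continuous adaptation."""
    return years * 12 * learning_fraction * monthly_cost

for years in (1, 3, 5):
    print(f"{years} yr: churn ${cost_of_churning(years):,.0f} "
          f"vs retain ${cost_of_retaining(years):,.0f}")

# With these hypothetical inputs: after 1 year, churn costs ~$60k vs ~$24k for
# retention; after 5 years, ~$543k vs $120k. The churn curve compounds while
# the retention curve stays linear.
```

The specific figures don’t matter; the shape of the two curves does.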
What Grows Back
The Friedman doctrine didn’t just change who captures economic value. It changed the information environment of the firm by degrading every mechanism by which organizations perceive, interpret, and respond to reality. Not as a side effect, but as a direct consequence of treating everything that isn’t immediately productive as waste. And now AI arrives, demanding the fastest organizational adaptation in history, and finds institutions that have spent fifty years destroying their own capacity to adapt.
Taking full advantage of AI requires reimagining workflows entirely, and in its ultimate form, reimagining the job itself. The reward is a tremendous boost in productivity, as much as a full order of magnitude in output. We don’t need to guess whether an AI surplus is coming; people on the leading edge of the curve are already realizing it.
AI doesn’t just increase output, it changes the texture of the workday. AI eliminates the most routine tasks first: building a deck, writing tests, cleaning up a spreadsheet. From a productivity standpoint, this is a win. But these tasks served another purpose: cognitive rest. Periods of lower-intensity work give the brain time to reset between high-judgment decisions. The Friedman era stripped out every institutional shock absorber until only one remained, hidden inside the texture of the work itself. With that last one gone, what remains is a workday of orchestration, judgment, and synthesis, with no recovery periods built in. Output per hour rises, but cognitive load rises with it.
People on the leading edge of AI adoption are already experiencing this. Steve Yegge describes the profound exhaustion that follows sustained agentic work, a phenomenon widespread enough to have earned its own name: the “AI Vampire”.
This cognitive tax will land on a workforce already experiencing AI anxiety. When the productivity surplus hits the mainstream, the hollowed-out institution will act on muscle memory. Leadership’s instinct will be to treat 10x productivity gains as an opportunity to fire 90% of their people. Organizations that capture the entire surplus while offering nothing to help their workers recover will find themselves mining for fool’s gold. Workers will simply burn out and take with them the tacit knowledge required to execute successfully.
The coming AI surplus creates the first structural opportunity in over half a century to begin to rebuild the capacity of the firm. To survive the chronic disruption AI brings, firms cannot simply cut headcount. They must use the economic gains to intentionally fund the biological and institutional recovery of their workforce. The firms that thrive will be the ones that use the productivity surplus to buy back the slack.
There are three interventions that will enable firms and their people to thrive in this new reality. These interventions build on each other, and are sequenced so each one creates the capacity for the next.
Building a Buffer
The most fundamental problem employees face is an overwhelming information environment. At most firms adopting AI, every individual employee is doing their own sensemaking about AI: scanning Reddit, reading blog posts, trying tools on their own time, attempting to distinguish between hype and substance with no institutional support. That’s an enormous duplication of effort across the workforce, and most people are bad at it because filtering signal from noise in a rapidly evolving technological landscape is a specialized skill. It’s unreasonable to expect a frontend developer or a product manager to also be an effective technology analyst in their spare time.
The role of the institution is to step back in and do the heavy lifting of sensemaking and pacing. This becomes the organization’s outer shock absorber: a function that stands at the boundary, absorbs the raw chaos of everything that is changing, filters the signal from the noise, and introduces information at a survivable, sequenced rate. Some companies are hiring Chief AI Officers to fill this function, while others might embed it in an existing role or build a small dedicated team.
A small team, augmented with AI, can monitor the landscape, filter signal from noise, tailor guidance to different roles, and push curated updates to the workforce. The function itself is a demonstration that AI amplifies human judgment rather than replacing it. The people on this team must be the most fluent AI users in the company, because the team’s credibility depends on it. This is an internal product team with the workforce as its customer: understanding what different groups need, building feedback loops, and iterating on what works. Every recommendation they make carries implicit proof: we tried this, here’s what works, here’s what doesn’t, here’s what you can ignore.
This intervention provides the what; to succeed, it must be paired with the how.
Buying Back the Slack
The next intervention institutions can make is building learning hours into the workday. Even with curated information, adaptation takes time and cognitive resources. When that time competes with production, production always wins because production is measured and learning isn’t. Boxing off learning hours within the workday does two things: it gives people the actual time to adapt, and it sends a signal that the organization considers adaptation part of the job rather than a personal responsibility to be handled after hours. That signal matters as much as the hours themselves, because it’s the organization saying “we own this problem, not you.”
This time must be paired with institutional sensemaking. Unstructured hours where people are expected to simply “go figure it out” just relocate the same sensemaking burden from nights and weekends to the workday without actually reducing it. Learning hours become the delivery mechanism for the curation. Without curation, learning time turns into doomscrolling Reddit’s AI subs. Without dedicated time, even the best curated content goes unread under the demands of production. These two interventions are load-bearing for each other.
The framing of this time is critical to making it sustainable. Learning hours are vulnerable to the familiar pressure of immediate, visible costs (hours not spent producing) being prioritized over diffuse, long-term benefits (workforce adaptability). But when AI-enabled workers can produce in four hours what used to take eight, learning hours aren’t an additional cost; they’re a reinvestment of surplus that already exists. That reframe makes cutting learning hours equivalent to demanding more than full output, which is harder to justify than cutting a “perk.”
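The arithmetic behind that reframe is simple enough to state directly. The figures below are purely hypothetical, an assumed 2x productivity gain and two protected learning hours a day, but they show why protected learning time still leaves the firm ahead of its old baseline.

```python
# Hypothetical figures only: the "reinvestment of surplus" arithmetic.
HOURS_PER_DAY = 8
baseline_output = HOURS_PER_DAY * 1.0       # pre-AI: 1 unit/hour, 8 units/day
ai_rate = 2.0                               # assumed 2x productivity with AI
learning_hours = 2                          # protected adaptation time per day

output_with_learning = (HOURS_PER_DAY - learning_hours) * ai_rate  # 12 units/day

print(f"old full day: {baseline_output} units; "
      f"AI day with {learning_hours}h of learning: {output_with_learning} units")
# Even after reserving a quarter of the day for learning, output sits 50% above
# the old baseline. Cutting the learning hours claims surplus that was never
# part of the original bargain.
```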
Curation and protected time address the challenge of learning in an ever-changing environment. They don’t address the deeper vulnerability: the roles people occupy are themselves becoming unstable.
When the Walls Come Down
Bell Labs originated the title “Member of the Technical Staff” to emphasize collective expertise over hierarchy. Companies like OpenAI and NVIDIA have revived and modernized the role in recent years to solve a different problem: roles are collapsing because AI is making the boundaries between them obsolete. MTS job postings show enormous scope, merging previously distinct roles into high-output generalists.
Companies like Perplexity have popularized the “product engineer” role, which collapses engineering, product management, and design into one job. Typical projects have one or two people on them. Take Perplexity’s podcast as an example: it’s built end to end by one person, a brand designer who also does audio engineering, ElevenLabs integration, scripting, and research. That’s one person spanning what would traditionally be PM, design, engineering, and content, and shipping a product that people actually use.
These roles existed as separate disciplines because execution in each domain required specialized skill and sheer effort. As AI drops those costs, the walls come down. When a PM can go from a customer story to a working prototype without an engineer, and an engineer can define requirements and test them with users without a PM, crossing the old boundaries becomes the path of least resistance rather than a power grab. But when your identity is tied to a domain of work, someone else entering that domain feels like an incursion. The resulting conflicts look like turf wars and ego, but they’re structural. The boundaries that kept the peace no longer exist.
This change poses a significant threat to identity. Roles like product manager, designer, and engineer were historically tied to methods, and AI is coming for every method simultaneously. As roles collapse into one another, the labels people use to identify themselves lose their meaning.
Builder roles address this challenge by tying identity to an outcome: being someone who makes things to solve problems. The methods are explicitly expected to evolve, and the tools are fluid by definition. AI shifts from being a threat to identity (“Will AI replace me?”) to being just another tool in the builder’s kit (“How do I build better with AI?”). This is a durable, stable identity because building always needs to happen.
The builder role doesn’t eliminate expertise. People will still go deep. The difference is that depth becomes a capability you bring to building rather than a boundary that defines what you’re permitted to work on. The specialist says “that’s not my job.” The builder with a specialty says “I’m strongest in this area, but I can contribute everywhere.” The knowledge doesn’t change. The relationship between the person and the work changes.
Changing roles dissolves professional identities that people spent years constructing and that entire career paths, hiring pipelines, compensation structures, and status hierarchies are built around. This is a significant cost. It’s not a coincidence that builder roles are found at young companies that grew up in this new reality and thus never had to pay the transition cost. Companies will make the transition despite the cost because it is paid once and the resilience it buys is permanent. Established companies will face the transition cost whether they manage it deliberately or not. The question isn’t whether to pay it, but whether to pay it on your terms through intentional consolidation or chaotically through territorial conflict, confusion, and attrition.
These same costs are the reason this change can only happen after the first two. The Chief AI Officer function and the learning hours can be implemented relatively quickly because they relieve employee stress rather than add to it, so the workforce’s spare capacity doesn’t limit how fast they can roll out. Role consolidation is a longer-term structural change that becomes possible only after the other interventions have freed up enough capacity and relieved enough stress to make the transition tolerable.
The Work of a Generation
Designing new systems and rebuilding the capacity of a firm to absorb shocks is not the work of a single essay or a single quarterly off-site. This is the work of a generation.
Whether it’s creating an institutional buffer, protecting time, or redefining roles, it will take a few tries to get this right. And we must get it right. The alternative is continuing to run an obsolete, 50-year-old playbook that is already resulting in institutional collapse and deaths of despair. The surplus is coming, the need is acute, and the only question is whether leaders will use it to rebuild or to repeat the cycle of extraction that brought us here.
For the first time in two generations, the economic incentives of the firm and the needs of its workers are aligned. For fifty years, firms pushed more and more variance onto labor, relentlessly stripping away resilience and making employees absorb the shock in the name of efficiency. The AI surplus gives us the capital to finally buy back our slack. If we step up to the challenge, we won’t just survive the disruption. We will build institutions that are fundamentally stronger, deeply adaptable, and in harmony with the humans who power them.


