The Water Is Rising: What Every Leader Needs to Understand About AI Right Now
By Jason T. Douglas, MHA, FACHE, CMPE, LNHA, HSE
Co-Founder, Frontier Strategy Partners LLC
February 2026
Think back to February 2020. You were probably going about your normal life — going to restaurants, shaking hands, planning trips. Maybe you had seen a headline or two about a virus spreading overseas, but life felt normal. It was normal. And then, over the course of about three weeks, everything changed. Schools closed. Offices emptied. The world rearranged itself into something you wouldn't have believed if you had described it to yourself a month earlier.
Matt Shumer, CEO of OthersideAI and a six-year veteran of the AI startup world, opens his February 2026 essay "Something Big Is Happening" — which has now been viewed by more than 50 million people in less than a month — with exactly that image. "I think we're in the 'this seems overblown' phase of something much, much bigger than Covid," he writes. That framing is worth sitting with. Not to induce panic, but because the alternative to understanding what is coming is being defined by it rather than shaping your response to it.
I'm not writing this as a technology enthusiast. I'm writing this as a healthcare executive and strategic advisor who has spent more than two decades working in environments where resources are constrained, change is constant, and the margin for error is thin. The leaders I work with are smart, experienced, and deeply committed to the people and communities they serve. Most of them have heard about AI. Many have experimented with it casually. Almost none of them have reckoned with what is actually happening right now — and what it means for them, their organizations, and the people who depend on them.
This piece draws on two remarkable works published in early 2026: Shumer's blog post and Howard Marks' February 26 memo "AI Hurtles Ahead" from Oaktree Capital Management. Together, they offer something rare: the urgency of a practitioner experiencing the shift in real time, paired with the measured analytical discipline of one of the most respected investors of the past half century. My goal is to synthesize their insights and translate them into something actionable for leaders operating in any sector — healthcare, finance, law, manufacturing, education, and beyond.
The Pace of Change No One Is Fully Comprehending
Let's start with the facts, because they are genuinely difficult to absorb. In 2022, AI couldn't reliably do basic arithmetic. In 2023, it passed the bar exam. By 2024, it could write functional software and explain graduate-level science. By late 2025, some of the best engineers in the world said they had handed most of their coding work to AI entirely. And then, on February 5, 2026, two major AI laboratories simultaneously released new models — GPT-5.3 Codex from OpenAI and Opus 4.6 from Anthropic — that, by most practitioner accounts, made everything preceding them feel like a different era.
Shumer describes the experience from inside his own workweek. He tells the AI what he wants built, in plain English, walks away from his computer for four hours, and returns to find the work done — finished, not a rough draft, done better than he would have done it himself, with no corrections needed. A year ago, he was going back and forth with AI, guiding it, making edits. Now he describes the outcome and leaves. He offers a specific example: he instructs the AI to build an application, describing what it should do and roughly what it should look like. The AI writes tens of thousands of lines of code, then opens the application itself, clicks through every feature, tests functionality as a user would, identifies what it doesn't like about the experience, revises it independently, and delivers a finished product with a note that it is ready for testing. When he tests it, it is usually perfect. "I'm not exaggerating," he writes. "That is what my Monday looked like this week."
What shook him most was not the volume of output but the quality of decision-making. The most recent model, he writes, "wasn't just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have" (Shumer, "Something Big Is Happening," February 9, 2026). For leaders in any field, that sentence deserves attention. The conventional wisdom has long held that judgment — the kind rooted in experience, intuition, and contextual wisdom — would remain the province of human professionals. That assumption is now in genuine question.
Howard Marks, writing from the perspective of a seasoned investor who has navigated every major market cycle and technological disruption of the past fifty years, found himself similarly unsettled. After asking Claude — Anthropic's AI model — to create a tutorial explaining AI and the changes of the past three months, Marks described his reaction in terms that are striking coming from someone of his disposition: "Before I start in, I want to try to communicate the level of awe with which I viewed Claude's output. It read like a personal note from a friend or colleague. It made reference to things I've talked about in past memos… It argued logically, anticipated points I might make in response, injected humor, and bolstered its credibility by candidly acknowledging AI's limitations, just as I might do" (Marks, "AI Hurtles Ahead," Oaktree Capital Management, February 26, 2026).
There is an organization called METR that measures AI capability with rigor, tracking the length of real-world tasks — measured by how long they take a human expert — that an AI model can complete successfully from start to finish without human help. Roughly a year ago, that number was approximately ten minutes. Then it grew to an hour. Then several hours. The most recent published measurement showed AI completing tasks that would take a human expert nearly five hours. And that number is doubling approximately every seven months, with recent data suggesting the pace of acceleration may itself be accelerating. If the trend holds, we are looking at AI that can work independently for days within the next year, for weeks within two, for month-long projects within three.
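The arithmetic behind that extrapolation is easy to check. Here is a minimal sketch using the essay's approximate figures (a five-hour starting point and a seven-month doubling period; these are rounded illustrations, not official METR data):

```python
# Back-of-envelope extrapolation of the task-length trend described above.
# Starting point and doubling period are approximations from the essay,
# not official METR figures.

def projected_task_hours(months_ahead, start_hours=5.0, doubling_months=7.0):
    """Projected task length (in human-expert hours) after `months_ahead`
    months, assuming exponential growth with a fixed doubling period."""
    return start_hours * 2 ** (months_ahead / doubling_months)

for months in (12, 24, 36):
    hours = projected_task_hours(months)
    print(f"In {months} months: ~{hours:.0f} expert-hours "
          f"(~{hours / 40:.1f} work-weeks)")
```

Running it projects roughly sixteen expert-hours of autonomous work a year out, around fifty at two years, and around 175 at three, which is roughly consistent with the "days, then weeks, then month-long projects" framing, assuming the trend simply holds.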
Marks places this trajectory in historical context by comparing it to the personal computer. ENIAC was built in 1945. IBM didn't begin selling PCs for general business and home use until the early 1980s — nearly forty years later. By contrast, Marks notes that generative AI emerged as a general-purpose technology — one affecting knowledge work, education, and consumer decision-making — only about two years ago, and it is already being used by approximately 400 million individuals and 75 to 80 percent of companies. "Nothing has ever taken hold at the pace AI has," he writes. "It's able to change the world at a speed that approaches instantaneous, outpacing the ability of most observers to anticipate or even comprehend" (Marks, "AI Hurtles Ahead," February 26, 2026). The computer took forty years to reach mass adoption. AI took two. That is not an incremental difference in pace. It is a different phenomenon entirely.
Three Levels — And Why the Third One Changes Everything
To understand what is genuinely new about this moment, it helps to have a framework. Marks' tutorial from Claude describes three levels of AI capability whose distinctions are not subtle — they are the difference between a tool and a transformation.
Level One is Chat AI — the familiar version most people have encountered, where you ask a question and the AI provides an answer. At this level, AI saves time that would otherwise be spent researching and thinking. It is genuinely useful, but it is bounded. The AI answers; it does not act. Level Two is Tool-Using AI, where the AI is instructed to search for information, analyze it, and perform tasks with it. The economic value is meaningfully larger because it saves execution time, not just thinking time, but it is still doing what it is told. A significant portion of current enterprise AI deployment is at this level.
Level Three is Autonomous Agents. At this level, the user gives the AI a goal and the parameters of the desired output, and the AI does the work, checks it, and delivers a finished product. As Marks' tutorial described it: "This is labor replacement at the task level. Not assistance — replacement." Marks writes that AI was at Level One in 2023, Level Two in 2024, and is now operating at Level Three — and that the distinction between Level Two and Level Three "might sound subtle" but "isn't. It's the difference that determines whether AI is a productivity tool or a labor substitute. And that difference is what separates a $50 billion market from a multi-trillion dollar one" (Marks, "AI Hurtles Ahead," February 26, 2026).
For leaders, the Level Three shift is the one that reframes the conversation. Previous waves of automation made workers faster. Level Three AI does the work. A tool that makes your analyst 20 percent faster is worth approximately 20 percent of that analyst's salary — you still need the analyst. A tool that does the analyst's entire job, start to finish, on a defined category of tasks, is worth the analyst's entire compensation for those tasks. Multiply that across every knowledge worker doing structured analytical work — legal associates, financial analysts, management consultants, software engineers, compliance officers, claims adjusters — and you are, as Marks' tutorial frames it, talking about a meaningful share of a labor market that runs into the trillions annually.
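The tool-versus-substitute economics above can be made concrete with a toy calculation (the salary figure here is hypothetical, chosen only to illustrate the ratio):

```python
# Toy comparison of AI-as-tool vs AI-as-substitute value for one analyst.
# All figures are hypothetical illustrations, not data from either source.
analyst_salary = 100_000   # hypothetical annual compensation
speedup = 0.20             # Level Two: makes the analyst 20% faster
substituted_share = 1.0    # Level Three: does the task end to end

# A productivity tool is worth at most the fraction of labor it saves;
# a full substitute is worth up to the entire compensation for the task.
tool_value = analyst_salary * speedup
substitute_value = analyst_salary * substituted_share

print(f"Tool (20% faster):   worth up to ${tool_value:,.0f}/year")
print(f"Substitute (full):   worth up to ${substitute_value:,.0f}/year")
```

The five-fold difference in this toy case is the gap Marks' tutorial points to when it contrasts a $50 billion market with a multi-trillion-dollar one.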
AI Is Now Building Itself
There is one additional development that deserves careful attention because it fundamentally changes the trajectory of what comes next. In the technical documentation for GPT-5.3 Codex, OpenAI included the following: "GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations." Read that carefully. The AI helped build itself.
This is not a forecast about what might happen someday. It is a technology company stating, in its own product documentation, that the model just released was used to create itself. Dario Amodei, the CEO of Anthropic, has said publicly that AI is now writing much of the code at his company, and that the feedback loop between current AI and next-generation AI is, in his words, gathering steam month by month. He has suggested we may be only one to two years away from a point where the current generation of AI autonomously builds the next.
As Shumer describes it: "Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started" (Shumer, "Something Big Is Happening," February 9, 2026). Amodei has also said publicly that AI models "substantially smarter than almost all humans at almost all tasks" are on track for 2026 or 2027. Whether that timeline is precise or not, the directional claim from the CEO of one of the two leading AI safety companies is worth taking seriously.
Marks draws a sharp distinction between AI and every other technological development in history on exactly this point. Railroads, computers, automation, the internet — all were essentially labor-saving devices designed to perform existing tasks more efficiently. AI is different not just in magnitude but in kind. "I believe AI will take on tasks we didn't imagine it doing," Marks writes, "and perhaps even tasks that didn't exist before AI dreamed them up" (Marks, "AI Hurtles Ahead," February 26, 2026).
Is This Real, or Is It Hype? The Marks Framework
At this point, a reasonable leader might ask: is this real, or is it another overpromised technology wave that will disappoint before it delivers? Howard Marks addresses this question with characteristic discipline, breaking what he calls the bubble question into its component parts, because conflating them leads to confused thinking.
Is the technology itself a fad or an illusion? Here he is direct: it is "a very real thing, with the potential to vastly alter the business world and change much of life as we know it." Shumer echoes this with equal clarity — anyone who tried AI in 2023 or early 2024 and found it unimpressive was right at the time, but those versions are ancient history. The models available today are unrecognizable from what existed even six months ago, and the debate about whether AI is "really getting better" or "hitting a wall" is, he writes, simply over.
Is application of the technology a distant dream? No — the technology is already in demand and being applied at massive scale. Are the people building AI infrastructure behaving wisely? Here Marks exercises appropriate caution. He observes that in every major wave of technological innovation, the rush to build infrastructure has accelerated adoption while also destroying significant capital, and there is no reason to assume this time will be different. He also flags that some AI revenue is currently circular in nature — derived from AI companies buying from each other — and that the chain of value must ultimately rest on end users paying for real economic benefit.
Are the valuations of AI businesses rational? For large, profitable companies for whom AI is one part of an already-great business, Marks suggests it is unlikely their prices will prove ruinously excessive. For pure-play AI companies not yet public and for early-stage startups with multi-billion-dollar valuations but no announced products, he applies his honest framing: most people who participate in lotteries end up with worthless tickets, but the few winners get very rich.
His conclusion is the one worth carrying into every strategic conversation: "Since no one can say definitively whether this is a bubble, I'd advise that no one should go all-in without acknowledging that they face the risk of ruin if things go badly. But by the same token, no one should stay all-out and risk missing out on one of the great technological steps forward. A moderate position, applied with selectivity and prudence, seems like the best approach" (Marks, "AI Hurtles Ahead," February 26, 2026). For leaders making operational and strategic decisions rather than investment decisions, the Marks framework translates into a simple and actionable question: is this real enough to warrant deliberate engagement right now? Based on the evidence, the answer is unambiguously yes.
What This Means Across Every Industry
One of the most important things to understand about this wave of AI is that, unlike previous automation, it is not targeting one skill or one sector. It is, as Shumer puts it, a general substitute for cognitive work — and it improves across all domains simultaneously. Previous automation created gaps that displaced workers could move into. When factories mechanized, workers became office workers. When the internet disrupted retail, workers moved into logistics and services. But AI, Shumer observes, "doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too" (Shumer, "Something Big Is Happening," February 9, 2026).
In legal work, AI can already read contracts, summarize case law, draft briefs, and conduct legal research at a level that rivals junior associates. Shumer describes a managing partner at a major law firm who spends hours each day using AI, describes the experience as having a team of associates available instantly, and has said that if the technology stays on its current trajectory, it will be able to do most of what he does before long — and he has decades of experience as a managing partner. He is not panicking, Shumer notes, but he is paying very close attention.
In finance, building financial models, analyzing data, writing investment memos, and generating reports are tasks AI handles competently today and is improving at rapidly. Marks acknowledges that AI possesses many of the qualities a good investor needs — the ability to absorb more data than any individual, superior pattern recognition, freedom from the emotional distortions of fear and greed — while noting that genuinely novel situations requiring human judgment and intuition remain, for now, a domain where superior investors can still add value. The key phrase there is "for now."
In healthcare, AI is approaching or exceeding human performance in several clinical areas including diagnostic imaging, laboratory analysis, and clinical literature review. Administrative functions — coding, documentation, prior authorization, revenue cycle management — are already being meaningfully automated. The question for healthcare leaders is not whether AI will change their operations, but whether they will shape that change or be shaped by it.
In software engineering, the transformation has, by Shumer's account, already fully arrived. A year ago, AI could write a few lines of code. Now it writes hundreds of thousands of lines of working code, tests its own output, and iterates until satisfied. In writing, content, and communications — marketing copy, reports, journalism, technical documentation — the quality of AI-generated content has reached a point where many professionals cannot reliably distinguish it from human work. In customer service, genuinely capable AI agents, not the frustrating chatbots of five years ago but systems that handle complex multi-step interactions, are being deployed now across industries.
Shumer is explicit that his list of affected fields is illustrative, not exhaustive: "If your job isn't mentioned here, that does not mean it's safe. Almost all knowledge work is being affected" (Shumer, "Something Big Is Happening," February 9, 2026). And it is worth noting the opportunity side of this, which tends to be underemphasized. The best tutor in the world is now available to anyone for approximately twenty dollars a month — infinitely patient, available continuously, capable of explaining anything at whatever level of depth is needed. Knowledge has become essentially free. The tools to build things have become extraordinarily inexpensive. Whatever leaders and their organizations have been putting off because it felt too hard, too expensive, or too far outside existing expertise — the barrier to attempting it has collapsed.
The Honest Reckoning on Jobs and Society
Both Shumer and Marks engage seriously with the most difficult question this technology raises: what happens to the people whose work AI displaces? Shumer notes that Dario Amodei has publicly predicted that AI will eliminate fifty percent of entry-level white-collar jobs within one to five years, and that many in the industry believe he is being conservative. The capability for massive disruption could arrive by the end of this year, even if the economic ripple effects take longer to fully manifest.
The historical optimist argument is not unreasonable — agricultural mechanization, the industrial revolution, the internet, each was predicted to cause mass unemployment, and each ultimately did not. Marks acknowledges this history fairly but is cautious about extrapolating the pattern this time: "I'm neither enough of a futurist to imagine the new jobs that may be created nor enough of an optimist to trust that they'll materialize. That certainly doesn't mean they won't" (Marks, "AI Hurtles Ahead," February 26, 2026). The speed of this transition is unlike anything in the historical record. AI can put people out of work far faster than society can retrain them and create new opportunities.
This is not a reason to resist AI. It is a reason for leaders to think carefully and humanely about how they deploy it, how they communicate about it, and what obligations they carry to the people who may be affected. Marks closes his memo with a sentence that reflects this tension with unusual candor: "A friend wrote to me recently that he'd rather be an optimist and wrong than a pessimist and right. Me too. I wish I could be confident that my worrying is unwarranted" (Marks, "AI Hurtles Ahead," February 26, 2026). That is not the voice of a Luddite. It is the voice of a thoughtful person who has spent a career being honest about uncertainty, and who is not going to stop now.
What Leaders Should Actually Do
Both Shumer and Marks are clear that the purpose of understanding this is not paralysis — it is preparation. And the preparation is more straightforward than the magnitude of the change might suggest.
Start using AI seriously, not casually. Shumer is specific: don't use AI as a search engine. Push it into your actual work. Take the most time-consuming, analytical part of your week and give it to AI. See what happens. The first attempt may not be perfect — iterate, rephrase, give it more context, try again. His guidance is worth quoting directly: "If it even kind of works today, you can be almost certain that in six months it'll do it near perfectly. The trajectory only goes one direction" (Shumer, "Something Big Is Happening," February 9, 2026). Marks makes a related point about prompt quality — the people whose AI experiences are disappointing are often using free-tier tools or asking shallow questions. The limitation is generally on the user side, not the model side.
Have no ego about it. Shumer's managing partner example is instructive. A person with decades of experience and significant institutional standing is spending hours every day with AI — not because he is a technology enthusiast, but because he understands what is at stake and refuses to let professional pride stand between him and capability. The leaders who will struggle most are those who feel that using AI diminishes their expertise. It doesn't. It extends it.
Think carefully about what is hardest to replace. Neither Shumer nor Marks suggests that everything is equally vulnerable on the same timeline. Relationships and trust built over years. Work requiring physical presence. Roles with licensed accountability where someone must sign off, take legal responsibility, or stand in a courtroom. Industries with heavy regulatory structures where institutional inertia will slow adoption. These are not permanent shields, but they buy time — and time, used well, is the most valuable resource available right now.
Get your financial house in order. Not a call to drastic action, but a recognition that if real disruption is coming to your industry, financial resilience matters more than it did a year ago. Build reserves where possible. Be cautious about fixed commitments that assume current revenue is guaranteed. For organizational leaders, this translates into a strategic posture question: are you positioned to adapt, or are you locked into structures and cost bases that assume the current environment is stable?
Build the habit of adapting. The specific tools matter less than the muscle of learning new ones quickly. The models that exist today will be obsolete within a year. The workflows being built now will need to be rebuilt. Shumer makes a concrete suggestion worth taking literally: spend one hour every day experimenting with AI. Not reading about it — using it. Try something you haven't tried before, something you're not sure it can handle. "If you do this for the next six months," he writes, "you will understand what's coming better than 99% of the people around you. That's not an exaggeration. Almost nobody is doing this right now. The bar is on the floor" (Shumer, "Something Big Is Happening," February 9, 2026).
Reframe what you're building toward. The opportunity side of this shift is as real as the threat. Capabilities that once required teams of specialists and significant capital are becoming accessible to any leader willing to engage seriously. The leaders who see this as an opening — to serve constituencies they couldn't reach before, to do things they couldn't do before, to operate at a scale that wasn't previously achievable — will be better positioned than those who are purely defensive.
The Bigger Picture
Marks, who has spent his career studying dislocations and distinguishing the real from the illusory, ends his memo with a statement that deserves to be read slowly: "The bottom line for me is that AI is very real, capable of doing a lot of work that heretofore has been done by knowledge workers, and growing extremely rapidly in terms of applications. What we see today is only the beginning… if I had to guess, I'd say its potential is more likely underestimated today rather than overestimated" (Marks, "AI Hurtles Ahead," February 26, 2026). That assessment, coming from someone with his track record of intellectual honesty and disciplined skepticism, should register.
Shumer closes with a sentence that has stayed with me since I first read it: "We're past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet. It's about to" (Shumer, "Something Big Is Happening," February 9, 2026).
In February 2020, most of us were going about our normal lives while a small number of people were watching signals that something unprecedented was approaching. The people who acted early — who took the signals seriously, who prepared, who adapted — did better than those who waited for certainty that never fully arrived before the world had already changed.
The same dynamic is playing out right now. The signals are not subtle. The people closest to what is happening are not equivocating. And the window in which early movers have a genuine advantage over those who wait is open — but it will not stay open indefinitely.
The question for every leader reading this is not whether AI is real. That question has been answered. The question is what you intend to do with the time you have before it redefines the environment you operate in. Begin now. Not because the future is certain, but because the alternative — looking back and wishing you had — is entirely avoidable.
Sources
Shumer, Matt. "Something Big Is Happening." Blog post, February 9, 2026.
Marks, Howard. "AI Hurtles Ahead." Memo to Oaktree Clients, Oaktree Capital Management, L.P., February 26, 2026.
Jason T. Douglas (MHA, FACHE, CMPE, LNHA, HSE) is President and CEO of Lexington Regional Health Center and Co-Founder of Frontier Strategy Partners LLC, a rural healthcare consulting firm focused on strategy, service line development, and operational performance for Critical Access Hospitals and clinics.
© 2026 Frontier Strategy Partners LLC. All rights reserved.