Thursday, 19 February 2026

Here's what smart people are saying about a software apocalypse

OpenAI CEO Sam Altman, Nvidia CEO Jensen Huang, and AWS CEO Matt Garman
  • Wall Street has cooled a bit after a massive sell-off wiped more than $1 trillion in Big Tech valuations.
  • Recent AI advancements have undermined some investors' faith in established software names.
  • Jensen Huang, OpenAI CEO Sam Altman, and Figma CEO Dylan Field have all weighed in.

Big Tech is continuing to bounce back after a brutal sell-off.

Wall Street's fears of AI-related disruption drove a sell-off of software stocks after the release of Anthropic's new industry-specific plug-in.

Not everyone in finance and tech is sold on the idea that AI is going to kill the software business.

From Nvidia's CEO dismissing the concerns to Zoho's founder acknowledging the industry is "ripe for consolidation," here's what leaders in tech and finance are saying:

Jensen Huang
Nvidia CEO Jensen Huang

Nvidia CEO Jensen Huang said software is a tool for AI to use, not one AI will replace.

"There's this notion that the tool industry is in decline and will be replaced by AI," Huang said during a recent Cisco AI event. "You could tell because there's a whole bunch of software companies whose stock prices are under a lot of pressure because somehow AI is going to replace them. It is the most illogical thing in the world and time will prove itself."

Huang named ServiceNow, SAP, Cadence, and Synopsys as bright spots in the industry.

Sam Altman
OpenAI CEO Sam Altman

OpenAI CEO Sam Altman said the software market will remain volatile.

"It's different, it's definitely not dead," Altman said during an interview on TBPN. "How you create it, how you're going to use it, how much you're going to have written for you each time you need it, versus how much you'll want sort of a consistent UX —yeah, that's all going to change."

In the meantime, Altman said, sell-offs like the one Wall Street just experienced are likely to continue.

"I think it's just going to be volatile for a while as people figure out what this looks like."

Dylan Field
Figma CEO Dylan Field

Figma CEO Dylan Field said it's not just about building something quickly, it's about "building the right thing."

"Good enough, it works, it's not enough," he told CNBC in February. "You really have to focus."

Overall, Field said volatility is good for companies. Shares of Figma have dropped by over 79% over the past year, illustrating the growing pains for the design company since its highly touted IPO in July 2025.

"I think if you look back on this time, we'll just be a more resilient company overall," Field said.

Figma also announced a partnership with Anthropic on a tool that converts AI-generated code into editable designs.

Sridhar Vembu

Sridhar Vembu, founder of Zoho, a cloud-based software company, said SaaS was "ripe for consolidation" long before the rise of AI.

"An industry that spends vastly more on sales and marketing than on engineering and product development was always vulnerable," he wrote on X. "The venture capital bubble and then the stock market bubble funded a fundamentally flawed, unsustainable model for too long. AI is the pin that is popping this inflated balloon."

Vembu said he asks his employees to consider the possibility of the company's death.

"When we accept that possibility, we become more fearless and that is when we can calmly chart our course."

Steven Sinofsky
Steven Sinofsky

Steven Sinofsky, who helped lead the development of Windows 7 and 8, said AI may change "what we built and who builds it," but tales of software's demise are just "nonsense."

"Wall Street is filled with investors of all types. There's also a community, and they tend to run in herds. The past couple of weeks have definitely seen the herd collectively conclude that somehow software is dead. That the idea of a software pure play will just vanish into some language model," Sinofsky wrote in a lengthy post on X. "Nonsense."

Sinofsky said it is true some companies will fail. He also noted that such cycles have happened in retail and media.

"Strap in," he wrote. "This is the most exciting time for business and technology, ever."

Rene Haas
Arm CEO Rene Haas

Arm CEO Rene Haas isn't panicking.

"As I look at enterprise AI deployment, we aren't anywhere close to where it can be," Haas told the Financial Times.

Haas, who leads the SoftBank-owned semiconductor company, said the current market reaction is "micro-hysteria."

Stephen Parker

JPMorgan analyst Stephen Parker said investors shouldn't be too worried by the sell-off.

"We're seeing a rotation," Parker told CNBC. "It's about a broadening of the recovery story. Cyclicals are picking up the slack, and it's not just the AI infrastructure plays and the hyperscalers that are driving markets higher."

Parker, the co-head of global investment strategy at JPMorgan Private Bank, said AI developments are likely to continue to cause disruption in the software industry.

Anish Acharya
Anish Acharya

Anish Acharya, a general partner at A16z, said the sell-off was an overreaction based on a misunderstanding of how AI will be deployed.

"You have this innovation bazooka with these models," Acharya told podcaster Harry Stebbings during an episode of "20VC." "Why would you point it at rebuilding payroll or ERP or CRM, right? You're going to take it and use it to extend your core advantage as a business, or you're going to take it to optimize the other 90% that you're not spending on software today."

Acharya said there "will be secular losers," but overall, the sell-off was misguided.

"I think the general story that we're going to vibe code everything is flat wrong and the whole market is oversold software," he said.

Spenser Skates
Amplitude CEO Spenser Skates

Amplitude CEO Spenser Skates said the sell-off correctly identified that many SaaS companies are moving too slowly.

"The median SaaS company their innovation has actually slowed to a standstill," Skates told TBPN in February. "I don't know if you guys have ever been inside of these, but it's crazy how little they ship in terms of net new products."

Skates said AI has put a premium on the speed of innovation.

"It's like sushi," he said. "Buyers are always going to want the best thing. So, if you're keeping up with innovating the best thing, you will be able to charge a premium. It's fine that the 7-Eleven at the gas station now sells sushi. It's not going to put Jiro in Japan out of business."

Matt Garman
AWS CEO Matt Garman

AWS CEO Matt Garman said the current fears are "overblown."

"AI is absolutely a disruptive force that's going to change how software is consumed and how it's built," he told CNBC in February.

The top Amazon exec said current SaaS companies can still survive this moment.

"They have to innovate, just like the rest of the world," he said. "They can't stand still. If they stand still, they're absolutely going to be disrupted."



from Business Insider https://ift.tt/t8W71dL

AI's top leaders got corralled into holding hands. It made for a photo op for the ages.

Sam Altman and Dario Amodei's hands did not make contact, and the internet noticed.
  • Sam Altman and Dario Amodei's awkward moment at the India AI Summit went viral.
  • The two AI leaders — and former colleagues — raised their arms but did not hold hands.
  • Tech leaders, including Sundar Pichai, gathered onstage with Narendra Modi in New Delhi.

The world's biggest AI leaders gathered in New Delhi this week, prepared to talk about the latest models and their impact on societies. They seemed less prepared for a 14-person hand-hold that tech circles will remember for a long time.

On Thursday, top executives, including Demis Hassabis, Sundar Pichai, Brad Smith, Sam Altman, and Dario Amodei, lined up on stage with Indian Prime Minister Narendra Modi at the India AI Impact Summit.

In his signature style, Modi held hands with Pichai on his right and Altman on his left and began raising their linked arms for a celebratory photo. Modi has previously taken photos this way with world leaders, including former US President Joe Biden and EU Commission President Ursula von der Leyen.

The other tech execs were quick to catch on to Modi's directive, looking right and left before grabbing their neighbor's hand.

The photo op's most meme-worthy scene was the OpenAI and Anthropic CEOs not managing — or refusing — to hold each other's hands. After a pause, they raised their arms without making contact.

The moment was widely screenshotted and shared on social media.

The awkward moment followed a Super Bowl advertising jab between the two AI giants earlier this month. Anthropic's 30-second commercial roasted OpenAI over its decision to bring ads to ChatGPT.

After Anthropic released a series of Super Bowl ad teasers, Altman responded with a lengthy post on X, calling the Anthropic ad "dishonest."

Amodei cofounded Anthropic in 2021 after leaving OpenAI, citing disagreements over AI safety priorities and the lab's leadership style.



from Business Insider https://ift.tt/VTMbPds

Wednesday, 18 February 2026

OpenAI, Meta, and Apple's latest battle: Breaking your phone addiction

DeepMind's CEO said there are still 3 areas where AGI systems can't match real intelligence

DeepMind's CEO said AGI still lags behind real intelligence in three areas.
  • DeepMind's Demis Hassabis said artificial general intelligence efforts still leave a lot to be desired.
  • Current systems cannot learn continuously, cannot plan long-term, and lack consistency, he said.
  • He said last year that it would take five to 10 years for the world to see real AGI in play.

True artificial general intelligence is on the way, but it still has some way to go, said Google DeepMind's CEO.

Speaking at an AI summit in New Delhi, Demis Hassabis was asked whether current AGI systems can match human intelligence. AGI is a hypothetical form of machine intelligence that can reason like people and solve problems using methods it was not trained in.

Hassabis' short answer: "I don't think we are there yet."

He listed three areas where current AGI systems are falling short. The first was what he called "continual learning," saying that the systems are frozen based on the training they received before implementation.

"What you'd like is for those systems to continually learn online from experience, to learn from the context they're in, maybe personalize to the situation and the tasks that you have for them," he said during the discussion.

Secondly, Hassabis said current systems struggle with long-term thinking.

"They can plan over the short term, but over the longer term, the way that we can plan over years, they don't really have that capability at the moment," he said.

And lastly, he said that the systems lack consistency. They're adept in some areas and unskilled in others.

"So, for example, today's systems can get gold medals in the international Math Olympiad, really hard problems, but sometimes can still make mistakes on elementary maths if you pose the question in a certain way," he said. "A true general intelligence system shouldn't have that kind of jaggedness."

A human math expert, by comparison, would not make mistakes on an easy problem, he added.

Hassabis said in a "60 Minutes" interview last year that true AGI would arrive in five to 10 years.

The executive cofounded DeepMind, an AI research lab, in 2010. The lab was acquired by Google in 2014 and is the brains behind Google's Gemini. In 2024, Hassabis won a joint Nobel Prize in chemistry for his work on protein structure prediction.

AGI is a disputed topic in Silicon Valley. Databricks CEO Ali Ghodsi said at a September conference that current AI chatbots already meet the definition of AGI, but Silicon Valley leaders keep "moving the goalposts" and pushing toward superintelligence, or AI that can outthink humans.

The India AI Impact Summit, running from Monday to Friday this week, has attracted big names from the tech and AI spheres. Notable speakers on the summit's agenda include OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Google CEO Sundar Pichai, and Meta's chief AI officer, Alexandr Wang.



from Business Insider https://ift.tt/cdbnrAP

Tuesday, 17 February 2026

Hide and squeak: Passengers flying to Spain endured a flight to nowhere after a rodent was spotted on board

An SAS Airbus A320neo.
  • A flight from Stockholm to Spain's Costa del Sol turned around earlier this month.
  • It diverted after a rodent sighting on board, a spokesperson for SAS told Business Insider.
  • A replacement plane landed in Málaga five hours later than the first one was scheduled to.

Passengers in Europe had a grueling flight to nowhere earlier this month due to an unusual passenger.

A Scandinavian Airlines flight turned around after a rodent was spotted on board.

Flight 1583 departed Stockholm Arlanda Airport on February 7 and was supposed to land in Málaga, Spain, four hours later.

However, almost two hours into the journey, the Airbus A320neo U-turned while flying over Belgium, according to flight-tracking data.

It flew back to Sweden, touching down in the capital 3 hours and 20 minutes after taking off.

In a statement to Business Insider, an airline spokesperson said the plane turned around "after a suspected rodent sighting on board."

"We followed established procedures and, as a precaution, returned the aircraft to Arlanda to carry out standard inspections of both the aircraft and relevant suppliers," they added. "Passengers were boarded on a new aircraft to Malaga shortly after."

SAS did not confirm exactly what kind of rodent was spotted, but Flightradar24 reported that it was a mouse.

Diverting a plane due to a rodent might seem bizarre, but a loose animal on board can pose a safety risk: it could damage electrical wiring or other components, leading to system faults or, in rare cases, a fire.

Data from Flightradar24 shows an extra flight, operated under the call sign SAS95T, flew from Stockholm to Málaga later the same day.

It arrived around 3:30 p.m., five hours later than passengers were first scheduled to arrive on the Costa del Sol.

This wasn't the first time such an unwelcome passenger had caused a flight to turn around.

In 2024, One Mile at a Time reported that an SAS flight to Málaga returned to Copenhagen after a mouse was found in somebody's in-flight meal, before it escaped into the cabin.

Later that year, a TAP Air Portugal plane was grounded after 132 hamsters escaped from their cages inside the cargo hold.



from Business Insider https://ift.tt/lFIu6ea

Office food perks are getting better — and they're here to stay

Monday, 16 February 2026

The art of the squeal: What we can learn from the flood of AI resignation letters


Corporate resignations rarely make news, except at the highest levels. But in the last two years, a spate of X posts, Substack open letters, and public statements from prominent artificial intelligence researchers has created a new literary form — the AI resignation letter — with each addition becoming an event to be mined for meaning. Together, the canon of these letters — their authors apparently bound by non-disclosure agreements and other loyalties, legally compelled or not — tells us a lot about how some of the top people in AI see themselves and the trajectory of their industry. Overall, the image is bleak.

This past week brought several additions to the annals of "Why I quit this incredibly valuable company working on bleeding-edge tech" letters, including from researchers at xAI and an op-ed in The New York Times from a departing OpenAI researcher. Perhaps the most unusual was by Mrinank Sharma, who was put in charge of Anthropic's Safeguards Research Team a year ago, and who announced his departure from what is often considered the more safety-minded of the leading AI startups. He posted a 778-word letter on X that was at times romantic and brooding — he quoted the poets Rainer Maria Rilke and Mary Oliver. The letter opined on AI safety, his own experiences working on AI sycophancy and "AI-assisted bioterrorism," and the "poly-crisis" consuming our society, and carried three footnotes and some ominous, if vague, warnings.

"We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences," Sharma wrote. "Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions."

Sharma noted that his final project at Anthropic was "on understanding how AI assistants could make us less human or distort our humanity" — a nod, perhaps, to the scourge of AI psychosis and other novel harms emerging from people overvaluing their relationships with chatbots. He said that he didn't know what he was going to do next, but expressed a desire to pursue "a poetry degree and devote myself to the practice of courageous speech." The researcher ended by including the full text of "The Way It Is" by the poet William Stafford.

In the annals of AI resignations, Sharma's missive might be less dramatic than the boardroom coup that ousted OpenAI CEO Sam Altman for five days in November 2023. It's less troubling than some of the other end-of-days warnings published by AI safety researchers who quit their posts believing that their employers weren't doing enough to mitigate the potential harms of artificial general intelligence, or AGI, a smarter-than-human intelligence that AI companies are racing to build. (Some AI experts question whether AGI is even achievable or what it might mean.)

But Sharma's note captures the deep attachments that top AI researchers — who are extremely well-compensated and work together in small teams — feel to their work, their colleagues, and, often, their employers. It also exposes some of the tensions that we see cropping up again and again in these resignation announcements. At top AI labs, there's an intense competition for resources between research/safety teams and people working on consumer-facing AI products. (Few, if any, public resignations seem to come from people on the product side.) There are pressures to ship without proper testing or established safeguards, and without knowing what might happen when a system goes rogue. And there's a deep sense of mission and purpose that can sometimes be upended by feelings of betrayal.

Many of the people who have publicly quit AI companies work in safety and "alignment," the field tasked with making sure that AI capabilities align with human needs and welfare. Many of them seem very optimistic about AI, and even AGI, but they worry that financial pressures are eating away at safeguards. Few seem to be giving up on the field entirely — except perhaps Sharma, the aspiring poet. Either they jump ship for another seven-, eight-, or nine-figure job at a competing AI startup, or they become civic-minded AI analysts and researchers at one of a growing number of AI think tanks.

Sam Altman

All of them seem to be worried that either epic gains or epic disasters lie ahead. Announcing his departure from Anthropic to become OpenAI's Head of Preparedness earlier this month, Dylan Scandinaro wrote on LinkedIn, "AI is advancing rapidly. The potential benefits are great — and so are the risks of extreme and even irrecoverable harm." Daniel Kokotajlo, who resigned from OpenAI, said that OpenAI's systems "could be the best thing that has ever happened to humanity, but it could also be the worst if we don't proceed with care."

Recently, xAI, where co-founder Elon Musk is notorious for tinkering with the proverbial dials of the Grok chatbot, has seen a half-dozen members of its founding team leave. But the locus of the AI resignation letter, as a kind of industry artifact, is the red-hot startup OpenAI, where major figures, including top executives and safety-minded researchers, have been leaving for the last two years. Some resigned; some were fired; some were described in the press as "forced out" over internal company disputes. Seven left in a short period in the first half of 2024.

With revenue that pales beside its massive and growing infrastructure costs, OpenAI recently announced that it would begin incorporating ads into ChatGPT. That caused researcher Zoë Hitzig to quit. This week, she published a resignation letter in the Times, warning about the potential implications of ads becoming part of the substrate of chatbot conversations. "ChatGPT users have generated an archive of human candor that has no precedent, in part because people believed they were talking to something that had no ulterior agenda," she wrote. But, she warned, OpenAI seemed prepared to leverage that "archive of human candor" — much as Facebook had done — to target ads and undermine user autonomy. In the service of maximizing engagement, consumers might be manipulated — the classic sin of the modern internet.

If you think you are building a world-changing invention, you need to be able to trust your leadership. That's been a problem at OpenAI. On November 17, 2023, Altman was dramatically fired by the company's board because, it claimed, Altman was "not consistently candid in his communications with the board." Less than a week later, he performed his own boardroom coup and was reinstated, before consolidating his power. The exodus proceeded from there.

On May 14, 2024, OpenAI co-founder Ilya Sutskever announced his resignation. Sutskever was replaced as head of OpenAI's superalignment team by John Schulman, another company co-founder. A few months later, Schulman left OpenAI for Anthropic. Six months later, he announced his move to Thinking Machines Lab, an AI startup founded by former OpenAI CTO Mira Murati, who had replaced Altman as OpenAI's interim CEO during his brief firing.

The day after Sutskever left OpenAI, Jan Leike, who also helped head OpenAI's alignment work, announced on X that he had resigned. "OpenAI is shouldering an enormous responsibility on behalf of all of humanity," Leike wrote, but the company's "safety culture and processes have taken a backseat to shiny products." He thought that "OpenAI must become a safety-first AGI company." Less than two weeks later, Leike was hired by Anthropic. OpenAI and Anthropic did not respond to requests for comment.

At OpenAI, departing researchers have said that the experts concerned with alignment and safety have often been sidelined, pushed out, or scattered among other teams, leaving researchers with the sense that AI companies are sprinting to build an invention they won't be able to control. "In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready" for AGI, wrote Miles Brundage when he resigned from OpenAI's AGI readiness team in 2024. Yet he added that "working at OpenAI is one of the most impactful things that most people could hope to do" and did not directly criticize the company. Brundage now runs AVERI, an AI research institute.

Across the AI industry, the story is much the same. In public pronouncements, top researchers gently chastise or occasionally denounce their employers for pursuing a potentially apocalyptic invention while also emphasizing the necessity of doing that research. Sometimes they offer a "cryptic warning" that leaves AI watchers scratching their heads. A few do seem genuinely alarmed at what's happening. When OpenAI safety researcher Steven Adler left the company in January 2025, he wrote that he was "pretty terrified by the pace of AI development" and wondered if it would wipe out humanity.

Yet in the many AI resignation letters, there's little discussion of how AI is being used right now. Data center construction, resource consumption, mass surveillance, ICE deportations, weapons development, automation, labor disruption, the proliferation of slop, a crisis in education — these are the areas where many people see AI affecting their lives, sometimes for the worse, and the industry's pious resignees don't have much to say about it all. Their warnings about some disaster just beyond the horizon become fodder for the tech press — and de facto cover letters for their next industry job — while failing to reach the broader public.

"Tragedies happen; people get hurt or die; and you suffer and get old," wrote William Stafford in the poem that Mrinank Sharma shared. It's a terrible thing, especially the tones of passivity and inevitability — resignation, you might call it. It can feel as if no single act of protest is enough, or, as Stafford writes in the next line: "Nothing you do can stop time's unfolding."


Jacob Silverman is a contributing writer for Business Insider. He is the author, most recently, of "Gilded Rage: Elon Musk and the Radicalization of Silicon Valley."



from Business Insider https://ift.tt/si3LNl0
