As we close the chapter on 2025, a year dominated by conversations about whether AI will save the world or ruin it, it’s clear the pace of change in the tech industry is only accelerating. While the end of the year is often a time for reflection, our focus is always on what's next.
To help you prepare for the year ahead, we asked leading tech reporters for their predictions on how 2026 will unfold. It should come as no surprise that their thoughts centered on AI and LLMs. Here’s what they see on the horizon:

Alex Heath, Founder, Sources, and Co-Host of ACCESS
Alex, a prolific tech reporter who has interviewed many of Silicon Valley’s leading figures, shared that he thinks “we’re all, collectively, going to talk a lot more about world models in 2026. LLMs [are] still, obviously, very important, and will continue to be so. But [he] believes the conversation will shift toward [world] models that understand 3D space and can therefore control robotics and agents in those spaces.
“ChatGPT will remain the dominant consumer chatbot platform, but the gap between ChatGPT and alternatives will close. [He] doesn’t see them being the far-and-away leaders in 2026.”
Alex also predicted that “Sam Altman will not be the CEO of OpenAI this time next year.” He clarified, saying, “Not that he won't still be with the company, but [he bets] that he won't have the CEO title.”

George Lawton, Correspondent, diginomica
George, a journalist who’s covered how complex technology works for over 25 years, believes “physical AI [will] mature into enterprise workflows.”
George went on to tout “NVIDIA's recent release of Project Apollo[, which simplifies] the workflows for surrogate AI physical models that [speed] simulation by 50 to 10,000 times, and is already gaining traction among major electronics design and simulation vendors. Over the next year, vendors of product lifecycle management tools like Siemens will begin weaving these into enterprise workflows to support just-in-time simulation and analysis. Next up will be applying these mathematical and statistical predictive models across the business.”

Ron Miller, Editor, FastForward
Ron, a technology reporter with almost 30 years of experience covering enterprise technology, predicted that “the AI bubble is going to burst.” He pointed to “the amount of money being thrown around and poured into this, the amount of data center capacity [being built], all under the belief that it will make AI smarter. [He doesn’t] think that’s necessarily going to play out that way.”
Ron shared, “It’s not like software. When you upgrade software, you get more functionality. With AI, some barriers are tough to overcome when you hit them. [He thought] we saw that with [GPT-4 and GPT-5]. We expected a huge leap, and we didn’t get it.”
Ron closed, saying, “ChatGPT was [an undeniable breakthrough]. [He remembered that] in April 2023, [he] would go to conferences, and people would say, ‘Wow. If [GPT-4] is like this, I couldn’t imagine what 5, 6, and 7 are going to be like.’ And it doesn’t work that way.” He doesn’t claim to be an AI expert; he reiterated that he’s a journalist, and that he talks to his people. “The individuals [he] trusts say it doesn’t move in a straight line. There are detours where you hit a barrier, and it’s difficult to overcome that barrier. There’s this belief that brute force is the answer, but that won’t get us past those bumps.
“We’re going to conclude not too far into the New Year that maybe we need to put the brakes on. [He doesn’t] think it’ll be like an AI Winter, where it will be a decade before we return to AI. [He] thinks it’s going to be an AI November, where the masses will hunker down and rethink their strategies, because right now the hype doesn’t match the reality.”

Tara Seals, Managing Editor, News, Dark Reading
Tara, who has 25-plus years of experience as a journalist, analyst, and editor in the communications and technology space, predicted that “2026 will see a market correction for AI in security, and possibly even a bust cycle equivalent to the dot-com bust of the early 2000s.”
Tara leaned in, sharing, “Initial use cases are being either proven out and adopted, or trialed and dropped, and the initial irrational exuberance and appetite for adopting anything AI is starting to fade. Companies are starting to realize that AI is not a panacea for all that ails security, and can’t be deployed everywhere willy-nilly without significant planning and careful thought about how it augments (not replaces) human security. Expect a more measured adoption approach in 2026.”

Tony Bradley, Editor in Chief of TechSpective, and Senior Contributor at Forbes
Tony is a prolific writer across a diverse landscape of media outlets, as well as a published author and host of the TechSpective Podcast. He has interviewed some of the most influential voices in technology.
AI Will Turn Against Its Own Network in 2026
Tony has two concerns for 2026, both security-related. First, he asks, “We’ve spent years warning about insider threats, but what happens when the ‘insider’ is your own AI?”
He goes on to explain, “As organizations integrate generative AI into their cybersecurity stack, attackers are already finding ways to jailbreak and prompt-inject these tools from the inside. We’re not talking about brute force or perimeter attacks. These assaults will reprogram your defenses—using your AI to suppress alerts, alter logs, and even gaslight your own security team with fabricated reports.
“It seems both ironic and inevitable. In the rush to deploy AI as a silver bullet, we’ve introduced a new attack surface with fewer checks than ever. Companies betting on ungoverned AI to protect them may soon find they’ve trained their own saboteur.”
AI Model Poisoning Will Be the New Propaganda Machine
He’s not only concerned with threats from within, though. “Nation-state cyberattacks are nothing new—but AI will supercharge their tactics in chilling ways. We’ve already seen a primitive version of this with the weaponization of disinformation by Fox News and other right-wing media outlets, where false narratives are strategically seeded to destabilize public trust and fuel division.”
“Now imagine that same tactic at the speed, scale, and subtlety of AI,” he warns.
Tony believes we’ll see adversaries begin targeting the foundation models that drive our information flows, “poisoning training data, manipulating outputs, and inserting invisible biases into the tools people increasingly rely on to understand the world.”
The result will be, as he puts it, “a reality distortion field. And unlike traditional propaganda, this won't require a broadcast network or an audience. It will be embedded, ambient, and incredibly hard to trace.”

Jon Swartz, Senior Content Writer, Techstrong Group
Jon has been reporting on tech since 1987 and has been nominated for the Pulitzer Prize four times.
Jon believes “Artificial intelligence (AI) experimentation is about to end.”
His prediction is that industry leaders are sending a clear message: The days of flashy demos and pilot programs are over. “What's coming instead is a reckoning,” he says, “one that will separate companies building sustainable AI systems from those merely chasing headlines.”
That’s a wrap from Team Highwire for 2025! We’re excited for all the possibilities 2026 will bring. Until then, we wish you a joyful and safe holiday season. Happy Holidays!