2025 AI Forecast: Serious Insights on AI and the Future of Work
- Executive Summary: 2025 Industry Analysis on AI Integration and Impacts
- 2025 AI Forecast: Serious Insights on AI and the Future of Work
- Experience: AI everywhere, and I'm confused
- Chasing ROI
- Agents: The next rung up the hype ladder
- Biased AI: The rise of ideological LLMs?
- Sustainability: More energy before more energy efficiency
- Data: Enterprises still washing their RAGs
- Open Source: Looks good until it’s all on you
- Killer App: Scaling success will lead the killer app discussion, but perhaps we should be looking harder at hard problems
- Regulations: Test the EU
- Trust: If you steal my stuff, can I trust you to talk to my customers?
- Security: An AI arms race to protect and attack AI’s huge surface of vulnerabilities
- Hardware: AI PCs become dominant and they might not be x86
- Robots: More common but not The Jetsons yet.
- Acquisitions: Consolidation, consolidation, consolidation
- The Red Lines on the Forecast Map
Executive Summary: 2025 Industry Analysis on AI Integration and Impacts
Key Trends and Challenges in AI Evolution
The 2025 workplace will be a hybrid reality for many organizations, building on post-pandemic adjustments. AI will dominate headlines, influencing layoffs, training needs, and workforce dynamics, while management struggles with integrating digital labor. Workers and managers alike will grapple with tool proliferation and undefined AI applications. AI fatigue, struggles with data preparedness, and slow movement toward redefining roles and practices may undermine widespread success.
AI Everywhere and the "Digital Labor" Revolution
AI's expanding footprint introduces both opportunities and friction. Workers face a deluge of tools and inconsistent guidance, while middle managers must balance human teams and autonomous digital agents. AI-driven decision-making challenges traditional hierarchies and roles, creating friction in workforce adoption.
Navigating Subscription Overload and AI Fatigue
The rising costs of AI tool subscriptions strain small businesses. Organizations must combat fatigue by funding training, establishing clear usage guidelines, and fostering collaborative environments for ethical AI deployment.
Security and Ethics in the AI Arms Race
AI's vulnerabilities, from training-phase attacks to operational risks, require organizations to adopt comprehensive security measures. Open-source AI presents additional challenges in accountability and talent acquisition, necessitating robust frameworks and rigorous governance. Ethical concerns, from biased AI outputs to environmental impacts, remain unresolved.
Scaling Success and Addressing "Killer App" Myths
The search for transformational AI applications will focus on iterative, high-value projects in fields like climate science and computational biology. Commercial packaging of successful AI solutions will drive market growth, particularly in marketing, education, and engineering.
Global Regulatory Shifts and IP Challenges
Regulatory frameworks like the EU AI Act emphasize transparency and accountability, while U.S. regulations aim to balance innovation and ethical use. Intellectual property lawsuits around training data and generative outputs will shape vendor practices and consumer trust.
AI's Economic and Organizational Impacts
ROI-focused implementations may limit competitive differentiation as companies chase similar efficiencies. Long-term success depends on fostering innovation, enabling disruptive ideas, and leveraging AI for transformative goals.
Technological Advances: Robotics, Hardware, and Open Source
Robots will extend beyond manufacturing into healthcare and logistics while AI PCs become mainstream, shifting hardware dynamics. Open-source AI, while promising democratization, remains constrained by adoption costs, talent shortages, and security challenges.
Market Consolidation and Strategic M&A Activity
AI-driven mergers and acquisitions will accelerate, driven by talent acquisition, data access, and technology integration. Strategic investments in AI startups and pure-play firms will shape the competitive landscape, reinforcing partnerships between established tech giants and emerging innovators.
2025 AI Forecast: Serious Insights on AI and the Future of Work
Experience: AI everywhere, and I'm confused
The 2025 work experience will build off the post-pandemic triumphs and frustrations. Hybrid work, for the most part, will be settled as the de facto work mode for many organizations. Some "get-butts-in-your-seats" executives will still flex their flexible benefit packages and withhold promotions, but the notoriety of their intransigence won't garner the headlines it has in the past.
AI for all and AI everywhere
AI will continue to be the headline as more firms announce AI-induced layoffs, expect workers to gain skills while most offer no training, and fail to support employees frustrated by the lack of guidance on how to integrate AI into their work.
Beyond the lack of integration guidance will come an onslaught of new tools, all of which workers will be expected to master. Workers will have to judge which response is most relevant when more than one is available, and which tool or tools to choose for which kind of work.
While 2025 may see some increase in the integration of AI expectations and metrics built into job descriptions, most job descriptions still won't incorporate AI by year's end.
Managing digital labor
Managers will face their own issues when asked to manage "digital labor" that will not strike and will not take unplanned days off, but may still refuse to listen to their instructions. Coordinating more autonomous digital agents will become middle management's challenge in 2025, while copilots and more collaborative agents challenge line workers.
It’s getting noisy in here
For the AI dialog experience, organizations will be glad they invested in headsets and that their teams have become accustomed to verbal repartee on video conferences. AI is about to get noisy as people talk to it more than they type. It will take a while to get used to, but talking to AI in business settings will become less novel as the year progresses.
AI subscriptions come for entrepreneurs' wallets
For entrepreneurs without large-enterprise deep pockets or access to the AI deals bundled with enterprise licenses, subscriptions to AI tools will start to add up. These will mostly take the form of "pro"-level subscriptions to tools like ChatGPT, Gemini and Claude so that they can be more effective partners. You can't master a tool you don't use. All manner of other apps will either require an "AI" subscription for advanced features or charge users tokens for access to those features. Some solopreneurs may need to choose between ChatGPT Pro and Netflix.
Combating AI fatigue
Working long hours drains people over time, even in jobs they enjoy. Working long hours to incorporate new ways of working and new tools with less-than-stellar support is even more draining.
Organizations that want to combat AI fatigue need to add the following practices to every manager's portfolio:
- Fund learning resources and give people time for training on AI tools and skills.
- Give clear guidance where possible on which tools to apply to what kind of work. Where clear guidance isn't available, work with employees to build a knowledge base that will lead to guidance.
- Be transparent about which tools are used and about any modifications made to those tools, such as RAG repositories or in-house guardrails. Don't make people guess about how generative AI systems have been engineered (if they have been engineered).
- Work with managers to redefine their roles and the roles of their staff.
- Co-create the learning environment by asking teams to participate in data discovery and governance and to explore ethical considerations for the use of AI in their work, how AI should be integrated into workflows, how to measure the impact of AI on work and workers, and how best to communicate successes and lessons learned.
If organizations adopt these guidelines, they will likely enjoy a more positive experience with AI. As an analyst and scenario planner, though, I'm skeptical that 2025 will be the year that organizations decide to redefine themselves for AI as they spend most of their time dealing with other issues in this forecast, like data preparation and security.
Chasing ROI
Organizations that chase ROI will be in a tough spot in 2025. On the one hand, they will be overly cautious, perhaps missing out on discovering their killer app while they transform their call centers and redecorate them with low-hanging fruit. Chasing ROI will homogenize use cases, often removing the competitive advantage because all the competitors are chasing the same use case and achieving the same ROI.
On the other hand, if they don't chase ROI, they will likely find themselves answering to finance about growing bills for AI experiments that don't seem to be going anywhere. See the Killer App section for ideas about how to bound a big problem in an area that could become a competitive differentiator.
AI is a transformational moment not just for operations and knowledge work but for finance as well. Finance must build models that track the often-circuitous routes to returns from innovative ideas. Tallying a million little copilot interactions will probably add up to enough savings to justify the MS365 renewal, which is their intent, but not enough to say AI has achieved its ROI. See our work on The Serendipity Economy to explore alternative economic models that instill the patience to recognize long-duration returns.
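As a purely illustrative aside, the kind of model finance will be asked to build might start as nothing more than the back-of-the-envelope calculation below; every figure is an assumption, not a benchmark.

```python
# Back-of-the-envelope model: do scattered copilot savings cover an
# enterprise license renewal? Every figure below is an assumption.

interactions_per_year = 1_000_000   # "a million little copilot interactions"
minutes_saved_each = 2.0            # assumed average time saved per interaction
loaded_cost_per_hour = 60.0         # assumed fully loaded labor cost (USD)

savings = interactions_per_year * (minutes_saved_each / 60) * loaded_cost_per_hour

seats = 5_000                       # assumed licensed seats
addon_cost_per_seat = 360.0         # assumed annual AI add-on cost per seat (USD)
renewal = seats * addon_cost_per_seat

print(f"Estimated savings: ${savings:,.0f}")
print(f"License renewal:   ${renewal:,.0f}")
print(f"Net:               ${savings - renewal:,.0f}")
```

Even when a tally like this clears the renewal hurdle, it says nothing about transformation, which is the Serendipity Economy's point.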
While productivity is the north star for cost reduction, it has its limits. One of those is a lack of time for discovery and experimentation. If AI becomes an efficient partner and removes most of the labor costs, who will be left to recognize the need for disruption when efficiency grinds current products and processes into irrelevancy?
Agents: The next rung up the hype ladder
In 2025, AI agents are hyped as becoming a transformative force in enterprises, reshaping workflows and redefining workers' roles. Agentic AI systems (marketing, please!), sold as capable of independent thought, action, and optimization, aim to move beyond simple task automation to become active collaborators and decision-makers.
Not everyone defines agents in the same way. Salesforce sees them as helpers and assistants, though CEO Marc Benioff also refers to them as digital labor, which I will get back to. Others, like Rob Wilson, the founder of AI agent orchestration platform OpenReach.ai, see them ultimately creating enterprise systems on the fly after they have displaced existing enterprise software. That's a pretty big gap, one that will drive a lot of industry dialogue over the next several years.
What I do know, and I think Benioff acknowledges, is that organizations aren’t ready for digital labor. They don’t know what it is or how to manage it. As for Wilson’s position, agents replacing enterprise software is akin to renewable energy replacing fossil fuels. Battles will ensue.
Common to both paths, at least for now, are agents that leverage existing LLM infrastructures, albeit delivering services in new ways. If that is the case, wild and random agents going off and collaborating on their own could rack up some big invoices from AI vendors. Cost models and cost containment approaches must be developed.
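As a thought experiment, cost containment could look something like the sketch below: a spending guard wrapped around an agent loop. The class, rate, and token counts are invented for illustration; no vendor's real API or pricing is referenced.

```python
# Sketch of agent cost containment: a budget guard wrapped around an
# autonomous loop so it halts before racking up a big LLM invoice.

class BudgetExceeded(Exception):
    pass

class BudgetGuard:
    def __init__(self, max_usd: float, usd_per_1k_tokens: float = 0.01):
        self.max_usd = max_usd
        self.rate = usd_per_1k_tokens
        self.spent = 0.0

    def charge(self, tokens: int) -> None:
        self.spent += tokens / 1000 * self.rate
        if self.spent > self.max_usd:
            raise BudgetExceeded(f"spent ${self.spent:.2f} against a ${self.max_usd:.2f} cap")

def run_agent(task: str, guard: BudgetGuard, max_steps: int = 20) -> None:
    for step in range(max_steps):
        tokens_used = 1_200 + 300 * step   # stand-in for a real LLM call's usage
        guard.charge(tokens_used)          # every step must pass the guard
        # ... plan / act / observe would happen here ...

guard = BudgetGuard(max_usd=0.50)
try:
    run_agent("reconcile expense reports", guard)
except BudgetExceeded as err:
    print("Agent halted:", err)
```

A guardian agent, in this framing, is just a more capable version of the guard: a supervisor with authority to stop the loop.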
And then we have trust. Many, including Benioff, talk about guardian agents who oversee agents to ensure they don’t go off the rails. I’m assuming they could also help manage costs by keeping query iterations to a minimum. I am much less clear on what the hierarchy looks like, how many levels it is, and how and where trust gets embedded and verified. How will I know, for instance, if I trust a guardian agent to double-check my expense submission? What if it gets one item wrong? Do I stop trusting it and ask for a new one? How do IT departments manage trust bugs?
I recommend that the agent advocates who have not yet read Marvin Minsky's The Society of Mind find a copy and broaden their perspectives.
On the positive side, agents might get to some of the last mile tasks like taking a Microsoft Word document and turning it into a fully realized blog post that reflects my site’s visual style, selecting the appropriate categories, managing SEO, publishing and publicizing the post. That’s the promise. I’m waiting. I hope an agent calls me soon to tell me it’s ready for an interview.
Of course, digital labor raises the specter of job displacement as AI agents threaten roles previously held by humans. While middle managers may see themselves as a particularly ripe target for replacement by AI agents, managing agents may offer an opportunity for role redefinition. Regardless of middle management's prospects for survival, "real labor" (people) will push back against agents as they see work once considered distinctly human repositioned to agents.
The rapid adoption of AI agents introduces ethical and security risks. If AI systems are not properly monitored, they may produce biased or inappropriate results. AI agents may also be used to manipulate employee moods and behaviors for profit, not to mention being used to manage social engineering attacks. AI agents also introduce a new threat surface in which malicious actors can exploit vulnerabilities or abuse them, raising the risk of data breaches.
Many products will be marketed as including or implemented as “AI agents.” Agent marketing will lead to confusion and skepticism. As suggested above, agents will likely arrive in more than one form and from more than one vendor. AI agent architectures, interoperability, and other factors, like compatibility, must be considered before implementation.
Agents will also likely create a Bring-Your-Own-Agent (BYOA) market at work, which IT and data leaders need to monitor. BYOA should drive early engagement with the technology to prevent widespread adoption of external tools that risk exposing internal data without the proper safeguards. Lessons learned from the Internet, social media and other technologies should be applied. Agents will further complicate an AI landscape already populated by easy access to standalone tools via apps or the web, as well as AI on mobile devices.
Despite Salesforce touting millions of agents already executing globally, 2025 will be a year of getting to know agents, not a year where they universally disrupt people’s work. It should be a year, however, when people think about them deeply, consider the implications, and prepare for how they will be deployed in the future.
Biased AI: The rise of ideological LLMs?
AI cannot help being biased, given that its training sets are filled with human content, and all humans demonstrate some bias, whether explicit and harmful or implicit with unintended consequences.
Imagine, however, that ideologies build AIs with bias as a purpose, to provide the answers one should have rather than vetting responses for neutrality. There is no reason this can't be done.
Our 2035 scenario work suggests that we could see early versions of such systems arriving in 2025. The ethics discussion within AI is only of interest to those who adhere to ethical frameworks. Parties that already flout ethics will not be persuaded by arguments from those they already dismiss.
Sustainability: More energy before more energy efficiency
Sustainability in AI will confront those who train AI with ethical dilemmas around using energy and the natural (and unnatural) resources required to generate that energy.
Sustainability will also include efforts by emergent firms to create more energy-efficient models, such as those being explored at Liquid.ai that leverage different approaches to training.
AI firms are also looking at bringing efficiencies to model execution, such as DroidSpeak, an experimental language that allows LLMs to communicate directly without translating output into any human language (see DroidSpeak: KV Cache Sharing for Cross-LLM Communication and Multi-LLM Serving on arXiv).
The biggest impact, however, will likely come from AI and tech firm investments in electricity from traditional and non-traditional sources, like nuclear. All of the major AI vendors have already made huge investments in energy infrastructure. While models that challenge their underlying assumptions about energy use have the potential to greatly reduce consumption, they are not likely to have an effect in 2025, which will continue to see energy infrastructure at the forefront of tech investment.
Data: Enterprises still washing their RAGs
Enterprises will continue to be challenged by the readiness of their data for AI ingestion. Even with more sophisticated models, sending a folder of PDFs will not result in a good Retrieval-Augmented Generation (RAG) experience.
RAG and related technologies, like knowledge graphs, will eventually transform most information retrieval experiences, combining the generative capabilities of AI models with the ability to pull real-time information from external sources. RAG will benefit industries that rely on accurate, up-to-date insights, mitigating the problem of AI inaccuracies by limiting responses to those retrieved from trusted sources.
RAG often requires deep reengineering of source data, as well as quality audits, which delay implementation. As the focus shifts from general-purpose AI to purpose-built tools designed for specific functions, businesses will prioritize the depth and purpose of their AI implementations. Enterprise historical data has not been a priority for many firms. RAG forces them to grapple with their content more intimately than ever before.
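To make the moving parts concrete, here is a minimal, self-contained sketch of the retrieval step in RAG. Real systems use embedding models and a vector store; the bag-of-words scorer and the tiny "trusted corpus" below are stand-ins to show the shape.

```python
# Minimal sketch of RAG retrieval: score trusted passages against the
# query, keep the top k, and ground the prompt in them.

import math
import re
from collections import Counter

documents = [
    "Refund requests must be filed within 30 days of purchase.",
    "Enterprise support contracts renew annually in January.",
    "All PDFs must be converted to clean text before ingestion.",
]

def vectorize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    q = vectorize(query)
    return sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

query = "When can a customer ask for a refund?"
context = "\n".join(retrieve(query))
# The grounded prompt below is what would be sent to the generative model.
print(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```

The reengineering burden in real deployments comes from producing a `documents` collection that is actually clean, current, and trustworthy, which is exactly where most enterprises are stuck.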
AI-generated synthetic data will play a crucial role in various sectors, particularly in national security, where it will be used for simulations. These simulations will enable governments to train workers and test strategies related to war, economic engagements, and cyber attacks on critical infrastructure.
Large commercial firms, particularly in engineering, healthcare and financial services, will also employ synthetic data for research, testing and development in order to reduce the likelihood of compromising personal data.
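For illustration only, a synthetic-data fixture can be as simple as the sketch below. The field names, distributions, and prevalence figure are invented; production work would use statistical fitting or generative models validated against the real source.

```python
# Illustrative synthetic-data fixture: records that mimic the shape of
# sensitive data without describing any real person.

import random

random.seed(42)  # reproducible test data

def synthetic_patient(i: int) -> dict:
    return {
        "patient_id": f"SYN-{i:06d}",               # clearly non-production IDs
        "age": max(0, int(random.gauss(52, 18))),   # assumed age distribution
        "systolic_bp": int(random.gauss(125, 15)),  # assumed blood-pressure distribution
        "diabetic": random.random() < 0.11,         # assumed prevalence
    }

dataset = [synthetic_patient(i) for i in range(1_000)]
print(dataset[0])
```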
As for "real data," companies with exclusive, high-quality datasets will have a competitive advantage and an obligation to the data sources (often people) and their shareholders to keep that data safe.
While 2025 will see progress, the vast majority of firms will remain mired in their data, which will limit their ability to meet ambitious AI goals.
Open Source: Looks good until it’s all on you
Just because the base code is easy to license, even free, doesn't mean a solution will be.
Managing the security of open-source AI models will prove a major challenge and cost. Determining whether threat actors have poisoned open-source LLMs with exploitable vulnerabilities will be the responsibility of those who adopt the models, along with the community. The business question is: "Who is accountable if something happens?" With open source, it is usually the adopter.
Even in non-malicious circumstances, developers using AI coding tools to accelerate productivity may inadvertently introduce vulnerabilities through AI-delivered code. Organizations must establish formal approaches to operationalizing advanced AI, including rigorous code scans and security protocols.
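One concrete control, offered here only as a sketch, is verifying model artifacts against internally approved checksums before they are ever loaded. The filename and digest below are placeholders.

```python
# Sketch of one supply-chain control for open-source model adoption:
# refuse to load a model artifact unless it matches a checksum your
# own review process approved.

import hashlib
from pathlib import Path

APPROVED_DIGESTS = {
    # Populated by an internal review, not copied from the download page.
    "example-model.safetensors": "0123abcd...",  # placeholder digest
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MB chunks
            digest.update(chunk)
    return digest.hexdigest()

def verified(path: Path) -> bool:
    expected = APPROVED_DIGESTS.get(path.name)
    return expected is not None and sha256_of(path) == expected

artifact = Path("example-model.safetensors")
if artifact.exists() and not verified(artifact):
    raise SystemExit(f"Refusing to load unverified artifact: {artifact}")
```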
Another challenge lies in the development and deployment of AI models. While open-source projects will continue to play a critical role in the tools used for training AI systems, other aspects of model development, such as highly optimized inference systems and integrating models into production environments, may be better suited to a closed-source approach. Therefore, open-source adopters may depend on external or proprietary solutions for certain aspects of the AI lifecycle.
Open-source adopters may also face challenges related to data quality and the availability of talent. The success of AI applications is highly dependent on data quality. Organizations must deliver the right data strategy to obtain high-quality, low-cost, safe and secure data with open-source AI.
Additionally, the rapid pace of AI advancement means that the skills required to build, manage, and maintain open-source AI systems may be in short supply. AI vendors and very large firms will likely be able to out-compete even some of the larger firms for the talent required to build and deploy full-stack AI solutions.
The talent shortage will likely lead to the continued development of AI consulting firms that can spread their own limited talent pools across multiple projects. Many of these firms may find it more lucrative to partner with AI vendors rather than master open source, because they can fall back on the vendor's knowledge and talent, along with at least technical and operational accountability.
Finally, there is a lack of standards across the open-source environment, and with that, a general lack of clarity regarding the ethical implications of how a given model was built and trained, including the data used for training. This can create challenges regarding transparency, fairness, and accountability.
Open-source AI adopters must invest in security, focus on data quality, and employ the technical expertise to manage open-source models. Open source will require new processes and practices to be developed, deployed and adopted before any model training occurs. Open source may one day democratize AI, but it may arrive in a proprietary wrapper, much like Microsoft's Edge browser or the VMware version of an Apache HTTP server.
Killer App: Scaling success will lead the killer app discussion, but perhaps we should be looking harder at hard problems
Agents will not be the killer app. Agents are just another tool. The killer app may involve agents, but implementing agents without a well-considered use case is just as bad as implementing any generative AI without a well-considered use case.
Rather than the killer app being some overwhelmingly successful universal solution or millions of tiny value adds throughout the day, we have defined a set of characteristics that could lead to significant value from AI.
These characteristics are:
- No deadline. This means that no immediate action is dependent on the AI.
- A lot of data that could be used to inform the solution. Because these types of large, deep explorations are iterative, the data need not be imported or configured all at once.
- The foundation is slow to change. Unlike real-time transactions, where context is in constant flux, some problems rest on foundations, like physics or biology, that change more slowly than user sentiment or the technical details of the latest product.
- The answer doesn't have to be right, or even close, initially. This characteristic focuses on the quality of the result. It can be wrong to start with. The system and those employing it can learn, adapt, and apply different data. The lack of a deadline empowers this characteristic. The system need not interpret an order immediately or diagnose a disease in a living patient. What it can do is iterate.
For instance, Mirmex Motor's micro motors, designed with the use of AI, offer a new solution for high-performance industrial and surgical precision-powered tools and active prostheses. These motors reflect a decade of research. AI helped; more modern AI might have brought a solution sooner, but it would still have taken time to test options. What the market needed was an effective solution (one that met several design parameters) more than just a solution that worked.
Some problems take time, and making AI a partner in high-stakes, big-goal projects will likely prove more of a killer app than artificial general intelligence, which, if even possible, will mesmerize us just as much as it will disappoint us. Productivity often overwhelms patience. For some of the big problems, patience will prove more critical.
Some of the problem spaces where this approach to AI development may pay off include:
- Scientific Hypothesis Generation
- Climate Science
- Art and Design Ideation
- Language Evolution Modeling
- Creative Writing Assistance
- Urban Planning Simulations
- Algorithmic Investment Research
- Educational Content Development
- Exploratory Computational Biology
- Foresight and Scenario Planning
- Virtual World Building
- Water Resource Management
- Transport Network Design
- Nuclear Reactor Design
- Generative Design and Engineering
- Energy Systems Modeling
All of these areas represent very high risk for poor solutions. This extended exploration approach mitigates risk by giving people time to test solutions before applying them. In many cases, solution testing may include adversarial AIs that test the robustness of suggestions made by other AIs, another form of taking time to reflect.
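As a toy illustration of this propose-and-attack pattern, and nothing more, the sketch below pairs a stand-in generator with a stand-in adversarial test and simply lets iteration do the work. Both functions are placeholders for real AI components.

```python
# Toy illustration of the propose-and-attack pattern: one stand-in
# model proposes candidate designs, a stand-in adversarial test tries
# to break each one, and only survivors accumulate.

import random

random.seed(7)

def propose() -> float:
    return random.uniform(0.0, 1.0)   # placeholder candidate "design"

def attack(candidate: float) -> bool:
    return candidate < 0.9            # placeholder test; True means it failed

survivors = []
for _ in range(10_000):               # iteration is cheap when nothing is waiting
    candidate = propose()
    if not attack(candidate):
        survivors.append(candidate)

print(f"{len(survivors)} of 10,000 candidates survived adversarial testing")
```

The point is not the toy numbers but the shape: with no deadline, rejection is cheap, and the system can afford to be wrong for a long time.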
Another type of "killer app" will have a more immediate impact as vendors package successful use cases into commercial offers. An early example is marketing, where AI can be used reliably for sentiment analysis, campaign development and management, content personalization and copywriting. Several emergent and existing vendors are leveraging successes in marketing to offer independent platform plays built atop large language models and other technologies.
2025 will likely see a cascade of start-ups focused on repeatable AI solutions with broad commercial appeal. In some cases, the emergent vendors may disrupt existing platforms and vendors in areas like marketing, content retrieval and analysis, and education.
Regulations: Test the EU
In 2025, the regulatory environment for AI is expected to become more defined, with a focus on ensuring responsible and ethical use of AI technologies. There will be increased emphasis on transparency, ethical practices, and accountability as governments worldwide take action to establish guidelines for AI systems.
The EU AI Act will serve as a significant example, classifying AI applications by risk level and imposing stricter requirements on high-risk uses, such as facial recognition and healthcare diagnostics. The Act emphasizes transparency, ensuring that people are aware when they are interacting with AI, and will also require AI providers to disclose their training data sources and share results of model evaluations, including adversarial testing.
In the U.S., the approach is expected to prioritize innovation and security, with the goal of preventing bias and promoting ethical practices while still encouraging progress. It is predicted that the U.S. administration will remain friendly to mergers and acquisitions. Unlike the EU, which is setting firm regulations for AI development and use, the U.S. is expected to take a more innovation-forward approach.
The U.S. will likely see more legal activity related to copyright infringement and harmful AI outcomes, which will influence the laws that emerge around training data, copyright, data privacy, and other issues. There is also an expectation that AI regulations will largely come from existing laws rather than creating broad new AI-specific regulations.
In the U.S. and the EU, there will be a focus on holding AI systems accountable for their decisions, ensuring that AI systems are used in a socially beneficial way.
Trust: If you steal my stuff, can I trust you to talk to my customers?
Trust will remain a big issue for AI in 2025. It will be characterized by several attributes: the perceived transparency of vendors, which will be influenced by how the lawsuits related to intellectual property (IP) play out; how AI gets monetized; the policing of, or willingness to be policed by, those whose AI does harm; how far vendors and implementors stray from the promise to keep humans in the loop; and continued innovation to keep solutions relevant.
In 2025, and likely well beyond, trust issues will also arise from the rapid pace of innovation. As we saw at the end of 2024, the perception of OpenAI as the right play was undermined by the ascension of Alphabet's Gemini toward the top of the AI hierarchy in accuracy and performance.
Like college football rankings, this won't be the last reordering. Companies applying AI will spend millions of dollars with AI vendors. They will be hesitant to invest in second-best solutions.
So, what happens when assurances about future superiority go out the window in practice? A few of those experiences will leave CIOs or CAIOs questioning their choices and their bosses questioning their judgment.
The implications and findings of lawsuits against various AI vendors and the willingness to take corrective action will also impact how trusted or trustworthy a brand appears to be.
Deep Dive on IP Lawsuits: Copyright for books, images, music and other content
In 2025, AI will face several legal challenges, primarily concerning copyright infringement. Judges have consolidated many cases into others, such as Authors Guild v. OpenAI and Microsoft, and Tremblay v. OpenAI, to reduce court time and the potential for disparate interpretations by ruling on similar cases together. 2025 may see some results, but many cases may take years to resolve.
Similarly, Concord Music Group v. Anthropic has become the cornerstone case on music and AI, while Getty Images v. Stability AI focuses on images.
Lawsuits will likely increase against LLM providers who cannot guarantee that copyrighted data was not used to train their models, forcing companies to choose providers based on their transparency about data usage and data protection.
The other area of legal activity involves harm caused by AI. The EU AI Act will serve as a model for many regions, setting requirements for high-risk AI applications. Additional legal precedents will complement the EU AI Act, helping define the liability for harmful outcomes caused by AI-driven decisions, including discrimination in hiring, digital redlining, and libel.
Security: An AI arms race to protect and attack AI’s huge surface of vulnerabilities
In 2025, organizations implementing AI will face significant security challenges from the complexity of AI systems and the evolving threat landscape. These challenges will require a proactive and multifaceted approach, combining new security tools and techniques with a strong emphasis on governance and best practices.
We characterize this generally as an AI arms race. Attacks will be more innovative and more virulent because of the application of AI to design and execute attacks. Social engineering-based security breaches will become broader as AI lowers the entry bar and increases the quality of attacks.
One key challenge will be the increased attack surface that AI introduces. AI systems, particularly those using Large Language Models (LLMs), are vulnerable to attacks at various points in the AI lifecycle, from the training phase to deployment and operation.
Threat actors may attempt to poison LLMs with vulnerabilities that can be exploited, or they may seek to inject malicious code into the AI pipeline to manipulate the model's output or reveal sensitive data.
The rise of AI agents creates a new threat surface, as these systems, capable of autonomous action, can be subverted by malicious actors. Insider threats may also increase as AI agents become more prevalent and malicious internal actors find new ways to abuse them.
Another significant challenge will be protecting data used to train and operate AI systems. Training data may leak, exposing private data and creating an opportunity for intellectual property theft, including the theft of the AI model itself.
Organizations must establish rigorous controls around how a model was trained and what data was used, including audit trails and governance structures. Data privacy and residency will be a particular concern for governments and healthcare organizations, which must comply with strict regulations regarding the handling of sensitive data. This will also be a concern for any company that does business in areas where data is protected by law.
To address these security challenges, organizations will need to adopt a formal and proactive approach to AI security. This will involve several elements:
- Secure AI Frameworks (SAIF): These frameworks offer best practices, tools, and protocols designed to protect AI technologies against adversarial attacks. SAIF frameworks focus on robust model training, adversarial testing, and continuous monitoring.
- AI Red Teaming: Organizations must ramp up efforts to protect AI models from vulnerabilities, and AI red teaming will become a critical practice.
- Data security posture management (DSPM): This comprehensive approach to continuously monitoring and improving how an organization protects its data will help reassure organizations feeling pressure to use their data in complex AI models.
- Third-party guardrails: Tools like Guardrails AI are becoming essential for embedding safety checks directly into AI workflows. These guardrails monitor AI behavior to ensure outputs meet ethical and business standards (see the sketch at the end of this section).
- AI-driven security tools: New AI-based security tools will be developed to improve threat detection and incident response. AI security copilots will also emerge, applying AI to the high volume of potential incidents that security operations centers face. These tools will help human security teams be more effective but also raise new concerns, such as those relating to copyright.
- Security Data Lakes: These data lakes, which store large volumes of security-related data from diverse sources, will be essential for advanced analytics, threat detection, incident response, and long-term data retention. Security data lakes also support more modular and data-centric security strategies.
- Formal approaches to AI governance: Organizations must establish rigorous, formal approaches to how advanced AI is operationalized. This includes establishing controls around how a model was trained and what data was used, including auditable trails, certifications, and governance structures. Industry standards, such as ISO 42001, are expected to become table stakes.
- Employee Training: Workers will also face a learning curve as they adapt to working with AI tools. They will need to be upskilled in areas such as data-driven strategic thinking.
Organizations will also need to consider that AI is not always accurate, leading to the need for security practices and responses that curtail and mitigate inaccuracies. They will need to decide where the risk to customers, the environment, or the organization's brand reputation is intolerable, suspending investment in those areas until AI reliably produces better results.
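To make the third-party guardrails idea above concrete, here is a minimal, illustrative output check. The rules and patterns are invented; they are not drawn from Guardrails AI or any other product.

```python
# Minimal, illustrative output guardrail: validate a draft response
# against business rules before it reaches a customer.

import re

def check_output(text: str) -> list[str]:
    violations = []
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", text):
        violations.append("possible SSN in output")
    if re.search(r"\bguaranteed?\b", text, re.IGNORECASE):
        violations.append("unapproved financial guarantee")
    if len(text) > 2_000:
        violations.append("response exceeds policy length")
    return violations

draft = "We guarantee a 12% return for customer 123-45-6789."
problems = check_output(draft)
if problems:
    print("Blocked; route to human review:", "; ".join(problems))
```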
Hardware: AI PCs become dominant and they might not be x86
Microsoft's Copilot+ PC branding needs to end, and hopefully, 2025 will see it disappear as quickly as so many New Year's resolutions. The messaging, at least to me, sounds like only smart and creative people need a new Copilot+ PC, the term that replaced "AI PC," which I'm guessing was too generic for my former colleagues in Redmond.
I believe that everybody needs an AI PC. Why would they buy anything else? I know, cost, but buying any other type of computer is not just tossing money at a loss leader; it also represents a significant underinvestment in personal capability that will frustrate and hobble buyers who won't be able to leverage the latest on-device innovations in Windows 11 and its hosted apps. I'm not suggesting that everybody needs to build a personal data center, but a 4GB i5 PC is already useless for most modern applications. 2025 should be the year the low-end PC market starts to disappear.
Apple, by the way, only sells AI PCs. Every one of them has an M-series chip with neural processing units and serviceable GPUs. Not everyone will choose to buy a Mac, but they do need to choose a PC that will be able to run AI-enabled apps in order to keep them and their skills competitive.
In the meantime, Qualcomm, which is coming after Intel in more ways than one, will continue to challenge the x86 world created by Intel and AMD, reinforcing the value of RISC where Apple has shown the way. (Note that RISC isn't a new revelation either. Read our commentary here.)
On the speculative front, I have always been a fan of distributed computing. The SETI project leveraged it for years, as did a few other apps. The market is growing for devices with high-end CPUs, integrated neural processors and GPUs, plenty of memory, and often discrete GPUs, much of which will sit idle for many hours a day. I would love to see a vendor (I'm betting on Meta, given its open-source bias) create a distributed learning app that allows people to pool their AI hardware. Maybe not in 2025, but I wanted to light a virtual candle.
Robots: More common but not The Jetsons yet.
In 2025, consumer robotics is expected to move from the mundane (such as home cleaning robots) to more sophisticated applications across sectors where robots have not traditionally played a role. (Industrial robots are already common, for instance, in manufacturing. In South Korea, one in every ten workers is a robot. See Manufacturing Today.)
2025 won't look like an episode of The Jetsons. Robots, however, will likely be seen in more everyday settings. In some cities, self-driving cars are already common. Other areas where robots will likely play a role include elder care (especially in Japan), warehousing, and retail. Ordinary multitasking robots will become more prevalent in healthcare and logistics, though regulatory frameworks may keep them experimental.
The integration of generative AI into robotics will be a key driver of this shift, leading to more sophisticated and autonomous machines that can interact with people in open, public environments.
Other areas to consider for robotics include the over-emphasis on humanoid robots rather than those built for purpose. See our earlier commentary here.
There will also be a conflation of robots and AI agents in 2025 as both raise important questions about human effort, leading to debates about work, self, and meaning. This includes concerns about the nature of work and the potential for job displacement, which could also lead to social unrest and inequality if not managed carefully.
Discussions will also focus on the importance of human characteristics, like empathy and innovation, and whether algorithms can replace them, or whether people will carve out new, very human roles in the future of work.
The pushback against automation will also include marketing in support of human-centered products, such as Tropicana's "Tropcn" campaign, which temporarily rebrands the iconic company without the letters "a" or "i" in its name. (See Rebranding Takes a Stand Against AI, Highlighting Natural Ingredients.)
Acquisitions: Consolidation, consolidation, consolidation
In 2025, mergers and acquisitions (M&A) in the AI sector are expected to be driven by the need to capitalize on the rapidly evolving AI landscape and to gain a competitive edge. Companies across the tech sector, from hardware to software, will be looking to strengthen their positions in the AI space.
Key drivers of AI-related M&A activity in 2025 include:
- Strategic acquisitions: Companies will acquire others to expand their AI capabilities, particularly in areas like data center networking solutions. For example, Hewlett Packard Enterprise's (HPE) acquisition of Juniper Networks aimed to expand its AI infrastructure.
- AI talent and expertise: The demand for specialized AI skills will drive acquisitions of companies with expertise in areas such as natural language processing (NLP).
- Access to data: Companies with exclusive, high-quality datasets will become increasingly attractive acquisition targets because data is critical for training and customizing AI models.
- Distribution channels and proprietary relationships: Companies that own critical distribution channels or have established relationships in niche markets will also be attractive targets for acquisition.
- Integration of AI into existing platforms: Companies across industries will be looking to integrate AI into their existing systems, which may lead to acquisitions. For example, major cloud providers like Amazon and Microsoft have made significant investments in AI companies like Anthropic and OpenAI, respectively, to enhance their AI capabilities.
- Acquiring AI tools: Companies will acquire AI tools, such as those for GPU orchestration and enterprise-AI inferencing platforms, reflecting a push into enterprise-grade AI solutions. Nvidia, for example, has accelerated its acquisition strategy, completing five deals in 2024 to acquire AI-related technology.
As a result of these drivers, several types of companies are expected to be involved in M&A activity in 2025:
- Pure-play AI firms: Companies that specialize in AI technologies and services will be involved in acquisitions and mergers as they seek to expand their reach and market share. The need to become profitable may lead some AI labs to transition from being pure LLM providers to offering software solutions that directly serve end users, leading to market consolidation, with a few major AI research companies dominating the field.
- Cloud computing and software companies: These companies will be active in M&A to enhance their AI offerings, as they are seeing strong growth in AI-related services. Cloud computing, in particular, could see a surge in deal-making activity.
- Hardware companies: Hardware companies will be acquired for their role in providing the infrastructure for AI, particularly in data center networking solutions.
- Traditional companies: Companies in conventional fields will feel pressure to integrate AI technologies into their systems, which may drive them to seek acquisitions.
The M&A landscape will also be influenced by the financial environment. Tech valuations are expected to level off, and the need for AI solutions is intensifying, which may lead to increased deal-making activity. Additionally, the U.S. administration is expected to remain friendly to M&A, which may further fuel deal activity. Many AI firms are also expected to pursue initial public offerings (IPOs), signaling strong investor appetite for AI infrastructure.
The Red Lines on the Forecast Map
The red lines on the forecast map represent relationships between ideas that were important enough to call out. Other relationships may exist and may be added to the map over time.
Killer Apps → Agents: Some speculate that AI agents, or agentic AI, are a killer app. We see them as just another implementation. Agents will need their own killer apps.
IP Lawsuits → Regulation: The outcomes of lawsuits are likely to drive regulations, especially in the U.S.
Human in the Loop Threats → Digital Labor: Autonomous agents will question and challenge the commitment to human-in-the-loop promises.
Human in the Loop Threats → LLM "Private" Language: Machine-to-machine communication in a purely digital form (likely not human-understandable) makes transparency more complex and may accelerate the evolution of models that evolve faster than they can be tested or monitored.
Confusion & Fatigue → AI Across Devices: AI on devices will make it hard for people to disengage from AI, and it will also drive confusion at work about what it can be used for (on personal devices) and what to do when AI recommendations conflict across models.
Open Source → Enterprise Data Still Falls Short: Open-source AI will not be a panacea for those seeking to disengage from commercial providers because it will require significant investments in infrastructure, policy and practice, all starting with preparing data.
Enterprise Data Still Falls Short → Data to drive enterprise accuracy and benefit: While in the same cluster, these are co-dependent. Techniques exist to create more credible enterprise AI output, but that technology relies on good data and investments in restructuring it for AI consumption and output.
For more serious insights on AI, click here.
All images created by DALL-E 2 from OpenAI, from prompts written by the author.
Did you enjoy the 2025 AI Forecast: Serious Insights on AI and the Future of Work? If so, please like the post and share it on social media. Click a sharing button for easy sharing. Have a question or comment? Please engage in the comments section!