Future of Work Forecasts 2024: Driving Forces Shaping the Future of Work in the Next 12 Months
Cover Photo by Cottonbro Studio on Pexels.com
AI confusion. AI will be everywhere, but it won’t all be the same. As most enterprise software gains some level of AI functionality in 2024, organizations will realize that AIs from different vendors reflect different choices about data sources and usage restrictions. Suggestions and data may not correlate across systems, and some choices will make certain systems better than others at tasks like coding or data manipulation. As with all enterprise technology, it will be up to the buying organization to make sense of what works for them and what doesn’t: which AI to adopt and trust, which to use with caution, and which to abandon.
“So what?” comes for AI. 2024 will likely see the first wave, and perhaps several waves, of AI disenchantment among end users as they see the limits of freely available technology and the difficulties of making AI work with an organization’s own content (or their own). People may also simply find the generative AI experience less engaging as they discover few meaningful applications in their work and lives. Will the shine come off of AI, forcing AI vendors to find new ways to appeal in order to maintain valuations? [CBInsights reported a general slowing of AI investments in Q3 2023.]
Generative AI will struggle with value beyond the surface layer. AI will be most useful when it reaches the “last mile” of automation: when it moves beyond generating content or code from prompts to drafting a proposal from internal content, or when it can compile and test the code it writes.
On personal devices, integration with voice assistants to perform basic OS tasks, and to make local content more discoverable and malleable, falls into that same “last mile” of automation: the detailed tasks people do every day for which AI does almost nothing yet. Some of this is hard, which is why AI doesn’t do it yet. 2024 will likely start with headwinds in this area, but it may also bring some breakthroughs.
Although Apple has looked at ChatGPT the way Microsoft stared at the web browser in the 1990s, keep an eye on Apple to respond quickly with a smarter, more integrated Siri.
Responsible AI will expand to include other technology. The responsible use of AI isn’t just about AI. It’s about the responsible use of technology in general. AI may be the current poster child for technological threats, but quantum computing, pervasive sensors, and massive amounts of data about people pose just as much of a threat, in different ways. AI will act as the connector between data and sensors, and that connection offers both the most promise and the biggest threat.
The AI arms race. As much as governments, organizations, and individuals see AI as a potential threat and seek regulatory, technological, and human-based moral constraints to contain that threat, others see AI as an opportunity. Those others are bad actors who will turn AI into a new tool to create chaos, steal information, and generate emergent attack vectors against systems and society. The illegality of killing people does not stop mass shootings. The illegality of human trafficking may reduce it, but it does not eliminate it. AI regulations won’t work any better.
People will find ways to skirt regulations and technological constraints. As for morals, the lack of a universal human moral framework means there will always be people who choose to see the world through different lenses, justifying their actions as moral within their own framework regardless of how those actions are viewed by people living within other frameworks.
As with all arms races, the technology in play will be used as a deterrent to the emergent threat, and AI will ultimately be put into service to stave off threats from other AIs. That will challenge the underpinnings of transparency, responsibility, and governance: only AI can react fast enough to new threats, but the choices made during moves and countermoves will likely be opaque to the humans employing it.
AI regulation without teeth. AI will evolve faster than regulatory frameworks. I am not prone to stating unequivocal facts about the future, but I make this statement with a great deal of confidence. Because effective regulation requires details rather than generalities, efforts to regulate AI will fall short. The details of how AI works will change even as regulators publish their guidelines.
As with the moral frameworks referred to in The AI arms race forecast above, regulations will not agree, in detail, across major governance boundaries, making compliance difficult to manage. AI developers will likely create a compliance layer that offers alignment with general principles, but most will seek ways to skirt those principles through subtle approaches to implementation.
It may also prove to be in the interest of some organizations or governments to make broad statements that align with global sentiments while pursuing technological aims that contravene agreements for political or economic gain.
The term accountability is often missing from AI policy statements. And while the word may begin to appear in regulatory language, holding people, organizations, or governments accountable in a quickly evolving space like AI will prove difficult, if not impossible.
Governance will be a big part of the AI and data discussion. At a level down from the regulatory frameworks, organizations that seek to leverage AI effectively will need to recreate their data governance structures, many of which may already be less than adequate. Much of this work will focus on preparing an organization’s data for use with AI, though it will likely include data protection as an attribute.
Shortages of authentic AI talent will curtail adoption. There will be plenty of organizations and individuals espousing their ability to help implement generative AI solutions. People who actually know, those who aren’t learning as they go like everyone else, will be hard to find and expensive to hire. 2024 will expose the lack of authentic AI talent as implementations fail to reach their stated goals. Just knowing how to form prompts won’t be enough. Organizations will require a large set of new skills, including model building, data cleansing and semantic analysis, to build effective AI-enabled systems—and they will all be competing for the same small talent pool.
Large organizations and large consultancies will be best positioned to attract talent through lucrative salaries and perks. Constant shuffles will be likely among the most adept talent, which will also disrupt projects as people leave one company for another, requiring resets and assessments before work can continue at either firm. Organizations that want to succeed without building their own capability will likely turn to partners with proven capabilities to help them build AI into workflows. The boom for top-end consultancies will defer their own worker displacement from the internal use of AI. Even middling consultants will help manage and report on AI projects for clients.
AI will meet a glass ceiling. As with most new concepts, the gut reaction from directors and boards will be to put “someone” in charge of AI (as they did with quality, data, data security, the Internet, and information technology). On that level, AI will likely drive the creation of an executive role. At the same time, executives will likely focus more on control and return on investment than on using AI to help them do their jobs. 2024 will not be the year boards and senior leadership teams adopt AI as the answer to their own dysfunction.
AI costs to buyers will drop while suppliers see increases in costs. AI is expensive. It isn’t clear that the market understands just how expensive AI is to develop or run at scale. Microsoft’s high entry fee for enterprise co-pilots hints at more than a money grab. It suggests that AI isn’t a commodity business—it may be more like aerospace than operating systems. But like the media streaming business (from which AI vendors could learn but probably won’t), they will drop prices to drive adoption, allowing for large losses before they reap the profit rewards of the future. That will be good for buyers, but it will sow the seeds of long-term instability in the market as it drives supplier turnover from failures, mergers and acquisitions.
Legacy AI behemoths will feel the hot breath of emergent competitors riding up behind them. Dropping prices for a long-term profit gain works in a low-competition market. It doesn’t work in a fast-paced, highly competitive market where emergent organizations come after incumbents not one at a time but in droves. In 2024, some major IT suppliers may start to see the first hints of their core businesses coming under threat because of their AI investments.
Refocus on data. The refocus on data will prove multi-dimensional. The value of legacy data will be called into question on grounds of relevance: why manage and maintain data that no longer offers value? Retail will likely be a driver of this perspective. Other organizations, those with longer-term customer relationships and those in regulated industries, will find not just value but a need to maintain their legacy data and make it more accessible through AI-based interfaces.
As AI moves inside, organizations will use their data to train and augment models, and to leverage new approaches like retrieval-augmented generation (RAG) to bring more concise, business-relevant responses to employees querying their systems. In some cases, AI will take a back seat to data preparation in 2024.
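For readers unfamiliar with the pattern, the sketch below shows the basic shape of retrieval-augmented generation: embed a question, retrieve the most similar internal passages, and hand the model a prompt grounded in that context. The embed and generate functions and the in-memory doc_index are hypothetical stand-ins for whatever embedding model, LLM, and vector store an organization actually deploys, not any particular vendor’s API.

```python
# Minimal sketch of retrieval-augmented generation (RAG) over internal content.
# embed() and generate() are hypothetical stand-ins supplied by the caller;
# doc_index is an in-memory list of {"text": ..., "vector": ...} records
# standing in for a real vector store.
from math import sqrt


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retrieve(query_vec, doc_index, top_k=3):
    """Rank stored documents by similarity to the query embedding."""
    ranked = sorted(doc_index, key=lambda d: cosine(query_vec, d["vector"]), reverse=True)
    return ranked[:top_k]


def answer(question, doc_index, embed, generate):
    """Build a prompt grounded in retrieved passages, then ask the model."""
    passages = retrieve(embed(question), doc_index)
    context = "\n\n".join(d["text"] for d in passages)
    prompt = (
        "Answer using only the context below. If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

The point of the pattern, and the reason data preparation may take the front seat in 2024, is that the quality of the answer depends far more on what sits in the document store than on the model itself.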
Real-time data will drive investment in sensor research, acquisition, and deployments. The real-time dimension of the Refocus on data forecast requires its own entry. As AI becomes more real-time, it will require real-time data to manage the complexity of organizations. This will drive deep research into sensor technology.
While 2024 isn’t going to be the year of huge new sensor acquisitions and deployments of sensor data, it will be the year organizations start planning more aggressively for that future—both enterprises that will use the data to make better decisions and the sensor vendors who will see the development and expansion of markets. The potential for better decisions using AI will drive the need to acquire new data sources with more coverage of operational details.
Multimodal. The “chat” AI vendors tout a world where just saying something will make it so. But humans are multi-sensory, and language often proves imprecise. Multimodal AI will integrate vision and hearing into its pattern recognition capabilities.
Rather than requiring a person to describe a chart, multimodal AI will be shown the chart and asked to find a more recent version, or to convert it to data and update it. The auditory space will likely find huge applications in entertainment, but it will also make inroads into the nascent market for capturing audio in meetings and mining it for everything from action items to participation profiles.
Many standards will start to look like legacy technology. APIs and other interface standards will start to look like legacy technology as AI learns how to integrate across boundaries directly, leveraging those same interfaces at first but perhaps creating or suggesting more effective means as it navigates system interfaces. While this may seem like the most science-fictional of these forecasts, it may turn out to be the most tangible as organizations use AI to decrease their technical debt, with AI finding novel ways to do so.
Digital transformation redefined. Digital transformation focuses on the integration of digital technology across an organization. When done well, it simplifies and integrates operations and offers new value to customers. Successful digital transformation initiatives also include policy and practice changes that challenge assumptions, unleash experimentation, and get people comfortable with uncertainty. AI will transform digital transformation, changing the ways integration and simplification take place and fostering a need for different kinds of knowledge and approaches to work that will strain even the most accepting of change.
The automation trap will become visible. The rapid onset of AI will derail many plans for eliminating technical debt because it will challenge the underlying methods of those plans. This will cause IT departments to question the use of AI. Rather than managed change, AI may introduce ongoing changes that, even if explained, will likely outpace the human ability to understand or modify them.
Seeking productivity through automation may be the linchpin in losing control of IT systems as they are known today, causing disruptions not only in unique enterprise systems but also among enterprise software suppliers, as the same tactics for eliminating technical debt hand over core code to AI, resulting in systems that may no longer resemble their human-coded antecedents.
Unions and unionization will not play a major role in 2024 with regard to AI safeguards against job and pay losses as they wait for the results of the 2024 U.S. election. The U.S. election cycle will moderate many activities as businesses and individuals wait to see how liberal or conservative the government will become, or whether stalemate will remain the model. Labor, including managers and executives, will be teeing up arguments for and against AI, but they will likely wait until after the election to define their tactics.
Surprising new businesses will emerge. This forecast may sound like a platitude, but it states plainly that surprises will emerge. It is easy to forecast surprises and difficult to forecast their nature. So much time is currently being spent figuring out how to implement the common use cases for AI that the more novel applications will likely surprise markets as innovators test the edges of the new technology rather than its obvious core value propositions.
Quantum computing will make its first forays into enterprises. Quantum computing will spur quantum-safe encryption efforts in 2024, since breaking current encryption is the first and most immediate threat posed by its commercialization. Beyond cybersecurity, both protection and threats, quantum computing will likely end up with its own version of AI, distinct from the generative AI and machine learning currently being deployed. That may take years, but some specialized applications, such as molecular engineering or complex price modeling, may benefit from quantum computing sooner. [For more, see HBR, Quantum Computing Is Coming. What Can It Do?]
ESG will become something else. A combination of work transformation and conservative pressure on ESG will reduce reliance on sustainability messages and the need for socially conscious workforce recruiting. That will not curtail the need for some common way to acknowledge and report environmental impacts, even if only internally. ESG will likely devolve as it morphs into something narrower for each of its components.
The need for improved risk management. AI, political uncertainty, quantum computing’s threats to cybersecurity, and other factors combine to create new risks and opportunities for businesses and other organizations. Tools like scenario planning can help organizations imagine different futures, even if those futures unfold more rapidly than they might have in the past. Even if people can’t anticipate all of the details of how AI and the other factors will play out, they can imagine futures in which different combinations of factors uniquely align.
The future of AI cannot be modeled in a vacuum. Its success or constriction may be based on social or political drivers rather than technological limitations. The same is true of environmental impacts, robotics, and quantum computing.
Some level of activity and investment will likely include all of the technologies and concepts for which uncertainty exists. The question will be: “What are the levels, the goals, and the outcomes of those investments?” Using scenarios to model that question can help organizations better anticipate changes, even if most of the speculation becomes less useful as the future resolves. Practicing a future that proves adjacent to the future that ultimately unfolds will help organizations navigate it more effectively, benefit from the opportunities it presents, and prepare for its risks.
Since we can’t know which future will unfold, it is better to prepare for a range of futures than to bet on a single one. Preparing for the wrong future is worse than not preparing at all: in the latter case all options remain viable, while preparing for the wrong future places constraints on degrees of freedom that will likely result in an inability to adapt.
Wild cards
Robots and physical automation get small. AI will likely start to demonstrate a proclivity for physical design in support of information goals. While today’s robots approximate human stature and mimic human skills, AI may be prone to go small, creating robots that crawl into small spaces to gather data that human-scale robots cannot reach and stationary sensors may miss. Assembly at levels other than components, such as the materials themselves, may make what is currently called 3D printing look primitive, with tiny robots creating structures through collaborative accretion.
AI-based porn will drive credibility. AI is already playing a role in pornography through AI-generated images, which will probably evolve into unconstrained subscription systems dedicated to generating images and videos unique to each user’s tastes and proclivities (as mainstream systems seek to eliminate this use). How that market plays out may shape perceptions not just of the moral implications of AI but also of what it is capable of doing. While AI-based porn may reduce the actual objectification of humans in the adult entertainment industry, it may also disrupt the livelihoods of those who make a living from it.
Online betting will see AI as a threat. Another area that may find synergy is gambling, where crafty AI experts use algorithms to make better bets, putting online sportsbooks at a disadvantage and perhaps even challenging their chance-based business model by making the correct selection of near-term future events less chancy.
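As a toy illustration of the edge such algorithms chase, the sketch below compares a model’s estimated probability against the probability implied by a bookmaker’s odds and computes the expected value of a bet. The numbers, odds, and function names are invented for illustration and are not drawn from any real sportsbook or betting system.

```python
# Toy expected-value check a model-driven bettor might run.
# Odds, probabilities, and stakes here are invented for illustration only.

def implied_probability(decimal_odds: float) -> float:
    """Probability implied by the bookmaker's decimal odds (ignoring the vig)."""
    return 1.0 / decimal_odds


def expected_value(model_prob: float, decimal_odds: float, stake: float = 1.0) -> float:
    """Expected profit per bet if the model's probability estimate is correct."""
    win = model_prob * (decimal_odds - 1.0) * stake
    loss = (1.0 - model_prob) * stake
    return win - loss


if __name__ == "__main__":
    odds = 2.40          # bookmaker pays 2.40 per unit staked (implied ~41.7%)
    model_prob = 0.48    # model estimates the outcome occurs 48% of the time
    ev = expected_value(model_prob, odds)
    print(f"implied={implied_probability(odds):.3f} model={model_prob:.2f} EV={ev:+.3f}")
    # A consistently positive EV (here +0.152 per unit) is the kind of edge
    # that threatens a chance-based business model.
```

A bettor whose model is only a few percentage points better calibrated than the market can turn that gap into a persistent edge, which is precisely why sportsbooks would see capable AI as a threat.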