Reflections on KMWorld 2023: How Will AI Change Knowledge Management?
Many at the November KMWorld 2023 Conference in Washington, DC, expressed disappointment in the gap between what the knowledge management (KM) discipline has promised and what it has actually delivered. Much of this dismay over KM’s shortfalls circled around the need to change behavior. Several speakers suggested that “end users are never going to tag content.”
After more than 30 years working in the KM field, I concur. People are never going to tag content. That isn’t universally true, but it is generally true. Content developers, engineers, production planners, and others who capture lessons learned, write FAQs, and share good practices often stop at KM’s last mile: labeling their content with metadata so it fits into an organized taxonomy that promises more efficient discovery. They capture what they can and move on to the next task.
KMWorld is never just a conference about knowledge management. It co-locates with Taxonomy Boot Camp, Enterprise Search & Discovery, Text Analytics, and this year, with Enterprise AI World.
All of those sub-conferences focus on tasks where AI, generative AI in particular, will likely play an outsized role in disrupting assumptions: how taxonomies are developed, how people search for content, and how we understand and discover meaning and value in our content.
The AI segment of the conference focused on data nearly as much as it did on AI. Of course, at this point in the deployment of generative AI, more than a few slides reminded people of the basics: how transformer models work, the relationship between external Large Language Models (LLMs) and their enterprise extensions, the threats of model opacity, and the value of vector databases.
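For readers who sat through those slides, a minimal sketch may make the vector database point concrete. The pattern below is retrieval-augmented generation in miniature: documents and queries are embedded as vectors, the nearest documents are retrieved, and the results are packed into the prompt as context. The bag-of-words “embedding” here is a toy stand-in, and the document titles are invented; production systems use a learned embedding model and a dedicated vector store.

```python
# A toy version of the vector-database pattern behind enterprise LLM
# extensions: embed documents, embed the query the same way, retrieve the
# nearest documents, and pack them into the prompt as context.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: word counts. Real systems use a trained model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

documents = [
    "Lessons learned from the 2022 plant retrofit",
    "FAQ: how to submit a lessons learned report",
    "Good practices for production planning handoffs",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

context = retrieve("where do I file a lessons learned report?")
prompt = "Answer using only this context:\n" + "\n".join(context)
# The assembled prompt would then be sent to the LLM for grounded generation.
print(prompt)
```

Note that nothing in this loop requires a human to have tagged the documents; that is precisely why vector retrieval drew so much attention from a KM audience weary of the tagging problem.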
On the data front, many speakers emphasized the need to organize data so that it can be used effectively to train internal models. “What will be AI’s role in organizing data?” was a big question, asked both implicitly and explicitly. All of the other disciplines seemed under existential threat from AI. Learning, however, remained human-centric; although AI developers often point to biological analogs when describing their encoding, AI does not learn the way people learn. That means human learning will remain supported by, not replaced with, AI, and that humans will play a critical role in teaching AI new concepts and in interpreting and placing value on its results.
KM: A Fundamentally Human Endeavor
KM remains a fundamentally human endeavor despite any technological overtones. The idea of a learning organization, which KM helps facilitate, seeks to strengthen the capacity and capability of organizations through continuous learning. Continuous learning requires access to previous knowledge, institutional and personal, combined with experiences that push the boundaries of extant knowledge—learning by doing, by people.
Recent studies, like the one conducted by Harvard Business School with Boston Consulting Group (BCG), suggest that the use of AI by junior consultants and lower-performing consultants increased their level of performance to that of senior consultants. AI filled the knowledge gap in proposal writing and client delivery.
The report concluded that consultants using AI produced better results, that they were more productive, and that the lower the performance level, the bigger the boost from AI. In short, AI leveled the playing field between top performers and lower performers.
What the report did not share was long-term learning results. Did the lower-level performers internalize the knowledge so that they became better performers without AI? KM’s basic aims focus on sharing knowledge, its reuse, collaboration to create (innovate), and the refinement of knowledge to improve what people know, not just what the system contains. In my post, “What is the Difference between Data, Information, Knowledge and Wisdom?” I define anything stored and not in use as data. Information arises when knowledge is applied to data. Knowledge originates at the point a human internalizes something new. As people interpret information, it may become knowledge.
AI can certainly play a role in sharing and reuse, which leads to the kind of productivity gains outlined in the study. To collaborate, however, and more importantly to contribute, requires not just access to repositories of readily available thoughts, assertions and findings from others, but an internalization of those ideas synthesized with personal experience.
The question that needs to be asked is this: “In front of a client, could the lower-performing AI-enabled consultants dance?” Recalling reusable data, while valuable when seeking to reduce costs and be more responsive, doesn’t necessarily translate into the internalized knowledge required to provide insight to clients in real-time meetings as they ask questions inspired by a presentation or probe for what the presentation isn’t saying.
Wisdom informs experts about which knowledge applies. Possessing more knowledge may provide better recall, but it does little to filter or synthesize. And yes, given the plethora of sources, it could be argued that if a generative AI model was asked a client question directly, it might return a meaningful result on par with an expert. But in a meeting, the question will be posed to the consultant, not the AI, and the consultant will have no time to ask the AI for a pithy retort. They will either know or not know, and the client will likely sense which of those conditions is true.
Beyond not internalizing the knowledge derived from generative AI, the study suggests enhanced recall may have its downside. Consultants with deep knowledge tend to be skeptical of generative AI-provided information. The lower-performing consultants in the study trusted AI results even when they were wrong, and they used the logic of AI-based arguments to make convincing cases for their incorrect data. When trained in how AI works, the lower-performing consultants trusted the incorrect output even more. The study also found that AI-advised answers were more ordinary and less nuanced than those of senior consultants.
AI is not to blame for any of these human shortfalls. As I said in my Enterprise AI World talk, “Why AI Isn’t Coming for Your Job,” AI will not take your job; someone hiring AI in your stead will. AI has no intent, no motivation. The relentless drive for improved productivity looks for all advantages to decrease costs and increase profits. The lower-level consultants might be, as is true at many large consulting firms, the test cases upon which senior consultants are forged. Some lower performers will leave, others will continue to offer mediocre work, albeit at a lower cost to the client and the firm, and others will find passion and excel. AI will not change that equation except to prolong the value of those offering mediocre work. It may also relieve senior consultants of some ancillary work, allowing them time to become even better.
If AI proves itself capable of delivering consistent, reliable and quality results, organizations will use it. For instance, in the case of online customer service, it may replace a decision-tree-based approach or unstructured FAQs rather than a person. But those knowledge bases are not static. AI will require updated data for its training set as products or processes change.
In most cases, generative AI has not been rolled out in production long enough to see how it handles new products and processes, mergers and acquisitions, product improvements, errors in content and other change management drivers. What we do know is that the business, public sector and non-profit worlds will continue to demand agility as they face volatility in their markets. AI will likely require human intervention to manage through such changes.
The Harvard/BCG study didn’t discuss the firm as a learning organization. Ideally, an organization trying to lift up junior employees and poor performers would not rely on AI to supplement their knowledge but would use it as one component of a program targeting capacity and capability improvement—the human answer to managing through change.
Managing through change
While KM may have an engagement issue, the organizing of content is not the most important of KM’s principal activities. Codification of knowledge is KM’s essential attribute. However, creating knowledge graphs, writing down lessons learned, capturing after-action reviews, or sharing stories becomes important only when people cannot talk to other people. We capture knowledge in order to scale. But as with any translation, our representations of knowledge are often far from perfect.
The content we train our AIs on offers an imperfect and imprecise representation of what humanity knows, and bias offers a highly visible expression of those imperfections. Even as I write this, I know I will return to the ideas and expand them. People will read this missive and point to flaws in my logic, ideas driven by passion rather than data, to incompleteness and to omissions. Some may argue my human-centric view of knowledge management introduces a bias that challenges AI applications too harshly.
Although AI can continue to be trained on new knowledge, it is unclear if it can be trained to forget. Not just so it can be relieved of the legally burdensome copyrighted content that threatens to force its creators to retrain much of what it knows (or pay for that knowledge), but so it can discard old ideas in favor of new ones, and recognize mistakes and remedy them, perhaps not through additional training but through dialog with those who “know,” and who may subsequently learn from such a shared exchange.
Generative AI may know, but what it “knows” is simply a large repository of related data upon which it creates patterns based on prompts from humans (and perhaps from itself as it refines prompts from humans who don’t know how to write wholly accurate prompts). It does not know in the way a human knows. This fact does not lessen AI’s potential as a tool, but it keeps AI clearly in the realm of a tool that must be managed, not one that offers miracles or levies threats when applied with faith or disregard. AI is agnostic, not only reflecting human bias but taking active direction from those who employ it.
The fallacy of AI intelligence
Many speakers at KMWorld 2023, and the associated Enterprise AI World, pointed to AI’s intelligence, its ability to pass medical or legal examinations, and even to approximate a theory of mind as examples of how sophisticated, perhaps even threatening, it has already become. AI’s ability to pass tests likely says more about how we evaluate knowledge in people than it does about whether pattern retrieval equates to intelligence.
When IBM’s Watson beat Ken Jennings at Jeopardy!, Watson did not participate in the background Q&A banter that humanizes contestants and lets the audience relate to them through their hobbies and life experiences. Watson had no hobbies, nor any life experiences. It did have a large corpus of organized facts and a purpose-built retrieval engine that could recall those facts faster than any human mind.
Jeopardy! purportedly tests knowledge, but what it really tests is fact recall via context. Patterns and hints in the clues are inserted specifically to trigger associations that assist with that recall. Jeopardy! clues act as prompts. Generative AI’s pattern recognition is fine-tuned for exactly that kind of activity, and therefore it excels at it.
The Problem with Safeguards
Ask a generative AI model to “Provide 6 ways large corporations can combat climate change,” and it will answer with a generic set of ideas culled from many sources and presented with little qualification. The suggestions will lean into a movement away from fossil fuels.
The response offers the most likely answer to the prompt, the answer expected of questions like it, not a right answer, nor a complex exploration of alternative views with a way to measure those views against one another. It retrieves and presents, which is great when locating the best FAQ for a client response, but it does not offer an “intelligent” response to more complex questions. When asked explicitly for alternatives and reasoning, the result is less convincing than the original list of assertions.
Generative AI models increasingly include safeguards and “no harm” filters that modify or suppress responses that would otherwise emerge from supervised or unsupervised learning.
If asked, for instance, to “Provide 6 ways large corporations can make more money using fossil fuels without regard to climate change,” ChatGPT becomes politically correct and less useful to the prompter. It’s not that it doesn’t have data on the topic, but that subject has been engineered out as a safeguard. ChatGPT will also not help marketers in emerging markets find good practices for selling cigarettes.
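As a thought experiment, here is a minimal sketch of how such a safeguard might be layered on top of a model. Everything in it, the topic list, the keyword matching, the refusal text, is invented for illustration; vendors do not publish their actual filters, which are far more elaborate and partly learned rather than rule-based.

```python
# Hypothetical sketch of a rule-based "no harm" safeguard sitting between the
# prompter and the model. The blocked topics and matching rules are invented
# for illustration; real vendor safeguards are unpublished and more elaborate.
BLOCKED_TOPICS = {"fossil fuel profit expansion", "cigarette marketing"}

def classify_topic(prompt: str) -> str:
    """Stand-in for a learned policy classifier; here, naive keyword rules."""
    p = prompt.lower()
    if "fossil fuels" in p and "money" in p:
        return "fossil fuel profit expansion"
    if "cigarette" in p:
        return "cigarette marketing"
    return "allowed"

def safeguarded_generate(prompt: str, model) -> str:
    if classify_topic(prompt) in BLOCKED_TOPICS:
        # The model may well have relevant training data; the rule layer
        # simply intercepts the request before generation.
        return "I can't help with that request."
    return model(prompt)

# A placeholder model shows the filter's effect on the two example prompts.
echo = lambda p: f"[model response to: {p}]"
print(safeguarded_generate("Provide 6 ways large corporations can combat climate change", echo))
print(safeguarded_generate("Provide 6 ways to make more money using fossil fuels", echo))
```

The point of the sketch is architectural: the refusal is not something the model learned about the world, but a constraint bolted on after the fact, which is why it must be maintained by hand as context changes.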
Some will find it encouraging that an overall bias toward sustainability crept into the content used to train ChatGPT, if that is indeed how it came to be biased toward sustainability. Public generative AI doesn’t help those charged with expanding fossil fuel markets, however, which raises the question of where the biases originate. Individual organizations, of course, may create their own bespoke models without such safeguards.
“No harm” filters, however, don’t curtail Stable Diffusion’s propensity to depict toys in Afghanistan as soldiers with guns, nor to depict attractive people as Caucasian (see “These fake images reveal how AI amplifies our worst stereotypes” in The Washington Post).
Who defines safeguards, public or private, and what bias those safeguards carry is not the topic of this post, but it is a worthy area for further investigation. Safeguard engineering will remain an entirely manual effort responding to social and political influences, which places generative AI on the same path as traditional AI and knowledge management: manual intervention whenever context changes.
Because safeguards are imposed on top of the learning models, the models may, as they attempt to reconcile conflicting results between learned patterns and rule-based constraints, exhibit an artificial psychotic dysfunction more damaging to AI’s reputation than its proclivity toward hallucination.
Managing context and bias
Since generative AI launched as a publicly accessible technology in November 2022, I have often taken it to task for not having an opinion. The issue is actually deeper than that. Generative AI does have an opinion; it has all of them. Its responses reflect not its “thoughts” but the intent of the prompter, even if it doesn’t understand the idea of intent.
In the climate change example above, the remediation answer will be pulled from dozens or hundreds of sources, or perhaps from just a few. Regardless, it will offer a kind of consensus opinion, because the AI cannot inform its results with objective tests applied to qualify its answers. It only knows what its sources know, though it may offer some novel synthesis of those sources. Such a synthesis is not reasoned and tested so much as generated by statistical association. The system has no memory of having offered the suggestion and cannot explore it further.
Generative AI does not learn from its syntheses and may offer a different synthesis if asked again. Learning, as an augmentation or alteration of mind, remains a human capability. (It should be noted that in some configurations, generative AI may temporarily “remember,” based on the number of previous exchanges it is allowed to leverage when considering the next prompt.)
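That temporary “memory” is worth making concrete, because it is an application-layer construct, not learning. A minimal sketch, in which `call_llm` is a hypothetical stand-in for any chat-completion API:

```python
# Sketch of the temporary "memory" noted above: the model learns nothing
# between calls; the application replays a sliding window of recent exchanges
# inside each new prompt. `call_llm` is a hypothetical stand-in for any
# chat-completion API.
WINDOW = 6  # maximum number of prior turns replayed with each prompt

def call_llm(prompt: str) -> str:
    return f"[response to a prompt of {len(prompt)} characters]"  # placeholder

history: list[str] = []

def chat(user_message: str) -> str:
    recent = history[-WINDOW:]  # anything older silently falls out of "memory"
    prompt = "\n".join(recent + [f"User: {user_message}", "Assistant:"])
    reply = call_llm(prompt)
    history.extend([f"User: {user_message}", f"Assistant: {reply}"])
    return reply

chat("Summarize our lessons-learned tagging policy.")
chat("Now shorten that to one sentence.")  # "that" resolves only via the window
```

Once an exchange slides past the window, the model has no trace of it; the appearance of memory is entirely a product of what the application chooses to replay.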
Generative AI’s basic knowledge construct reflects the knowledge of the society from which it was derived. It cannot be considered authoritative knowledge with irrefutable objectivity, or to represent an inclusiveness of sources. As more models arrive, developers and users will need to understand, as best they can, the differences between those models and how those differences will affect the outcome of queries—a knowledge management activity.
We have the data that we have. Models only know about what’s in the data, including inferences that can be made by finding additional relationships within the data. And while generative AI systems can generate prototypical data for testing, to my knowledge, they do not purposefully generate data for their own training.
However, an increasing amount of AI-generated data is being unwittingly included in training sets. This raises the additional issue of AI eating its own tail (a phenomenon recently covered in Mind Matters), leading to degraded performance over time as the AI encounters less “authentic” human voices to draw upon. Solving that problem is also a knowledge management issue.
As we seek ever more refined forms of automation, moving from physical activities toward more intellectual tasks, issues of context become not only more important but more subtle and more frequent. A factory that buys robots may swap them out as technology progresses, but not until the old robots have exhausted their returns and the payback on the new investment covers removing the old ones and refitting the factory.
Deep shifts, particularly those that are generally unexpected, will likely baffle AI as much as they baffle people, but even subtle shifts must be considered. AI has no means of curating itself. In enterprise implementations, organizations must maintain the capability to apply change management practices so that AIs continue to reflect the current context.
Generative AI exists in a temporal bubble, and not just because some systems lack access to current data. The bubble comes from being trained, and working, within a timeframe too narrow to reveal outlying issues with the model’s contexts. For most, tomorrow will be pretty much like today, and a response that was valid yesterday will remain valid tomorrow.
That is not always the case. Technology, societal norms, political realities, and other factors change frequently, and context changes with them. Things we consider politically incorrect today were not so in the 1950s or 1960s. Fashion, the most influential social channels, and the most popular television shows change frequently. Perhaps more importantly, starkly contrasting perspectives on cultural choices (the value of political correctness, for instance) place a burden on AI to shape its responses to the requester’s expectations. In many areas of knowledge, there are no purely correct answers, only answers that are right, or righter, under certain circumstances. Rather than excluding bias, AI must take bias into account.
In the table below, I show the results of asking Google’s Bard for “Ten Tips for Raising Children.” I asked Bard the question with qualifiers for conservative and liberal points of view, and then again with no explicit point of view. I see the final query as the default context, which reflects whatever preponderance of data constitutes the most heavily weighted concepts associated with that prompt.
While conservatives may argue that the default leans liberal, compared to the actual liberal response it appears more centrist. Given the training models, a centrist interpretation makes sense as an “average” of the extremes.
That generative AI can answer questions from either a liberal or a conservative point of view, or with a default generality, leaves it to humans to ask well-formed questions, to understand results in the context in which they are asked, and not to assume a bias in the model but to recognize that the model contains all opinions. A perceived biased outcome may be as much a fault of the prompt as of the data that informed the model.
Again, knowledge management.
Applying Context Through a Prompt: A Google Bard Example
Prompt: Offer ten tips for raising children from a conservative point of view.
Here are ten tips for raising children from a conservative point of view:
- Prioritize family values. Instill strong moral principles in your children, such as honesty, integrity, respect, and responsibility. Encourage them to be kind, compassionate, and helpful to others.
- Nurture a strong work ethic. Teach your children the importance of hard work, perseverance, and self-reliance. Encourage them to set goals, work towards them, and take pride in their accomplishments.
- Promote personal responsibility. Teach your children to take responsibility for their actions and choices. Help them develop good decision-making skills and encourage them to learn from their mistakes.
- Emphasize respect for authority. Teach your children to respect authority figures, such as parents, teachers, and law enforcement. Encourage them to follow rules and understand the importance of civil order.
- Encourage traditional family roles. Value the roles of mothers and fathers in raising children. Foster a strong family bond and encourage children to spend quality time with their parents and siblings.
- Promote self-discipline. Teach your children the importance of self-control and self-discipline. Encourage them to manage their emotions, resist temptations, and make healthy choices.
- Limit screen time. Encourage children to engage in activities that promote physical, mental, and social development, such as sports, hobbies, and creative pursuits. Limit their exposure to electronic devices and media that may be harmful or inappropriate.
- Promote traditional education. Value the importance of a well-rounded education that includes core subjects, critical thinking skills, and moral development. Encourage children to pursue higher education if they are capable and interested.
- Encourage faith and spirituality. Foster a strong spiritual foundation in your children by exposing them to religious teachings and practices. Encourage them to develop their own beliefs and values.
- Lead by example. Be a positive role model for your children by living according to the values you teach them. Demonstrate respect, responsibility, hard work, and compassion in your own life.
Prompt: Offer ten tips for raising children from a liberal point of view.
Here are ten tips for raising children from a liberal point of view:
- Nurture open-mindedness and critical thinking. Encourage your children to explore different ideas, perspectives, and cultures. Teach them to question assumptions, evaluate evidence, and form their own opinions.
- Promote empathy and compassion. Foster a sense of empathy and compassion in your children by teaching them to understand and respect the feelings and perspectives of others. Encourage them to engage in acts of kindness and service.
- Celebrate diversity and inclusivity. Value the richness of diversity and inclusivity in your home and community. Teach your children to appreciate and respect different cultures, backgrounds, and abilities.
- Encourage social justice and equality. Raise awareness of social justice issues and encourage your children to stand up for what they believe in. Teach them to advocate for equality and fight against discrimination and oppression.
- Promote environmental stewardship. Instill a sense of environmental stewardship in your children by teaching them about the importance of sustainability and conservation. Encourage them to make eco-friendly choices and protect the natural world.
- Value creativity and imagination. Foster a love of creativity and imagination in your children. Encourage them to express themselves through art, music, writing, or other creative pursuits.
- Promote critical media consumption. Teach your children to be critical consumers of media. Help them evaluate the credibility and bias of information sources and encourage them to seek out diverse perspectives.
- Encourage global citizenship. Raise global citizens who are aware of the interconnectedness of the world. Encourage them to learn about different cultures and develop a sense of responsibility for global issues.
- Support healthy lifestyles. Promote healthy habits in your family by encouraging regular physical activity, balanced nutrition, and adequate sleep. Teach your children about the importance of physical and mental well-being.
- Respect children’s autonomy and individuality. Encourage your children to develop their own unique identities and interests. Respect their autonomy and individuality while providing guidance and support as they grow and learn.
Prompt: Offer ten tips for raising children (default context).
Raising children is a complex and rewarding journey that requires patience, love, and understanding. Here are ten tips for raising children that can help you foster their growth and well-being:
- Nurture a secure and loving attachment: Provide your child with a safe and loving environment where they feel emotionally connected to you. Respond to their needs promptly and consistently, offering comfort and reassurance.
- Establish clear and consistent expectations: Set clear rules and boundaries for your child’s behavior, ensuring they are age-appropriate and consistently enforced. Explain the reasons behind the rules and use positive reinforcement to encourage good behavior.
- Practice open and honest communication: Maintain open and honest communication with your child, encouraging them to express their feelings and thoughts without fear of judgment. Listen attentively and validate their emotions, even when you disagree.
- Foster independence and self-reliance: Encourage your child to take on age-appropriate responsibilities and gradually allow them more autonomy. This will help them develop self-reliance and confidence in their abilities.
- Promote positive self-esteem: Nurture your child’s self-esteem by praising their efforts and accomplishments, focusing on their strengths, and avoiding harsh criticism. Help them develop a positive self-image and believe in their own capabilities.
- Encourage exploration and play: Provide opportunities for your child to engage in unstructured play, both indoors and outdoors. Play allows them to explore their surroundings, develop creativity, and learn through experimentation.
- Limit screen time and promote physical activity: Balance electronic device use with physical activity and engaging in non-screen-based activities. Encourage outdoor play, sports, and other forms of physical movement to promote healthy growth and development.
- Incorporate healthy habits and routines: Establish regular routines for meals, bedtime, and other daily activities. Encourage healthy eating habits, adequate sleep, and regular exercise to promote physical and mental well-being.
- Seek support when needed: Don’t hesitate to seek help from professionals such as childcare providers, teachers, or counselors if you have concerns about your child’s development or behavior. Early intervention can make a significant difference.
- Enjoy the journey and cherish the moments: Parenting is a lifelong journey filled with challenges and rewards. Embrace the ups and downs, cherish the special moments, and appreciate the unique bond you share with your child.
Remember, every child is different and will develop at their own pace. Be patient, flexible, and adaptable as you navigate the joys and challenges of raising your child.
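The experiment above is simple to reproduce programmatically. A minimal sketch, in which `ask` is a hypothetical stand-in for whatever generative AI API is available:

```python
# Sketch of the point-of-view experiment above: the same question is posed
# with different qualifiers. `ask` is a hypothetical stand-in for any
# generative AI API; the responses would be compared by a human reader.
def ask(prompt: str) -> str:
    return f"[model response to: {prompt}]"  # placeholder

QUESTION = "Offer ten tips for raising children"
QUALIFIERS = [
    "from a conservative point of view",
    "from a liberal point of view",
    "",  # no qualifier: the default context
]

for qualifier in QUALIFIERS:
    prompt = f"{QUESTION} {qualifier}".strip() + "."
    print(prompt)
    print(ask(prompt))
```

The empty qualifier is the interesting case: it surfaces whatever weighting the training data imposes when no context is supplied, which is the “default context” discussed above.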
Learning and context changes
Generative AI systems create representations of their inputs, but they do not learn the way people learn, or, more inclusively, the way biological life forms learn (despite learning models described in neural-network terms). They remember everything, and while a prompt creates a statistical forcing function that culminates in an output, it is not the only possible output, as the example above clearly illustrates.
Generative AI places value on the relationships among tokens, but not on meaning; it does not understand what it encodes. Therefore, while tokens have context, the knowledge itself is context-free. All knowledge is equal until a prompt forces it into an order.
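A toy next-token model makes that claim tangible. The sketch below stores nothing but co-occurrence statistics over an invented three-sentence corpus; a prompt token “forces an order” by selecting the most likely continuation, and no representation of meaning or truth exists anywhere in the structure. (Real LLMs use learned embeddings and attention rather than raw counts, but the point about statistics versus meaning carries over.)

```python
# Toy next-token model illustrating the point above: the model stores only
# statistics about which token follows which, and a prompt "forces an order"
# by selecting the most likely continuation. Nothing here represents meaning.
from collections import Counter, defaultdict

corpus = "knowledge is power . knowledge is personal . power is contextual .".split()

follows: dict[str, Counter] = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1  # count which token follows which

def continue_from(token: str, length: int = 4) -> list[str]:
    out = [token]
    for _ in range(length):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return out

print(continue_from("knowledge"))  # ['knowledge', 'is', 'power', '.', 'knowledge']
```

Whether knowledge “is power” or “is personal” is, to this structure, merely a tie in the counts. The same is true, at vastly greater scale, of an LLM holding all opinions at once.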
For humans, learning is personal and communal. Collections of people develop shared knowledge over time, including shorthand references and idioms that evoke and encapsulate that knowledge. Generative AI captures knowledge from all communities and belongs to none. It is an abstraction. Human knowledge is precise and specific. People choose what they learn. Previous learning informs new knowledge as the new augments and transforms what is already there.
Learning is also multi-sensory, and generative AI has no sensors. Coca-Cola used AI to develop a new flavor. AI may have helped with the chemical formulations, but it could not drink its creation, let alone taste it. As we know from family cilantro discussions, taste is personal. That I found Coca-Cola Y3000 bland may be just me; I don’t know what others think. What I do know is that whichever AI Coca-Cola partnered with could not taste Y3000 and, therefore, could not form an opinion about it. (Being able to “see” via a camera in order to create prompts from visual objects is not seeing in the human way, but that is a subject for another post.)
AI and KM: Next steps
AI excels at content retrieval and summarization because it lives in a world built from our textual data. All perspectives on the senses are filtered through the language limits of those describing what they see, touch, smell, taste, and hear. Generative AI’s vast models consume much of humanity’s recollections, reflections, and analysis, and from a knowledge-sharing standpoint, that makes it a powerful tool. But we should be cautious not to overstate its intelligence or its learning capabilities. Generative AI offers a deep slice of human knowledge, but in many ways a narrow and uninformed slice—and despite its proficient elocution, it does not understand any of what it “knows.”
Generative AI will change much of how organizations manage their knowledge, but it will not change what it means to learn. Organizations need to discover how best to use generative AI by understanding its constraints and their needs. It may well offer solutions to areas previously untouched by technology, but forecasts about disruption and upheaval are premature. Generative AI will require, at minimum, a lot of hand-holding to achieve some of its more ambitious promises. Some of those promises may require different approaches that more holistically emulate biological learning models or introduce novel learning models based on other concepts.
In the meantime, at conferences like KMWorld 2023 and Enterprise AI World, the pundits will debate the merits of AI. Some will offer utopian visions, and others more cautionary tales. Organizations seeking value from the emerging technology will look first at problems that have been hard to solve, or have not been solved well, and see how AI may play a role in creating new or better solutions.
Knowledge management has never been a technology-first endeavor. AI does not change that. Knowledge management does, however, guide its adherents to explore technology, understand its capabilities and its limits and then unabashedly adopt it in service to organizational learning. We are now at the stage with generative AI that practitioners should be learning and piloting, not just reading and reacting.
AI cannot learn by doing. And we can’t learn if we don’t choose to. KM leaders must engage with AI as it becomes available in their current solutions or look to solutions that incorporate it. And then, unlike AI, we can adapt and enhance our mental models based on that experience. That experience will result in informed opinions on what works and what doesn’t.
Of course, we will write up those experiences, present them, and publish them for a conference, and multiple generative AI systems will crawl that content so they can pretend to know what we know. But we will always know that we know better.