Writing in a world with ChatGPT
Several jobs and their associated skills face existential threats from automation. Conventional wisdom, and assertions from inventors, almost always focus on automating activities that detract from the human experience by removing physical burdens or eliminating mind-numbing tedium. Automation frees humans, it is argued; it unleashes them from subsistence and allows them to create, and to do more effectively and more freely those things that remain utterly and uniquely human.
The latest round of AI, however, overreaches by impinging on very human activities such as art and writing. In the name of eliminating the burden of copywriting, commercial tools like Jasper suggest AI can write blogs and advertising copy with the same competency as a human being.
As a writer, I can affirm that writing is not a burden. I have never looked to automation to do anything other than find and replace text or suggest the proper use of a comma. If writing is a burden, hire a writer, not a robot.
AI now targets not just jobs for which people have spent their lives training but avocations that have become vocations, jobs that only weeks or months ago would have been considered firmly on the side of the utterly and uniquely human.
What is GPT?
GPT is the acronym for a neural network machine learning model called the Generative Pre-trained Transformer. Developed by OpenAI, GPT trains on existing text and transforms short text prompts into lengthy, usually relevant machine-generated text.
GPT’s deep learning model employs roughly 175 billion parameters to inform its responses, making it one of the largest neural network-based systems yet deployed.
ChatGPT is a derived implementation of GPT.
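For the technically curious, the prompt-to-text transformation can be seen in a few lines. Below is a minimal sketch using the openai Python package as it existed around ChatGPT’s launch (the pre-1.0 client); the model name, prompt, and parameter values are illustrative assumptions, not recommendations.

```python
# Minimal sketch: turning a short prompt into machine-generated text
# with the pre-1.0 openai Python client. Model name and parameters
# are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3.5-era completion model
    prompt="Write a short paragraph on the future of work.",
    max_tokens=200,            # cap the length of the generated text
    temperature=0.7,           # higher values produce more varied text
)

print(response["choices"][0]["text"])
```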
Writers may complain, but most writers I know would love nothing more than for the world to recognize their value and pay enough for their words that they could make a living. Some of us do, perhaps not purely by words alone, but certainly through words. Industry analysts and content creators make their living by transforming ideas into words, as do marketing professionals, lawyers, and management consultants, to name a few.
At the end of this post, I share a piece about the future of work written by ChatGPT in my voice. ChatGPT not only copied my writing style, but I would also argue that it inappropriately appropriated a phrase with which I was associated at Microsoft: The New World of Work.
At a surface level, the writing mimics my style, though subtle tells instantly identify it as the work of someone, or something, that does not possess my inner editor. For instance, the text uses the term “trends” multiple times, a word I actively edit out of my thinking and my writing because I don’t believe most trends are trends. I actively disparage most content that employs “trends” without proper statistical support or acknowledgment of the uncertainties likely to undermine said “trends.”
I am not yet personally threatened by ChatGPT as a competitor, but that does not mean I won’t be. And it doesn’t mean that ChatGPT hasn’t already altered the landscape for writing’s role in work and learning, most importantly, its central position as a foundation for almost every type of work or academic communication. Its threat just hasn’t been felt everywhere yet.
The impact on education
Stephen Marche recently wrote The College Essay Is Dead for The Atlantic. In the article, Marche argues that much of the writing taught in school could become meaningless, and may already be meaningless, as ChatGPT generates graduate-level essays that would rate As or Bs against the writing of most students. If AI can write as well as people, why not let it? Why not relegate writing to the trash heap of skills no longer taught in most schools, like sewing, animal husbandry, or classics?
At this point, GPT, the core technology from which ChatGPT derives, will likely find its way into many classrooms. Machine-written assignments will be graded by unwitting educators. Will the students’ futures be negatively impacted? Perhaps not, because those who master a tool will carry that mastery forward. Eventually, there will be no stigma attached to automated writing on topics that don’t matter. And to be real, most writing in secondary school or undergraduate programs is meaningless to those who are asked to write it. Generating content may become the norm in the not-too-distant future. GPT will become the calculator of language.
Instructors’ belief in writing’s central role does not mesh with the values of their learners. I have argued for years, often to the ire of colleagues, that universities must understand learners as customers, not as clay to be molded. Will that change learning? Absolutely. But it will also likely increase the perceived value of learning and the stickiness of what is being taught. We devalue writing because we decouple writing from its value.
GPT could play a proactive role in learning as the starting point for understanding the value of words to those who undervalue them. Ask the learners to use GPT to write a paper. Then spend time discussing what it doesn’t say or doesn’t say well. Explore when computer-generated text is appropriate and when it is not. I am perhaps overly optimistic. Discussions about AI’s place and its effectiveness will happen, but AI will most likely substitute more often for work than serve as a prompt for learning.
Most people do not like to write. Education makes a poor case for the value of writing to the job market. GPT makes the case that adequate writing for common communications tasks can be automated, and most learners will welcome that conclusion.
The damage to human communication
False assumptions underlie most communications failures. With GPT, we face the very real possibility that people will assume that the output of a natural language generator will prove adequate for its purpose.
At this point in its development, using GPT will signal the same transparent disengagement as reading CliffsNotes or watching the movie to prepare an essay on a book the student failed to read. GPT does not transfer knowledge; it only offers the pretense of knowing.
Human communication will be damaged when people accept AI-generated text as fact, as adequate—when they accept cursory arguments without challenge or personal exploration. Of course, politicians say things all the time that they don’t know deeply or care about, or that purposefully mislead to achieve some goal. The difference is that in those situations, human writers who do know what they are writing about hold positions to write on behalf of the politician. Those writers may vary greatly in quality and commitment, and they may misdirect as much as they offer transparency, but their roles exist to intentionally aid communication—and the act of writing creates the evidence to back up truths or reinforce myths.
GPT offers no intent. Its output arrives with no pride in authorship, no history of accomplishments in a field of study, no originality or unique synthesis, and no underlying research to substantiate its opinions. It cannot check its facts nor respond to challenges to them. As with much ill-conceived automation, GPT may lead to paths that simply make mistakes more efficient, doing the wrong thing faster.
Writing creates a canvas for thought. Editing gives writing its value by reining in unruly, unready, and impolitic thoughts. Collaboration creates challenges that make all those involved reconsider and, hopefully, refine their thinking. GPT offers neither editing nor collaboration. It presents an answer so quickly and so completely that those who do not pride themselves as writers wonder why any revision they might offer would add value. At that point, people choose to abdicate their communications to an algorithm that doesn’t know what it wrote and doesn’t care that it wrote it.
The ethics of AI artists and writers
AI does not embody ethics. It is not semantically correct to speak of the ethics of AI. Ethics is a human invention that applies to relationships between people, and between people and their inventions: economics, society, and tools such as ChatGPT and DALL·E 2. ChatGPT, for instance, cannot have qualms about its purpose. Only its inventors can. The ethics conversation needs to turn from the ethics of AI to the ethics of building and using AI.
The argument will be made that GPT does not infringe on the right of artists to create. However, it challenges the economics of creation, which threatens the livelihood of artists. The displacement of work should always be an ethical consideration for any technologist. And judgments will be made. Would GPT be considered ethically dubious if it only wrote boilerplate contracts for software license terms and conditions? If it only wrote ad copy for Craigslist? There are some writing activities for which GPT might well serve a positive role, but while the applications may seem broad, those with few ethical considerations are very narrow.
Touting ChatGPT as a generalized writing tool, and charging for implementations that directly compete with human talent, suggests a gap in ethics. Is the creation of the tool alone enough that we should question the ethics of its inventors? Is it ChatGPT’s use as a content creator for commercial purposes, the use that threatens the livelihood of those who work in words, that marks the line? Or is it the commercialization of a tool marketed as an alternative to human writing, selling a vision of democratized content creation without context, that crosses the ethical threshold?
Ethics ultimately comes down to intent. Unfortunately, intent is a complex idea, which is why unintended consequences play such a large role in human invention. Inventors often fail to imagine the moral and ethical impacts of their inventions. They work their craft blinded by a clear intention, a purpose, often a technical one, that leaves little room for abstraction. Investors often reinforce that narrow focus, keeping attention on the accomplishments that will yield a return.
Why a thing is being done does not have a single answer. The less thoughtful answers often land on monetary rewards. The more thoughtful answers, including questioning the path forward, too often remain inadequately explored if they aren’t simply ignored.
Education’s role, as Marche points out in The Atlantic, is to teach people how to create context, a role at which it increasingly fails. The speed of change, and its insatiable demand for the new and disruptive, further stretches the divide between context and action. One of the difficulties with innovation today is that organizations too easily label every nuance as innovation and strive to commoditize it, creating a round-robin of semantic discord that ends in innovation’s irrelevancy.
From uninformed invention conducted without context to the immoral application of technology, the ethical dilemmas remain very human. Even as we invent our way toward AI that increasingly mimics humanity’s surface features, we have not invented a technology that will adequately guide us in its ethical use. That activity will remain forever human; if it ceases to be, we will have given up on humanity.
The failures of ChatGPT
As with most modern AI, ChatGPT focuses on patterns: complex patterns, but still just patterns. ChatGPT does not understand what it is writing. Words are data. Books are data. All web pages, posts, and their associated PDFs are data. Unlike earlier attempts at Natural Language Processing (NLP), which started with semantics and strove for a common-sense understanding of language to drive the dialog between humans and computers, new approaches seek shorter paths to perceived competency without striving for actual competency. Therefore, GPT does not understand what it writes. Its rapid response to a prompt does not demonstrate superior intellect, just efficient pattern matching. It does not include any negotiation, follow-up questions, challenges to logic, or suggestions for deeper research that would make the writing stronger. GPT delivers a fait accompli.
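To make “efficient pattern matching” concrete, here is a deliberately crude sketch: a bigram model that writes by always choosing the most frequent next word. The corpus and output are invented for illustration; GPT’s transformer is vastly more sophisticated, but the principle of continuing patterns rather than understanding them is the same.

```python
# A toy illustration of generating text by pattern matching alone:
# always emit the statistically most common next word, with no
# understanding of meaning. The corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = ("the future of work is changing the future of work is flexible "
          "the future of work is automated").split()

# Count which word most often follows each word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

# Generate text greedily: always pick the most frequent continuation.
word, output = "the", ["the"]
for _ in range(6):
    word = following[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # e.g. "the future of work is changing the"
```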
GPT regurgitates text in a way that meets the prompt, much as modern chess programs use board patterns to determine the next move in a chess match, no longer relying on mastering chess as logic but on seeing board positions as patterns on the way to victory. I use the term regurgitates not because I want to enter the fray of those negative toward technology’s advances but as a way of differentiating my writing from GPT’s, as I suspect we will find over time that GPT will generate increasingly bland text as it learns even more. It will move toward the mean, ignoring outliers like Joyce or Pound in favor of comprehensible, non-challenging narratives that read like poor copy from an underfunded advertising campaign.
Interestingly, in the ChatGPT post below, Grammarly found several points of disagreement with ChatGPT’s writing. I did not change the original text. Perhaps that proves its competency in emulation as it mimics my errors.
Finally, ChatGPT too easily sweeps to huge conclusions, like the “rise of the global economy,” which, writing this week, I would pick apart as a concept teetering on the verge of obsolescence rather than ascendency (see Foreign Affairs, The End of Globalization?).
Prompt: Future of work written in the style of Daniel W. Rasmus
The future of work is a topic that has garnered a lot of attention in recent years, as advances in technology and shifts in the global economy have brought about significant changes in the way we work. As we move forward, it is clear that the nature of work itself is changing and that the traditional model of a 9-to-5 job is no longer the norm.
One of the key trends that is shaping the future of work is the rise of the gig economy. This is a model in which workers are engaged in short-term, often temporary, work assignments, rather than being employed by a single employer on a long-term basis. This trend is being driven by a number of factors, including the increasing availability of flexible and remote work options, as well as the growing popularity of platforms like Upwork and TaskRabbit that connect workers with employers.
Another trend that is transforming the way we work is the increasing automation of many tasks that were previously performed by humans. This is being driven by advances in artificial intelligence and machine learning, which are enabling machines to take on a wider range of tasks and responsibilities. As a result, many jobs that were once considered safe from automation are now at risk, and workers will need to adapt to these changes by acquiring new skills and expertise.
In addition to these trends, the future of work will also be shaped by a number of other factors, including the changing demographics of the workforce, the rise of the global economy, and the increasing importance of sustainability and environmental responsibility. As a result, the future of work is likely to be a dynamic and constantly evolving landscape, and workers will need to be adaptable and resilient in order to succeed in this environment.
Overall, the future of work is an exciting and challenging time, and it will require us to rethink many of the assumptions that we have held about the nature of work and the role of the individual in the economy. By embracing change and staying flexible, we can position ourselves to thrive in this new world of work.
The legal battle ahead
Technologists often develop technology because they can, with little concern for whether they should. Human progress, and the existential threats to humanity, all hinge upon that dictum. No moral code, religious or otherwise, can govern the implications of a new technology when those implications arrive only upon success, and then with such complexity that the technologist often proves ill-equipped to evaluate the meaning of their invention. As noted above, the act of invention often dulls the appetite for implication.
GPT will end up in the courts as writers claim plagiarism. Unions will assert their rights to be creative. Politicians will say things they don’t check for accuracy or sources, bringing them before the court of public opinion. Journalists may challenge being fired for finding inspiration in GPT.
AI will also face regulatory frameworks designed to prune the burgeoning edges of the possible. As those regulatory frameworks arrive, AI vendors will be asked to defend their positions. It will be interesting to see how many of them rely on their products to write the words they speak in court or at regulatory hearings.
ChatGPT has clearly read my work at Serious Insights and Microsoft. It may not have infringed any trademarks, but I certainly think it leans toward a violation of copyright when it borrows exact phrases bound to the voice that created them.
Writing with closure
GPT is a child. A precocious child, but a child, as is OpenAI. The nascent organization and its nascent products seek to demonstrate the democratization of AI with little thought about their impact on democracy, which requires deep memory and dialog, negotiation and tolerance, respect and shared understanding. Humanity requires ideas forged with words exchanged not as weapons intended to hurt or maim but as tools to disrupt complacency, foster discourse, and prompt learning. GPT embodies none of those ideals.
GPT is simply a tool that generates text from a large corpus of other text in response to a prompt entered by a human. GPT offers an amazing glimpse of the possible, but it also demonstrates the limits of the machine-learning path toward the emulation of human characteristics. Despite GPT’s vast data store, it knows nothing that it can discuss or defend. It cannot draw conclusions, make meaningful recommendations, or generate original insights. GPT is, by its nature, derivative. The biggest threat we face is not living in a world where AI writes or creates art but living in a world where humans are satisfied by those creations.