
Generative AI and the Future of Technical Writing

In the 1980s, when generative AI was more a matter of speculation than of widespread implementation, philosopher John Searle proposed a thought experiment that he called the Chinese Room. He put forward a hypothetical scenario in which a person confined to a room receives paper messages through a slot in the wall and is tasked with returning intelligible responses through another slot. The messages are written in Chinese characters – a language the person does not understand – and the person uses an instruction manual that tells them which symbols to return based on the symbols they just received.

 

Here is the point Searle was demonstrating: it is possible for a person to communicate effectively in Chinese without any direct experience or inherent understanding of the Chinese language. The parallel to how artificial intelligence essentially works is clear: in response to input, the machine returns meaningful responses while lacking any inherent understanding of the language it uses.
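
To make the analogy concrete, here is a toy sketch in Python (everything in it is hypothetical, and it bears no resemblance to a real AI system): the ‘room’ is just a lookup table that returns a fluent reply to any message it recognizes, with no comprehension anywhere in the process.

    # A toy model of Searle's Chinese Room: the "rulebook" is a lookup table
    # mapping incoming symbols to outgoing symbols. The room replies fluently
    # without any understanding of what the symbols mean.
    RULEBOOK = {
        "你好": "你好！",                # "Hello" -> "Hello!"
        "你会说中文吗？": "会，说一点。",  # "Do you speak Chinese?" -> "Yes, a little."
    }

    def chinese_room(message: str) -> str:
        # Pure symbol manipulation: look up the input, return the prescribed output.
        return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好"))  # prints "你好！" -- a sensible reply, zero comprehension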

 


Evolving Generative AI

 

Fast-forward to today, and AI is still fundamentally the same, except for one big difference: its machine learning capabilities. AI now evolves in its enacted intelligence through cumulative learning. Large language models (LLMs) are trained on vast amounts of data: words are encoded as numerical vectors, those vectors are processed through layers of weight matrices, and relationships between words are discerned via mathematical models. After extensive training, a model can make useful predictions based on the statistical likelihood of relationships between the words contained in its training data, and these predictions are used to generate natural language responses.
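
As a loose illustration of prediction from statistical likelihood – a deliberately tiny sketch, since real LLMs use neural networks with billions of parameters rather than simple word counts – here is what next-word prediction from co-occurrence statistics might look like in Python:

    from collections import Counter, defaultdict

    # Hypothetical training text; real models train on vast corpora.
    corpus = "the user opens the app and the user clicks the button".split()

    # Count which word follows which (bigram statistics).
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict_next(word: str) -> str:
        # Return the statistically most likely next word seen in training.
        counts = bigrams[word]
        return counts.most_common(1)[0][0] if counts else "<unknown>"

    print(predict_next("the"))  # prints "user" -- the most frequent continuation

A real model generalizes far beyond what it has literally seen, but the underlying principle – predicting what most plausibly comes next – is the same.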

 

Models in training mimic the way humans navigate the world. For instance, when we read a sentence on a page, we form and refine an inference about how the sentence will conclude as we progress through it. This inferential ‘leaping ahead’ to bind ourselves to a standard outside of ourselves is a defining aspect of being human; we’re always contending with the unknown, uncovering new phenomena, and unveiling their interrelatedness to other known phenomena. We adjust our conceptual frameworks according to what happens to manifest before us, and if what manifests undermines our current conceptual frameworks, we attempt a new, more encompassing framework (known as a paradigm shift in philosophy of science). This is an ongoing, essential aspect of embodied being-in-the-world, and though our understanding of the world is never complete, we’re always evolving in our understanding through these inferential leaps of faith – whether we’re discerning objects in space, the meaning of language, the nature of a relationship with a friend, or a complex natural phenomenon. Everything comes down to this basic inference, and inferences are predicated on leaps of faith into the unknown.

 

So this is what it is to be a human in the world, and AI is a derivative of our way of being. But despite being a derivative, AI can be much more powerful when it comes to statistical analysis: in many cases, naïve human assessment of statistical likelihoods pales in comparison to AI’s ever-increasing potential for making highly informed and highly accurate predictions about complex bodies of information – rapidly and at scale.

 

How Will Generative AI Affect Technical Writing?  

 

So, what does this mean for those human beings whose jobs currently involve writing natural language for some business purpose (like me, Banyan’s technical writer)?

 

Technical writing is uniquely dedicated to conveying information about technology in the most accessible way. As such, there’s great potential for generative AI to take over the lion’s share of documentation writing. 

 

Would this then mean that, sooner or later, the technical writer will be displaced?

 

I’ll make my own inferential leap here, and say: not necessarily; however, it may lead to a significant change in the technical writer’s core set of tasks. While AI should help streamline user searches and automate text summaries about new technology features, it is only as good as its input and its training. In other words, the robustness of the AI is only a reflection of the team of humans at either end of it all – the ones who monitor the quality of input and output (for factual/technical and grammatical accuracy, as well as the relevancy of the data). So the final limiting factor still comes down to human judgment. 

 

Plus, training models is an arduous process. So, at least for a while, there’ll be a bit of a gap that technical writers are likely to have a role in filling, to ensure that ChatGPT responses aren’t just highly general, abstract strings of text that lack nuance and depth of meaning.  

 

I’d even venture to say that generative AI could make documentation – both the writing and the reading of it – better for the technical writer and the end user alike: as it stands, technical writers don’t have a consistent pulse on how their documentation is experienced by end users, and end users only seek out documentation when they absolutely need to. I think it’s safe to say that no end users are savoring their experience of reading documentation; it’s sort of like a hospital visit for a broken piece of technology: it can go better or worse, but the experience is never one visitors want to dwell on.

 

And so, overall, there’s very little feedback from end users to technical writers about the utility of documentation. Customer surveys are seen as a nuisance that takes up even more of the end user’s time, and surveys are limited when it comes to capturing meaningful feedback. This means that, as things currently stand, both the technical writer and the end user are stuck in a state of mediocrity.

 

ChatGPT could solve this problem. Think of an organization’s documentation as something like a mythical sea creature lurking beneath murky waters, slowly morphing over time without anyone there to witness it in its entirety: some parts are old and potentially outdated, some parts change weekly, and other parts are so deeply nested no one knows they exist. ChatGPT could act as a light in the dark, illuminating the fragments of documentation that are exactly what the end user wants and needs, at the precise moment they need them. No more having to endure wonky docsite search bars.

 

Because ChatGPT-style search works so much better than traditional search, it would drive higher customer engagement, and better analytics would follow. Technical writers would then have better insights into what their end users are actually seeking. If we see that end users are frequently searching for information about one of Banyan’s new functionalities, then we have some sense of how important the feature is to our customers, how challenging it is to understand or use, and what pieces of information may be missing or hard to reach. We could then surface more relevant information in the docs, add new documentation, or clarify concepts within existing documentation. Tech writers could also have visibility into which questions ChatGPT isn’t answering well, and could then adjust the documentation so that the model picks up on the right pieces of input.
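
As a rough sketch of what that feedback loop could look like – the log format and feature names below are invented purely for illustration – mining chatbot queries for the features users ask about most could be as simple as counting mentions:

    from collections import Counter

    # Hypothetical log of questions end users asked the docs chatbot.
    query_log = [
        "how do I configure device trust?",
        "device trust setup steps",
        "reset an admin password",
        "device trust requirements for macOS",
    ]

    # Hypothetical feature names to look for in each question.
    features = ["device trust", "admin password", "service tunnel"]

    mentions = Counter(
        feature
        for query in query_log
        for feature in features
        if feature in query.lower()
    )

    # The most-asked-about features point to docs worth expanding or clarifying.
    print(mentions.most_common())  # [('device trust', 3), ('admin password', 1)]

The same counts, tracked over time, would show which features are trending and where the documentation is falling short.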

 

In short, I could see the technical writer moving closer to the UI side of the house, spending more time on content strategy, information architecture, and the usability of documentation. I could also see a future where the tech writer’s workflow is more intimately tied to that of data scientists, AI researchers, and machine learning engineers. The time once spent writing could instead go toward getting a better idea of technical user requirements and of the key aspects of new features worth showcasing in the documentation.

 

What Computers Still Can’t Do: The Inextricability of Embodied Intelligence 

 

The question of whether there is anything embodied cognition can do that AI can’t is an old one. And, given recent AI’s utterly impressive demonstrations of enacted intelligence, maybe the answer to this question is blurrier than ever before. 

 

But, despite the fact that AI may be better equipped than humans to make statistically sound inferences, I happen to think we’ll always fundamentally rely on embodied “intelligence” – or human beings. One (seemingly) essential difference between humans and AI is that humans are defined by experiential freedom; that is, as embodied agents, we’re always, by default, leaping ahead to make inferences about the world that we’re directly experiencing and directly a part of (some have described human beings as “ek-static” – or those beings that are essentially outside of themselves). AI, on the other hand, is bounded: it needs to be trained to make inferences, and it relies on humans to provide it with input – all input that we’ve first crafted, spontaneously discovered, or interpreted ourselves. AI in its current form does not have this quality of experiential freedom, this open-endedness, that human beings are defined by; it simply runs regressions that we set into motion. 

 

An AI that is not reliant on human input and training – an autonomous AI – is an idea that’s been considered, and perhaps feared, for a long time, judging by the number of popular sci-fi horrors on the subject. It’s hard to conceive what such a thing would look like, if it were possible: what function(s) would it serve? How would it evolve over time? What would become of human beings? But all of this conjecture only points to the importance of ethics around existing AI: it’s something we need to consider as we input data and train models, which could all too easily reinforce inferences that lead to harmful bias or the spread of misinformation. So, once again, human intervention is required, and this is why there are currently efforts to make AI training data publicly accessible: more human eyes on how models are trained and what data is used means more transparency, which helps mitigate bias and improve the reliability of the data.

 

So, while I’m all for AI that can ground its inferences in sophisticated mathematical models and produce outstandingly accurate results – which has simultaneous potential for great good and great evil, depending on how it all plays out – I’m still reveling in the freedom of sentient being-in-the-world, witnessing the unveiling of the unknown, and gazing upon the great mystery of being. At Banyan, we’ve been training our own LLM to better address customer queries; but even if AI takes some of our jobs in the future, it can’t take that great gift away from me.

 

See how Banyan provides ChatGPT Security, as well as security for other LLMs, and schedule a custom demo today.


*** This is a Security Bloggers Network syndicated blog from Banyan Security authored by Clara Christopher. Read the original post at: https://www.banyansecurity.io/blog/future-technical-writing-generative-ai/