Generative AI – Q&A with Dr Tanya Kant

Generative AI tools, such as ChatGPT, have recently burst into the public consciousness. However, AI tools and chatbots have been in use for many years.

Dr Tanya Kant

Dr Tanya Kant is a senior lecturer at the University of Sussex. Her research explores algorithmic power, social media identity verification, targeted advertising and bots. She has published journal research on chatbots and human decision-making, gender targeting on Facebook and ethical social media research, and is author of Making It Personal: Algorithmic Personalization, Identity and Everyday Life (Oxford University Press, 2020).

We spoke with Dr Kant to get her insight into generative AI, the ethics and risks associated with these tools, and authentic writing.

What is the difference between chatbots and generative text AI?

Chatbots have been around for longer than people realise. The first notable chatbot was Eliza, which was developed in the 1960s. It was a reasonably basic piece of software designed to mimic human conversation. Modern-day iterations of chatbots include personal assistants and the chatbots you see online that help with customer service. Chatbots don’t use massive data sets; they are script-based and quite simple. That makes them quite different from generative text AI. Where chatbots traditionally “help” or “assist”, generative text AI is designed to create content. The tech that powers them is very different, as is how people interact with them.
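
To make the “script-based and quite simple” point concrete, here is a deliberately minimal, hypothetical sketch in Python (the keywords and replies are made up) of the kind of keyword-matching logic a simple customer-service chatbot might use. It selects canned responses from a script rather than generating text:

    # Toy sketch of a script-based chatbot: it matches keywords against a
    # fixed script and has no statistical model of language at all.
    SCRIPT = {
        "refund": "I can help with refunds. Could you share your order number?",
        "delivery": "Deliveries usually arrive within 3-5 working days.",
        "hello": "Hello! How can I help you today?",
    }

    def reply(message):
        """Return the first scripted response whose keyword appears in the message."""
        lowered = message.lower()
        for keyword, response in SCRIPT.items():
            if keyword in lowered:
                return response
        return "Sorry, I didn't catch that. Could you rephrase?"

    print(reply("Where is my delivery?"))  # prints the scripted delivery reply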

Let’s focus on generative text AI. What does ‘authentically’ human writing look like anyway?

A black keyboard at the bottom of the picture has an open book on it, with red words in labels floating on top and a letter A balanced on top of them. The perspective makes the composition form a kind of triangle from the keyboard to the capital A. The AI filter makes it look messy, with a kind of cartoon style.
Credit: Teresa Berndtsson / Better Images of AI / Letter Word Text Taxonomy / CC-BY 4.0

Software like this is built on predictive text and intrinsically cannot write creatively as a human would – it’s based on what already exists online. The content it draws from is outdated, so it’s not useful for generating new ideas. I think there may be benefits to turning predictive text on its head: we could use this type of AI to get a feel for what not to write and what is already being predicted. It’s a more strategic use that does require human input. However, one issue that stems from this is that we don’t know the implications for intellectual property (IP) and privacy. Simply put, you may not be paying for generative text AI with money, but with your ideas. These tools will use people’s questions to identify what knowledge creation they need to be working on.
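
To illustrate what “built on predictive text” means at its simplest, here is a deliberately over-simplified, hypothetical Python sketch – nothing like the scale or sophistication of a real large language model – of a word-level predictor that can only suggest continuations it has already seen in its source text:

    # Toy illustration of word-level "predictive text": the model can only
    # suggest words it has already seen follow the current word in its source.
    from collections import Counter, defaultdict

    source_text = "the cat sat on the mat and the cat slept"

    # Count which word follows which in the existing text.
    following = defaultdict(Counter)
    words = source_text.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the most common continuation seen in the source text."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    print(predict_next("the"))  # "cat" - it can only echo what already exists

Real systems predict at vastly larger scale and with far more sophisticated statistics, but the dependence on what already exists is the point Dr Kant is making.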

Does that mean companies should advise their employees on how to use generative text AI?

Companies should be looking at what they give away in order to use these free systems. OpenAI used a version of the Common Crawl dataset to develop ChatGPT. Common Crawl is a non-profit that “crawls” the web to create an archive and datasets for public use. If and when your website is crawled, your content is added to that dataset, so ChatGPT is potentially learning from your content. This raises questions of ownership and IP. There is a file called robots.txt which can be used to exclude web pages from being crawled, but whether that’s the solution is another debate, as it doesn’t address the core questions of content ownership.
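
For reference, robots.txt is a plain text file placed at the root of a website. A minimal example asking Common Crawl’s crawler (which identifies itself as CCBot) not to crawl any pages might look like the sketch below; note that compliance with robots.txt is voluntary on the crawler’s part:

    User-agent: CCBot
    Disallow: /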

What about reputational risks of using AI software?

Some experts think that Google has not pushed ahead with chatbots because they undermine reputation – bots are often associated with being inauthentic, untruthful and spreading misinformation. In fact, this probably applies to industry in general. Companies place value on paying for ideas and creativity from a human rather than a chatbot, so relying on generative text AI to produce content carries an inherent reputational risk.

How can humans mitigate the risks involved with generative text AI?

Adding references will help, and I believe that both Google and Microsoft are planning to add this as a function of their generative text chatbots. However, references themselves do not constitute trustworthy knowledge, and can easily be wrong. Some experts are worried that providing references will simply lull editors into a false sense of security that the AI is speaking the truth. Until software is developed that can genuinely verify the truth, or truthful knowledge, anyone using these tools will have to be very aware of fact checking, and may need to spend more time fact checking content than if they wrote it themselves – and referencing is the most boring part of writing! Truthful knowledge is something that humans prove, test, discuss, interpret, peer review and agree upon, so expecting AI to provide the “truth” at all times is exceptionally difficult – and arguably philosophically impossible.

Truth and authenticity raise moral questions. How can these systems be used ethically?

Plastic figures resembling humans sit at tables in front of laptops. The lack of background makes their environment look bleak.
Credit: Max Gruber / Better Images of AI / Clickworker Abyss / CC-BY 4.0

This report identifies some of the more ethical ways to use generative text AI, such as for structure and efficiency rather than content and ideas. Put simply, an ethical use is one that doesn’t deceive your customers or audience. There are other ethical implications that come with using free-to-use, but problematically monopolistic, digital tools. These tools can often perpetuate biases and stereotypes around race, gender and other social issues. Microsoft’s Tay chatbot is a prime example: within 24 hours of its release it was making racist statements. By giving free software lots of time, content and attention without knowing how the data will be used in the present or future, companies might be unwittingly feeding into the socio-economic issues they seek to challenge elsewhere.

Would you use ChatGPT to create academic writing?

No. Not just because it doesn’t work that well, but because I would spend as much time checking that what it says is true as I would writing an article myself. The latter is more creative, rewarding, trustworthy and, quite frankly, more fun.

If you’d like to read more about generative AI tools, download our free white paper: Generative AI and its impact on the communications industry.

Greg Bortkiewicz