How does AI impact content creation?

Generative AI has revolutionised content creation.

At Magenta, we’ve been exploring this shift from every angle. We’ve interviewed SEO strategists, comms consultants and AI practitioners in our Search Forward Q&A series. We collaborated with the University of Sussex on a nationwide study into how the UK’s PR and communications professionals are using AI in their workflows. And our Responsible Communications Charter guides how we use these tools with purpose and integrity.

Here’s what we’ve learned about how AI is reshaping content creation, and where that leaves brands, marketers, and communicators today.

AI enhances efficiency – but demands human oversight

AI tools such as ChatGPT, Claude and Gemini are transforming the content process. They’re speeding up research, simplifying tasks like transcription and summarisation, and helping teams repurpose assets faster across platforms.

In our CheatGPT research with the University of Sussex, 68% of communications professionals said AI makes them more efficient. Many told us they now rely on AI to create early drafts, sense-check structure and tone, and generate campaign concepts faster, leaving more time for strategic thinking.


As Mary Kemp, founder at AI Potential, shared in her Search Forward interview:

“We use LLMs for ideation, competitor and sentiment analysis, summarising market trends, and reworking content for different audiences. But always with a human in the loop. The biggest shift has been using AI to accelerate first drafts, so we can get to the strategic thinking quicker.”

But with these productivity gains comes a word of caution: over-reliance on automation can dilute brand voice, blur nuance and contribute to generic, boring “AI slop”. If content is to engage target audiences, it must always be reviewed and refined by humans who understand context, emotion and tone, and who can inject some personality into it.

Search is shifting from keywords to answerability

AI isn’t just changing how content is written – it’s changing how it gets discovered. As large language models (LLMs) begin to summarise web content for users, the goalposts are moving.

Where once we optimised for keywords, we now need to focus on answerability. That means producing content that is authoritative, quotable, and structured in a way that AI can easily interpret and reference.

In practice, that means clear headlines, named authors, rich FAQs, embedded video transcripts and digestible expert insights – all of which can increase the chance of being cited in AI-driven answers.
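To make an FAQ machine-readable, many sites mark it up with schema.org structured data. As an illustrative sketch (the question and answer text here are placeholders, not a guarantee of AI citation), an FAQ page might embed a block like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How does AI impact content creation?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "AI speeds up research and drafting, but content still needs human review for accuracy, tone and brand voice."
    }
  }]
}
</script>
```

Structured data like this doesn’t change what readers see on the page; it simply gives crawlers and language models an unambiguous question-and-answer pairing to parse and, potentially, to cite.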


But there’s another factor to consider. Numerous studies have found that AI answers result in fewer clicks. As Kevin Indig shared in his Q&A:

“Even if your brand is mentioned in an AI summary, that’s usually a worse outcome than ranking on a traditional SERP. It doesn’t drive clicks in the same way, and there’s no good workaround. Telling leadership you’re now optimising for impressions instead of traffic is not an easy conversation.”

Businesses must adapt their marketing strategies to reduce their reliance on search traffic.

Authenticity is a growing differentiator

In a world of AI-generated everything, human voice and storytelling matter more than ever. Originality, clarity and emotional connection are becoming key differentiators.

Our CheatGPT study reinforced this, with respondents noting that AI-generated copy can often sound “flat” or “generic.” Brands that stand out will be those that lead with insight, personality, and purpose, and make it clear there’s a human behind the message.

As AI-generated content continues to plague inboxes, trust is shifting from brands to individuals, with people seeking out authenticity, personality and, well, things written by people, not LLMs. That makes visible thought leadership, named authorship and values-led communication even more important.

Content strategy must adapt to AI discovery models

AI is trained on the web’s existing content, learning to surface answers based on clarity, usefulness and attribution. This opens up new opportunities for niche authority, and risks for brands that lag behind.


Tom Swinbourne, digital marketing manager at Reconomy, summed it up neatly:

“Looking five years ahead, the team believes AI will shape most customer discovery and engagement, especially in B2B. The goal is to be part of those journeys by contributing circular economy expertise and thought leadership in the places where LLMs are sourcing and surfacing information.”

We’re also seeing a shift in how people search: not just typing in keywords, but using natural-language prompts. Brands need to consider whether their content is prompt-relevant. Does it answer a specific question clearly? Is it written in a way that could be cited in a zero-click environment?

Risks include hallucinations and dilution

While AI offers huge potential, it also brings significant risks, especially if used carelessly or without governance.

From so-called “hallucinations” (a term that essentially means making things up) to brand tone erosion, the consequences of unchecked AI use are real. In our CheatGPT study, many respondents flagged concerns around accuracy and quality.

I’ve experienced this myself when writing this very blog! I shared the Search Forward Q&As with ChatGPT and asked it to pull relevant verbatim quotes to insert into my content. It consistently and confidently gave me quotes to use, which were completely made up. When I challenged it, it apologised and provided me with a new verbatim quote – which was again made up. We went round in circles a few times (I was curious to see if I could find a way for it to overcome this issue) before I gave up and chose suitable quotes myself.

Internal upskilling is crucial to address these risks, yet many comms teams still lack formal AI policies or training. Sensitive data handling, IP ownership, and alignment with GDPR remain grey areas. Without governance, the risks to brand reputation increase.

What now? Practical steps for brands and content teams

To safeguard content quality, we recommend:

  1. Audit your existing content: Is it structured, attributable and clear enough to be surfaced in AI-driven search?
  2. Review internal AI use: Develop a team-wide policy on AI usage, data sensitivity and review processes.
  3. Double down on human-led storytelling: Prioritise content that connects, inspires, and adds real value.
  4. Build visible expertise: Showcase thought leaders, named authors and experts your audience (and AI) can trust.
  5. Stay alert to developments: Keep testing, learning and iterating – AI is evolving rapidly, and so must your strategy.

Magenta’s approach: responsible, curious, and human-first

The Magenta team consider AI to be a powerful tool, but not a shortcut. It helps us move faster and think bigger, but we never sacrifice quality or integrity for convenience.

We’ve committed to responsible communications, built policies for how we use AI tools internally, and are helping our clients prepare for the future of content. Our approach is collaborative, human-led and grounded in insight, from our Search Forward expert interviews to original research with the University of Sussex.

If you’re reassessing how your brand uses AI in content, we’re here to help. Get in touch with me at greg@magentaassociates.co if you’d like to discuss how we can support your content and communications strategy.

Feature image credit: Kathryn Conrad / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Greg Bortkiewicz