
2023: ChatGPT and asking the right question...

Malmö, January 2023

“What do you think of ChatGPT in the context of journalism? Saviour or enemy? Will AI make or break the news industry?” This is certainly the question du jour in our industry. 

We suggest it’s the wrong question. After seven years of providing automated articles to newsrooms (using a different type of AI), at United Robots, we’ve heard it all before. The fears of robots stealing jobs, of factually incorrect, untrustworthy content written in robotic language…

It turns out that – surprise, surprise – reality is never as black-and-white as fears suggest. 

And in the case of this newer, generative AI (used in, e.g., ChatGPT) – from where we stand – the scope is at once immense and limited. So, rather than focus on “saviour or enemy”, let’s take a step back and ask the question: “What can generative AI do for journalism, and what can’t it do?”

And – most importantly – what role should people play in this process?

Publishers are in the driver's seat

ChatGPT is just a tool – albeit a brand new, powerful tool with huge scope, but a tool nonetheless. It does not change the guiding principles of journalism – a fundamentally human activity. 

Of course this type of AI can be used for nefarious ends, but so could the printing press. We are in the business of journalism and we should work out how the new tools can help us do that even better – as well as identify what risks may be involved.  

In mid-January 2023, Futurism broke a story that perfectly illustrates the latter. Publisher CNET had been using AI to write short financial articles, but had not been open about it. Some aspects of this story shine a bright light on the choices publishers have, irrespective of what type of AI they use:

Transparency. We always recommend that AI-written articles carry a byline which makes it unequivocally clear that they were written by a robot, not a reporter. Transparency is critical internally as well as externally, and key for trust. In the case of the CNET story, The Verge reports that there seems to be a lack of transparency around the actual purpose of the content too. According to The Verge, the business model of CNET’s relatively new owner, Red Ventures, is about creating content designed to rank highly in search, and then monetising the traffic. Their business model is not publishing journalism for people.

Accuracy. It goes without saying that any content published on a journalistic platform needs to be correct and reliable – whether it’s a groundbreaking investigative piece by a seasoned journalist or a short text about a local football match or financial news. AI tools always need to be controlled by journalists. And if you’re going to auto-publish AI-generated texts, you cannot use generative AI tools like GPT-3 / ChatGPT, which cannot distinguish fact from fiction and tend to make “facts” up.

Trust. The issue of trust really encompasses both of the above. Trust is the currency of journalism. Any deployment of new tech tools must in no way leave room for people to question the integrity of a publication. Having said that, we’ve found that readers are generally happy to embrace robot-written content – as long as the information is valuable to them, and clearly labelled.

If a publisher asked “What does generative AI mean for our business?”, we’d like to ask back: “What do you want it to mean? The AI is not in control, you are.”

We would advise publishers to keep focusing on delivering solid, valuable journalism, and to use generative AI tools where they are helpful in this mission. Charlie Beckett, director of the JournalismAI project at LSE, expressed it perfectly in a recent podcast: these tools cannot ask critical questions or work out the next step in investigating a story, but they can support journalists in doing this work. “But I think it’s even more interesting how it puts a kind of demand on those journalists, saying ok – you’ve got to be better than the machine – you can’t just do routine, formulaic journalism anymore, because the software can do that.”

We’re only at the beginning of exploring how generative AI can support the business of journalism. Trying out ChatGPT is easy – working large language models into robust and useful processes within a publishing business will be considerably harder. It will be crucial to keep a razor-sharp focus on the use you’re trying to extract from the tech, and not get sidetracked by its inherent capabilities.

At United Robots, we’re testing a number of possible uses for large language models, including prompting them to turn text into structured data (our “raw material”), also attempted elsewhere. It’s early days, there are lots of opportunities, and the measurable use and value we can derive from this tech is what will ultimately determine how we deploy it.

Good journalism is about people – those who produce it and those who consume it. It’s about the unique work and voices of great reporters, something that can’t be replaced by ChatGPT. It’s about meeting the needs and expectations of readers in a way that differentiates your publication from others. Large language models are not able to work out what your unique product should be.

AI can help improve our work processes, but it cannot produce journalism. 

Let’s not have an identity crisis. 
