Apple suspends its controversial AI news service after proving that even tech giants can't control the bots

[Image: The Apple logo on a grey background. Image credit: Armand Valendez]

Apple has suspended a new artificial intelligence (AI) feature that summarized its news headlines, after a slew of complaints about repeated factual errors.

The tech giant – which recently pushed back against pressure to scrap its Diversity, Equity and Inclusion programs – has responded to calls to withdraw the service, which sent users news notifications with inaccurate headlines that appeared to come from the news organizations themselves, using the outlets' logos and names.

The BBC first complained to Apple in December; when the company eventually replied, it promised a software update and clarification on its use of AI in creating the summaries – which are optional and only available on newer iPhone models.

"We are working on improvements and will make them available in a future software update," an Apple spokesperson told the BBC.

The BBC was among several organizations to complain after an alert generated by Apple’s AI falsely told users that Luigi Mangione – the man accused of murdering UnitedHealthcare CEO Brian Thompson – had shot himself.

The feature also inaccurately altered headlines from Sky News, the New York Times, and the Washington Post, according to reports from journalists on social media.

Journalism organization Reporters Without Borders has said in a statement that the situation highlights the danger in rushing out news features, adding that “innovation must never come at the expense of the right of citizens to receive reliable information.”

On one level there is something amusing about these mistakes. For example, anyone who decided to watch the Russell Crowe epic Gladiator on BBC iPlayer over Christmas was treated to the subtitles from Aardman’s Chicken Run.

However, when it comes to current affairs, AI-generated misinformation poses a more sinister problem – one that risks further eroding trust in the mainstream media.

'Hallucinations' – instances where AI models make things up – are a “real concern”, according to Jonathan Bright, head of AI for public services at the Alan Turing Institute. “And as yet firms don’t have a way of systematically guaranteeing that AI models will never hallucinate, apart from human oversight.”

"With the latest beta software releases of iOS 18.3, iPadOS 18.3, and macOS Sequoia 15.3, Notification summaries for the News & Entertainment category will be temporarily unavailable," an Apple spokesperson said.

Despite Apple withdrawing the inaccurate feature, it speaks worrying volumes that an industry leader, with all its billions of dollars and expertise, still can’t control the AI it releases to its consumers.

Leonie Helm
Staff Writer

After graduating from Cardiff University with a Master's degree in Journalism, Media and Communications, Leonie developed a love of photography after taking a year out to travel around the world. 

While visiting countries such as Mongolia, Kazakhstan, Bangladesh and Ukraine with her trusty Nikon, Leonie learned how to capture the beauty of these inspiring places, and her photography has accompanied her various freelance travel features. 

As well as travel photography, Leonie has a passion for wildlife photography, both in the UK and abroad.