Some Recent News Items


Short post this week just to share a few recent news items I've been seeing in the generative AI arena and to let you know that I'm still here after a bit of a hiatus. I'll be back next week with a longer, more substantive newsletter post.

Matthew McConaughey and Salesforce have another (newish) advertisement for the same product I wrote about last month. This one was first released as a Super Bowl ad, but it's getting a lot of airplay right now.

In the more recent spot, Mr. McConaughey is making his way through Heathrow airport and experiencing a bit of a mad rush as a result of a gate change. How, exactly, an AI agent is supposed to help in this situation is left unclear. Airports have ubiquitous message boards listing departures, and airlines have apps that communicate things like departure times and gate changes in real time.

Are AI software companies really so bereft of realistic use cases that they have to produce ads like this one, humorous as they may be? If so, shouldn't that tell us something about the utility of their products?

Following up on my post from a few weeks ago decrying the deletion of government data: a team of volunteer archivists launched RestoredCDC.org earlier this week, an archive of the Centers for Disease Control and Prevention website exactly as it appeared before Donald Trump's inauguration.

The site is hosted in Europe and carries disclaimers noting that it doesn't contain information on recent disease outbreaks (it's an archive, after all), that videos are not archived, and that the folder/link structure is not fully intact, which affects indexing and Google searches. The team is working to fully restore those capabilities, though, and is looking for additional volunteers to help with the effort.

A statement on their mission:

The CDC site provides a wide range of information and data. For healthcare providers, there is information about the latest recommendations on diagnosis and treatment. For consumers, there is health information and recommendations. For researchers, there are datasets that provide valuable information. For public health departments, there is information regarding trends in infectious diseases and other information required to be prepared for outbreaks. These are just some examples of the information contained in over 46,000 pages.
In the last few days of January 2025, webpages began to disappear. After several days, many of the pages returned, but some alterations have been noted. Some sites include a disclaimer regarding “gender ideology” which “this Department rejects.” It is unclear if these pages will be updated to remove the language deemed to be offensive by the administration.

It is a hopeful sign for our future that organizations like this exist, capable of and willing to restore some of the data that have been deleted in recent weeks.

Last week, Yoori Hwang and Se-Hoon Jeong at Korea University in Seoul published new research on countering AI hallucination with warning labels.

The authors describe AI hallucination as "statements [generated by AI tools] that are seemingly plausible but factually incorrect ... which can contribute to the generation and dissemination of misinformation." They found that forewarning participants about the possibility of AI hallucination significantly reduced the acceptance of AI-generated misinformation.

A separate but related finding demonstrated that participants who had a preference for "effortful thinking" were even less likely to accept AI-generated misinformation when they were forewarned of the possibility. This finding ties in nicely with another article I wrote about recently.

This area of research suggests that it may be possible to offset some of the negative effects of AI content being wrong by applying disclaimers to the content. That may not be in the immediate interest of the companies developing generative AI products, but it is probably necessary to overcome the perception that AI content is (increasingly) a source of mis- and disinformation.

More to come next week.