Life is filled with things you have no time to read. So what if artificial intelligence could read them for you and ... wait for it ... turn them into podcasts?
I recently took Facebook's 99-page privacy policy and uploaded it to an experimental AI website from Google called NotebookLM. Minutes later, out popped a summary and a 7½-minute podcast about Meta's creepy data practices, complete with the soothing banter of one male and one female AI-generated host.
I was gobsmacked. At one point, the male AI host says it was shopping for hiking boots and shortly after noticed Facebook started showing ads for boots. "I thought I was going crazy," the AI says. (That's a fake person talking about a real thing that happens.)
After the wonder of hearing an AI chat with itself faded, I saw something bigger: a new way for people to learn and do research about all sorts of things, from taking the legalese out of legal briefs to digesting a dense reading assignment for school. It's CliffsNotes, but democratized to any book, article, interview, webpage, chart, video or notes you upload.
That's fun and could be empowering -- if, as with so many other new uses for AI, we also acknowledge its shortcomings.
NotebookLM, which is free to use, is one of the first viral hits for Google's many efforts to turn its Gemini AI into practical products that help normal people. With very little effort on your part, it takes up to 50 documents and turns them into a personal "notebook" knowledge base. You can read its summary of your material, ask it questions like a chatbot or, most creatively, make a podcast.
This is a fundamentally different experience than asking ChatGPT questions about the world. "It's like if you could talk with your notebook," says Raiza Martin, the Google product manager for NotebookLM, in an interview.
The audio part, which debuted recently, is what's capturing imaginations. It apes the familiar style of a "deep dive" podcast about whatever you've uploaded, complete with stammers and interruptions that make it approximate a human conversation.
"It opens the possibility for podcasts in places the market would never support a podcast," says Steven Johnson, the editorial director of Google Labs, who is also a popular science author. "You might turn your homework into a podcast so you can listen to it at the gym. Or take city council meetings and share them with the public as a podcast where there would never be the budget or recording studio to do that."
Creative people have used NotebookLM to create "Histories of Mysteries" podcasts out of Wikipedia pages, and to produce little career pep talks out of résumés. One person got the AI podcast hosts to have an existential meltdown where they realize they're not real. Another got the AI to make a podcast about the words "poop" and "fart" written 1,000 times.
The more I experimented with NotebookLM, the more mundane the AI voices themselves became. And that began to reveal what I think are two important questions to weigh: Should we trust it? And what does it do to how we learn?
Succinct -- or just shallow?
AI technology has a well-documented problem with making things up. So what's different here?
"We tried to build this system from the beginning to be as trustworthy as possible. And the models themselves have gotten extremely good at sticking to the source material," says Johnson. (Google wouldn't share exact failure rates, and NotebookLM's website still cautions that it may "sometimes give inaccurate responses.")
Unlike a general-purpose chatbot, says Johnson, NotebookLM is supposed to only use the information you upload to make the core substance of its content. In summaries and chat conversations, it also includes citations to your documents so you can go back and read the original passages.
Still, NotebookLM can go off in its own weird directions. Sarah Eaton, a professor at the University of Calgary who tracks the impact of technology on education, tried uploading an academic journal where she'd redacted parts of the pages that weren't related to the article she wanted summarized. But the AI became "hyperfocused on the redacted text and wanted to know why -- like it was a government secret file," she tells me.
So how'd NotebookLM do on the Facebook privacy policy?
It was more critical than I would have expected, taking the point of view of a skeptical Facebook user. That didn't come from me -- the AI decides its own focus in each podcast and summary.
But as someone who's written 15 years' worth of articles about privacy on Facebook, what the AI chose to highlight sometimes left me scratching my head. For example, just before 4 minutes in, the podcast takes a detour into the Meta Oversight Board, which makes moderation decisions. This is mentioned in the privacy policy, but isn't nearly as important to your privacy as lots of other things in the policy, like how Meta uses your data to train its artificial intelligence.
"It errs on the side of generalizations," says Martin. "It tries to make a lot of analogies that may or may not be appropriate, depending on the gravitas of the source material that you give it."
Wrong emphasis or missed nuance, rather than outright wrong facts, came up frequently in my tests. NotebookLM did the worst on a 22-minute podcast it made about the recent vice-presidential debate. For example, NotebookLM said it was a "pretty risky move" for Gov. Tim Walz to "call out Trump for trying to overturn the 2020 election results."
Brown University computer science professor Shriram Krishnamurthi posted on X that he and some co-authors graded NotebookLM's summaries of their academic papers. They mostly gave the podcasts a "C" grade because the AI didn't know what to focus on.
Perhaps any summary, especially in podcast form, is destined to be shallow. And that's OK, especially if it's just one place to start exploring information.
But the question is, will people -- especially students -- treat it like just a start?
Can AI really do the reading for you?
As I tested NotebookLM, I kept thinking about a recent article in the Atlantic that claims even elite college students are having a hard time finishing a book.
Since the arrival of ChatGPT nearly two years ago, educators have been ringing alarm bells about students using AI to write their assignments for them. So how do they feel now that AI can also do the reading?
Google's Johnson says he uploaded the entire contents of a book he'd written to NotebookLM and was surprised how much value he could get out of the AI experience. "It's another way into the book that was never really possible unless you could find the author and have a conversation," he says.
Yet his colleague Martin warns: "There is no replacement for reading the actual thing."
Among the educators I've spoken to, there are concerns about accuracy, privacy and the potential to create new inequalities between students who have access to AI and those who don't. But there's also cautious optimism.
Christian Moriarty, a professor of ethics and law at St. Petersburg College in Florida, says the technology could help make information more engaging for students, particularly those who prefer audio learning. "But we have to always make sure that we're not obviating critical thinking," he says.
Eaton, from the University of Calgary, says AI summaries -- just like watching the movie version of a piece of classic literature -- can help people find another way to wrap their heads around complicated material. "I don't think that it throws reading out the window. If we can help students get the gist of things, then we can still go back to the original text," she says.
The key for the future will be teaching students not to become dependent on AI, and to question or verify what it has to say.
"I'm glad I learned to read entire books in high school and college, but I'm also glad that I have access to these tools that make me move faster as a professional now," she says.