Google on Thursday announced new updates to NotebookLM, its AI note-taking and research assistant, that let users get summaries of YouTube videos and audio files, as well as create shareable AI-generated audio discussions. The tool originally launched as a project at last year’s I/O developer conference and expanded to more than 200 countries, including India and the UK, a few months after its public release in the US.
Although NotebookLM initially found its audience among educators and learners, its user base has since shifted significantly, and the tool is now gaining traction with people in work environments as well.
Liza Martin, senior product manager for AI at Google Labs, said in an exclusive interview that the tool’s users are currently split evenly, with educators and learners making up half and business professionals the other half.
“People are sharing notebooks now, creating a network effect,” she told TechCrunch.
That momentum led the NotebookLM team to push for new features in hopes of amplifying those network effects and making the tool popular among different demographics.
Earlier this month, NotebookLM added audio summaries to help users turn documents into engaging audio discussions. The latest update extends that experience by allowing users to share NotebookLM-generated audio summaries via a public URL.
To use the feature, users click the share icon on an audio summary generated by the tool to get a URL, which they can then copy and share with others.
Martin said her team has seen professionals upload web pages, resumes, and even presentations to NotebookLM to generate audio summaries and share them with employers, colleagues, and customers.
In addition to existing support for Google Docs, PDFs, text files, Google Slides, and web pages, NotebookLM also added support for YouTube videos and audio files (such as .mp3 and .wav) as new source types. This new feature helps users summarize key points from YouTube videos and generate takeaways and insights from audio recordings of study sessions and projects.
Image credit: Google
Martin told TechCrunch that Google Labs has a small team working on NotebookLM, which leverages the company’s multimodal large language model Gemini 1.5 Pro, and that any new features the team adds to the tool are based on user feedback.
“The interesting thing about AI tools is that a lot of the assumptions change,” she said. “What may have been useful last year may not be useful this year.”
Google expanded access to NotebookLM to more than 200 countries in June after first launching the service in the US late last year.
Martin told TechCrunch that while the majority of NotebookLM usage remains in the US, Japan is emerging as the next big market for the tool, though she did not provide specific numbers. She also highlighted that some users rely on NotebookLM to get AI-based summaries of documents written in a language other than the one they have set in the tool.
“Especially in Japan, a lot of the documents are not in Japanese, but NotebookLM is set to Japanese,” she said. “So people are running queries in their native language against documents that are probably complex and dense in English.”
Google said the information users upload to NotebookLM is private and not used to train AI models. Users must be 18 or older to access the tool.
Still, NotebookLM faces challenges inherent to AI tools. One is that users who lean too heavily on NotebookLM may lose the habit of reading long-form content and research papers, and the summaries it produces can oversimplify the material.
Martin told TechCrunch that her team is well aware of these concerns.
To address this, NotebookLM provides clickable citations to the user-uploaded content, allowing users to check summarized notes against the original source.
“We recommend reading the original text, and we recommend double-checking any answers you get from NotebookLM,” she said. “You can read the SparkNotes or you can read the actual book. It’s always up to you.”
NotebookLM is currently limited to the web, but Martin hinted that a mobile app could arrive within the next year.
In the meantime, the team is busy adding more new features. These will focus on expanding the supported inputs and adding new forms of output, Martin said.