At Google I/O, this demo was shown:
In it, Josh Woodward shows how to create an audio podcast using Google NotebookLM. Not only does he demo the podcast, he also shows how to interact with it. I tried to reproduce his results in AI Studio using Gemini 1.5 Pro, which is supposed to be multimodal, but it reported that it was a text-only model. It did generate a transcript, which I used as a source for NotebookLM, but the result did not do what was shown in the demo. I also have no clue how to enable the join/interact capability from the demonstration.
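For context, here is a minimal sketch of how I would expect the audio/multimodal path to work through the API, assuming the google-generativeai Python SDK (the API key, file name, and prompt below are my placeholders, not anything from the demo):

```python
# Sketch: sending an audio file to Gemini 1.5 Pro via the
# google-generativeai Python SDK. File name and prompt are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Upload the audio through the File API so the model can consume it.
audio = genai.upload_file(path="podcast_demo.mp3")  # hypothetical file

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    [audio, "Transcribe this audio and summarize the discussion."]
)
print(response.text)
```

In AI Studio itself, though, the same model told me it was text-only, so I can't tell whether this is a model limitation or something about the UI.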
Here is the NotebookLM setup:
https://notebooklm.google.com/notebook/f64d9c93-bd3d-43ea-8fdd-17d225746895?_gl=1*1anl9fy*_ga*MTkzNDc4MDkyOC4xNzIyNzgxMTc2*_ga_W0LDH41ZCB*MTcyMzk4ODcwOS4yLjAuMTcyMzk4ODcwOS42MC4wLjA.
Here is the AI Studio setup:
https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%5B%221iUIqzTRhvS0Rp4eN3iRjfH2ApQ4vY2ka%22%5D,%22action%22:%22open%22,%22userId%22:%22102084885303228414345%22,%22resourceKeys%22:%7B%7D%7D&usp=sharing
Any advice is appreciated.