

By Marcelo Lewin

Google Live Event about AI and Search from Paris


I watched the Google Live Event live. Below are my notes and my takes on it.


Takeaways

Nothing really new was announced. We already got the Bard announcement two days ago, so this event was really just a recap of that and everything else they are doing with AI inside of Google. To me it felt more like a reactionary event, put on to try to draw some attention away from the Microsoft / OpenAI / Bing announcement. But don't get me wrong, they are doing some really interesting stuff with AI, including:

  • Image with text translation now uses AI to recreate the background instead of blocking it with the translated text.

  • Updates to Google Lens, where they showed an example of receiving an email with an image of a place and then using the integrated Google Lens to search for where on Earth that place is.

  • Multi-search lets you search for both text and images at the same time. For example, you have an image of a red shirt, but you want it in blue, so you type "blue" and show the image. It searches for that shirt in blue.

  • Bard is the competitor to ChatGPT and will be incorporated into Google Search soon.

  • Generative language API will be available next month to developers.

  • New immersive view. It creates a fully immersive model of the world. It offers a time slider to see what a place looks like at different times of the day. You can peek inside buildings. It uses 2D images and AI to recreate the location in fully immersive 3D.

  • Live example of pairing AR with AI in Google Maps to identify coffee shops on a busy street just by pointing the camera at that street and filtering by "coffee shops."

  • Glanceable Directions. If you are walking to a place, you can see the entire route and track where you are in that route at a glance.


Live Notes
  • Event concluded.

  • She's speaking about the different ways AI is helping preserve languages and culture.

  • She's speaking about how AI learned how to sing opera.

  • Marzia Niccolai is now on stage speaking about culture and technology.

  • Glanceable Directions. If you are walking to a place, you can see the entire route and track where you are in that route at a glance.

  • Google Maps will use AI to identify the most efficient route for electric vehicles, including places to charge your vehicle. It can also identify stops with supermarkets where you can shop while you recharge your car.

  • They showed a live example of pairing AR with AI to identify coffee shops on a busy street just by pointing the camera at that street and filtering by "coffee shops."

  • New immersive view. It creates a fully immersive model of the world. It offers a time slider to see what a place looks like at different times of the day. You can peek inside buildings. It uses 2D images and AI to recreate the location in fully immersive 3D.

  • Chris Phillips is on stage to speak about Google Maps.

  • Now he's talking about Responsible AI. "AI at Google: Our Principles"

  • Now he's talking about the Generative Language API. Next month they will start onboarding developers.

  • They are merging generative AI directly into search results. When you ask a question, you'll get the answer at the top, with search results below it.

  • He's speaking about NORA (No One Right Answer)

  • They are combining LaMDA with Search. They opened LaMDA to trusted testers this week.

  • We are seeing an overview of LaMDA and Bard via a chat agent (similar to ChatGPT).

  • He is now speaking about Transformers and neural network architectures for understanding language.

  • Prabhakar Raghavan is back on stage.

  • Multi-search is available in multiple languages.

  • Multi-search lets you search for both text and images at the same time. For example, you have an image of a red shirt, but you want it in blue, so you type "blue" and show the image. It searches for that shirt in blue.

  • You'll be able to use Google Lens to search. They showed an example of receiving an email with an image of a place and then using the integrated Google Lens to search for where on Earth that place is.

  • Liz Reid is now on stage.

  • Image with text translation now uses AI to recreate the background instead of blocking it with the translated text.

  • "The age of visual search is here".

  • "Your camera is the next keyboard." Using your camera, you can search for images or do image recognition for product search.

  • They use zero-shot machine translation in Google Translate. Thanks to that, they were able to add 24 new languages.

  • Now he's showing Google Translate. I think he is doing all of this to show a new "integrated experience" with AI (I'm guessing for now).

  • He is also talking about Google Earth

  • They are working on a new search experience that works more like our minds.

  • He's going over an overview of Google Search today using a mobile app.

  • Prabhakar Raghavan is live on stage.

  • Event starts.
