Google Assistant

How Google evolved to use AI and create Google Assistant

Introduction

Artificial intelligence is no longer a sci-fi fantasy. Its applications are more real than ever in this digital age. One of the best examples of how seamlessly AI has woven itself into our day-to-day lives is the Google Assistant.

The Beginning

Google started as a search algorithm called BackRub, developed by Larry Page and Sergey Brin in 1996. Renamed Google after googol [the number 10¹⁰⁰], the company was built around its search engine. Over the years Google has launched several products, but the first hint of AI came after the launch of Google Chrome in 2008.

2010

Long before most other companies were even thinking about AI, Google was using it to refine search results. In an interview, Eric Schmidt, then CEO, said, “I actually think most people don’t want Google to answer their questions. They want Google to tell them what they should be doing next.”

2011

This led to the creation of Google Voice Search. While this seems blasé in 2019, back then, being able to give commands to your phone and have it follow them was groundbreaking. Thus began a chain of dominoes that led to the creation of the ever-evolving Google Assistant.

2012

Imagine Siri being unveiled for the first time: there is awe and excitement, and in comes Google Now. With not one or two but 18 new input languages and improved accessibility for those with specific needs [external Braille input], Google Now was becoming more personalized. In a nutshell, a device with Google Now could and would learn things about its user to offer more relevant information, with responses spoken out loud.

In parallel, Gmail integration began. This meant the software could track users’ search histories, calendars and more to stay up to date on what they might want. Here began the concept of showing information cards such as news, updates, appointment reminders, package tracking, boarding information, directions, local recommendations and more.

It was around this time that Google began seriously looking into deep learning, relying on unlabeled data to train artificial neural networks that loosely simulate neuronal learning processes. Until that point, ML had typically been done at a scale of 1–10 million connections. Google trained much larger networks, on a scale of more than 1 billion connections, hoping for more accuracy.
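To make the idea of learning from unlabeled data concrete, here is a minimal sketch of unsupervised feature learning: a tiny autoencoder that learns to reconstruct its inputs without any labels. It is illustrative only; the layer sizes, learning rate and random “data” are assumptions, not Google’s actual system.

```python
# Minimal sketch: unsupervised feature learning with a one-hidden-layer
# autoencoder trained on unlabeled data (illustrative only; layer sizes,
# learning rate, and the random "data" are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))      # 1,000 unlabeled examples, 64 features each

n_hidden = 16
W_enc = rng.standard_normal((64, n_hidden)) * 0.1
W_dec = rng.standard_normal((n_hidden, 64)) * 0.1
lr = 0.01

for epoch in range(50):
    H = np.tanh(X @ W_enc)               # encode: learned features
    X_hat = H @ W_dec                    # decode: reconstruction
    err = X_hat - X                      # reconstruction error (no labels needed)

    # Backpropagate the squared-error loss through decoder and encoder.
    grad_dec = H.T @ err / len(X)
    grad_H = err @ W_dec.T * (1 - H**2)  # derivative of tanh
    grad_enc = X.T @ grad_H / len(X)

    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("final reconstruction MSE:", float((err**2).mean()))
```

The point of the sketch is simply that the network improves by predicting its own input, which is what makes unlabeled data usable at very large scale.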

This scaled-up training led to the biggest breakthrough yet: teaching object recognition to a collection of synthetic, silicon-based neurons, and it let Google Now connect to every Google backend and every web service the company had developed over the previous ten years.

This was a baby step in the direction of a more autonomous, evolving Google Assistant.

2013

This year saw voice search’s ability to understand spoken queries [natural language processing] being refined, and the underlying neural networks becoming more seamless. In many ways Google cemented its position as an innovator with Google Glass and Chromecast.

2014

Google bought DeepMind for $400 million. It could now draw on the DNC [Differentiable Neural Computer], which uses an external memory to represent and manipulate complex data structures, and can learn to do so from data.
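To give a flavor of what “using memory” means here, the sketch below shows content-based addressing, the core read operation in memory-augmented networks such as the DNC: a query key is compared against every memory slot by cosine similarity, and the read result is a softmax-weighted blend of slots. The sizes and random contents are purely illustrative, not DeepMind’s published implementation.

```python
# Simplified sketch of content-based memory addressing, the kind of read
# mechanism used in memory-augmented networks such as the DNC
# (sizes and the random memory contents are purely illustrative).
import numpy as np

rng = np.random.default_rng(1)
memory = rng.standard_normal((128, 32))  # 128 memory slots, each a 32-dim vector
key = rng.standard_normal(32)            # query key emitted by the controller
beta = 5.0                               # sharpness of the attention

# Cosine similarity between the key and every memory slot.
sims = memory @ key / (
    np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
)

# Softmax over similarities gives the read weighting.
w = np.exp(beta * sims)
w /= w.sum()

read_vector = w @ memory                 # weighted sum of memory slots
print("read vector shape:", read_vector.shape)
```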

2015

Google developed a novel algorithm called the deep Q-network (DQN), which worked straight “out of the box” across different games, with only the raw screen pixels, the set of available actions and the game score as input. DQN outperformed previous ML methods in 43 of 49 games.
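As a hedged illustration of the idea behind DQN (not DeepMind’s actual implementation, which uses a convolutional network over pixels, experience replay and a target network), the toy sketch below shows the underlying Q-learning update on a tiny, made-up environment.

```python
# Toy illustration of the Q-learning update that DQN approximates with a
# deep network over raw pixels (the tiny tabular environment here is
# hypothetical; real DQN also uses replay memory and a target network).
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99                 # learning rate, discount factor

rng = np.random.default_rng(2)
state = 0
for step in range(1000):
    # Epsilon-greedy action selection from the current Q estimates.
    action = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[state].argmax())

    # Hypothetical environment: random next state, reward only in the last state.
    next_state = rng.integers(n_states)
    reward = 1.0 if next_state == n_states - 1 else 0.0

    # Q-learning target: reward plus discounted value of the best next action.
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])
    state = next_state

print(np.round(Q, 2))
```

The same update rule, with the table replaced by a neural network reading game pixels, is what lets one algorithm play many different games without game-specific tuning.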

2016

AI was starting to spread like ink on blotting paper. Google Allo was the stage for the debut of the first version of Google Assistant. Later in the year, Google Home was launched, and it could be integrated with your phone. The initial focus was on speed, latency and responsiveness; ML was incorporated later.

Google worked towards making it more intuitive and more personable, though this was a less sleek version of the Google Assistant. In parallel, DeepMind’s AlphaGo, a program trained with deep reinforcement learning, beat the world’s leading players in the board game Go.

2017

Google now famously changed its rallying cry from ‘Mobile first’ to ‘AI first’. Google Assistant became a standalone feature on the Pixel and Google Home, and it was adapted with added functionality to control other devices.

2018

As of May 2018, more than 5,000 smart home devices could be controlled through Google Assistant, and the Assistant itself was available on more than 400 million devices. Google went on to launch Duplex, capable of carrying out natural-sounding conversations after being trained deeply within specific domains.

With Duplex being naturally conversational, Google Assistant was going global. With better features and more seamless integration, the Assistant was evolving.

2019

Your home is now smart and connected. Google Assistant is being integrated into Maps and offered with accessories that make your car safer. The Assistant is now capable of helping you stick to your New Year’s resolutions, maintain a work-life balance, keep a mindful digital presence, act as your interpreter, manage your travel, and more.

 

It’s been quite a journey for Google

In this journey of creating an AI-driven digital assistant for consumers, Google has managed challenges that include finding the right people, providing adequate resources, and getting users to actually adopt and use the assistant. Google has effectively created a tangible, pocket-sized, silicon-based butler that can only evolve from here.

Consumers have come far

Today, consumers use Google Assistant in their everyday lives for everything from quickly scheduling a meeting to pulling up the best travel plans for their next trip. They are doing things that, nearly ten years ago, they could only have imagined in a sci-fi movie. In one of the latest updates, Google has unveiled a new capability of its voice assistant to make a human-like call on your behalf to schedule appointments, and that is truly mind-boggling.

AI Capabilities can change the world

New AI capabilities like these are challenging people, processes, and institutions to transform and evolve with them. Across domains, AI products are doing and supporting more than people once thought possible. More people and industries are being exposed to AI-driven methods, discovering new efficiencies and creating new, personalized experiences.

Addressing Key Challenges before the Collections Industry Today