
AI was everywhere in 2016

Our board games, our phones, our living rooms, our cars and even our doctor’s offices.


At the Four Seasons hotel in Seoul, South Korea, AlphaGo stunned grandmaster Lee Sedol at the complex and highly intuitive game of Go. Google's artificially intelligent system defeated the 18-time world champion four games to one earlier this year. Backed by the company's superior machine-learning techniques, AlphaGo had processed thousands and thousands of moves from previous human games to develop its own ability to think strategically.

The AlphaGo games, watched by millions of viewers on YouTube, revealed the ever-increasing power and progress of AI. This contest between man and machine was not the first of its kind, but this time it was more than just a computer beating a human at a game. AlphaGo not only conquered the complexities of Go but seemed to surpass the grandmaster's intelligence across the board. The unpredictable moves that shocked Sedol (and the world) revealed AlphaGo's ability to think and respond creatively. It is the kind of intelligence that has long been a hallmark of Hollywood's all-powerful versions of AI, but one that had been unattainable for computers in reality.
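For a rough sense of how a system learns from human games, here is a minimal, illustrative sketch of a move-prediction ("policy") network, the supervised-learning ingredient behind systems like AlphaGo. Everything below, the board encoding, the random stand-in data and the layer sizes, is a hypothetical placeholder rather than DeepMind's actual architecture, and the real system also relied on reinforcement learning from self-play and Monte Carlo tree search.

```python
# Illustrative sketch only: a tiny move-prediction ("policy") network in the
# spirit of AlphaGo's supervised stage. Board encoding, data and layer sizes
# are hypothetical placeholders, not DeepMind's architecture.
import numpy as np
import tensorflow as tf

BOARD = 19  # a 19x19 Go board

# Stand-in training data: board positions as three feature planes
# (own stones, opponent stones, empty points) plus the move an expert played.
positions = np.random.rand(1000, BOARD, BOARD, 3).astype("float32")
expert_moves = np.random.randint(0, BOARD * BOARD, size=1000)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu",
                           input_shape=(BOARD, BOARD, 3)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    # One probability per board point: "where would a strong human play next?"
    tf.keras.layers.Dense(BOARD * BOARD, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(positions, expert_moves, epochs=1, batch_size=32)

# Ask the trained network for its most likely move in a new position.
probs = model.predict(positions[:1], verbose=0)[0]
print("Predicted move (row, column):", divmod(int(probs.argmax()), BOARD))
```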

That victory marked a shift in the trajectory of AI this year. The technology that has long been aimed at replicating human intelligence now seems to be paying attention to human patterns and behaviors. Recent advances in deep learning have enabled that kind of insight, but it's not limited to beating humans at games. In 2016, AI broke out of the confines of research labs to transform the way we live, communicate and even conserve the planet. Chatbots popped up in group texts. Personal assistants invaded our homes. Cognitive systems are detecting cancer. Bots are writing movie scripts. And car makers are gearing up to unleash a bevy of autonomous vehicles onto public roads.


Grandmaster Lee Sedol looks on during a match with AlphaGo. Credit: Google via Getty Images.

For a few years now, a cohort of mobile assistants like Siri, Cortana and the new Google Assistant has been getting people in the habit of talking to their devices so they can spend less time swiping screens. But now, these personal assistants are swiftly moving past the basics of reminders and internet searches. They are invading our homes as efficient helpers.

One of the highlights in the talking-devices category this year was Google Home. The voice-activated speaker, designed for personal spaces, joined the ranks of the Amazon Echo. These at-home digital helpers carry the same promise of efficiency as their smartphone counterparts, but they seem to have a different agenda: they aim not only to understand human needs but to predict them, creating an environment of reliance and reciprocity.

That kind of environment, depicted in movies like Her and Iron Man, is essential to the next stage of human-machine interaction, where assistants can turn off the lights in a room for you and, one day, tell you when you're out of diapers for your child. Mark Zuckerberg's Jarvis, a Morgan Freeman-voiced AI helper he recently built for his own house, is a glimpse of the kind of personalized AI that will inhabit the connected homes of the future.

The ability to comprehend humans is integral to AI in all its forms, present and future. With the recent boost in speech-recognition and natural-language-processing techniques, machines are getting closer to understanding humans than ever before. And with companies like Tesla, GM, BMW and Fiat Chrysler all rolling out autonomous vehicles, the ability to communicate with these moving machines will play a pivotal role in making the experience stress-free.

An interior shot of Tesla's self-driving car.

Smart cars promise to reduce both the frequency of road accidents and the casualties they cause. They are also expected to boost mobility for the elderly and people with disabilities. This summer, that promise came under scrutiny after a fatal crash involving Tesla's Autopilot. But a couple of months later, when a Missouri-based lawyer suffered a pulmonary embolism while driving on the freeway, the Autopilot in his Tesla Model X reportedly drove him to the hospital and saved his life.

Just as the narrative started to shift back to the benefits of self-driving cars, Uber launched its semiautonomous fleet of Volvos in Pittsburgh. The ride-sharing company also rolled out its autonomous car service in San Francisco this month, but state regulators swiftly cracked down on the company because it did not have the permits required to operate the cars. This week, Uber pulled back from the city and is now looking to redeploy the vehicles in Arizona.

Uber's antics aside, the enthusiasm around self-driving vehicles has been palpable. Whether AI will actually increase mobility for the people who need autonomous services the most remains to be seen. But one area where AI is already pitching in is medicine.

With troves of raw medical data being gathered through computers and personal devices across the world, doctors are increasingly turning to algorithms and cognitive computing systems for help. Access to that data is transforming the way doctors diagnose diseases, but its sheer volume has made it virtually impossible for any physician to process the information quickly enough for a timely diagnosis.

It's also increasingly hard for a doctor to match the information intake of a computer brain like IBM's Watson, which can absorb virtually every medical journal ever written. In addition to deploying Watson in major hospitals, IBM recently partnered with more than a dozen cancer institutes to train its cognitive system. The exposure will enable Watson to find personalized treatments for patients who have already tried existing therapies without success.

The diagnostic potential of AI also extended to the field of ophthalmology. According to a recent study, Google's deep-learning algorithm was able to detect diabetic retinopathy in retinal photographs. The most common method for spotting signs of diabetic eye disease, which reportedly puts about 415 million people with diabetes at risk worldwide, is to have a doctor examine images of the back of the eye for lesions. The study found that Google's algorithm recognized those lesions about as accurately as the doctors did. While the company points out that a lot more work needs to be done in this area, the initial results suggest that AI assistance could drastically speed up a doctor's diagnoses.
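To make the approach concrete, here is a minimal, illustrative sketch of the general technique: fine-tuning a pretrained convolutional network so it flags likely signs of diabetic retinopathy in retinal (fundus) photographs. The folder name, dataset and training settings below are hypothetical placeholders; the published study relied on a much larger set of physician-graded images.

```python
# Illustrative sketch only: fine-tuning a pretrained convolutional network to
# flag signs of diabetic retinopathy in retinal (fundus) photographs.
# The folder layout, dataset and settings are hypothetical placeholders.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(299, 299, 3))
base.trainable = False  # start by reusing generic ImageNet image features

model = tf.keras.Sequential([
    # Map raw pixel values into the [-1, 1] range Inception expects.
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0,
                              input_shape=(299, 299, 3)),
    base,
    # One sigmoid output: estimated probability of referable retinopathy.
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Hypothetical dataset of graded fundus photos, one subfolder per class
# ("healthy" and "retinopathy"), loaded straight from disk.
train = tf.keras.utils.image_dataset_from_directory(
    "fundus_photos/train", image_size=(299, 299), batch_size=32)
model.fit(train, epochs=1)
```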

This year, AI also pitched in to save the planet. From water conservation in California to saving tuna in Palau, AI was deployed in environmental efforts across the world. OmniEarth, an environmental-analytics company, used Watson to map and classify irrigated and nonirrigated areas in satellite images to improve water conservation in California. IBM's AI was able to process the images 40 times faster than the humans tasked with the job.

The Nature Conservancy, a global nonprofit, also turned to machine learning as it ramped up efforts to monitor fishing activity in the Pacific island nation of Palau. The organization had already equipped a fleet of ships with cameras and GPS devices to hold fisheries accountable for their catch. But last month, it launched a competition to find an algorithm that can speed up the process of identifying the sharks, tuna or turtles that might be brought aboard those ships.

The presence of AI was felt, and needed, in personal spaces and in the far-off reaches of the Earth alike. But it was not entirely unexpected. The preoccupation with making computers think like humans has been evident for decades, and 2016 was as much a culmination of those efforts as an indication of things to come.

Despite the constant debate around the dangers of AI, machines grow more capable of humanlike thought with every new development. And that intelligence is no longer limited to personal assistants or medical applications. A technology built on human culture is bound to explore avenues of creativity.

Benjamin, a self-improving neural network, wrote its own short sci-fi film in June. The AI, a recurrent network built to learn the patterns in text, was fed human-written screenplays so it could learn to write a script of its own. The resulting film, titled Sunspring, turned out to be an incoherent mess, but the script reportedly picked up on the repetitions and patterns of human writing.
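For the curious, here is a minimal, illustrative sketch of how a script-writing network of this kind can work: a character-level LSTM that reads a corpus of screenplays and learns to predict, then sample, one character at a time. The corpus file and every setting below are hypothetical placeholders, not the actual Benjamin project.

```python
# Illustrative sketch only: a character-level LSTM that learns to continue
# screenplay-like text one character at a time. "screenplays.txt" and all
# settings here are hypothetical placeholders, not the real Benjamin model.
import numpy as np
import tensorflow as tf

text = open("screenplays.txt").read().lower()       # hypothetical corpus
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}

SEQ = 40  # characters of context used to predict the next one
X = np.array([[idx[c] for c in text[i:i + SEQ]]
              for i in range(0, len(text) - SEQ, 3)])
y = np.array([idx[text[i + SEQ]] for i in range(0, len(text) - SEQ, 3)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=1, batch_size=128)

# Generate new "screenplay" text by repeatedly sampling the model's output.
generated = text[:SEQ]
for _ in range(200):
    context = np.array([[idx[c] for c in generated[-SEQ:]]])
    probs = model.predict(context, verbose=0)[0].astype("float64")
    probs /= probs.sum()  # renormalize so the probabilities sum to exactly 1
    generated += chars[int(np.random.choice(len(chars), p=probs))]
print(generated)
```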

The scriptwriting AI might not be ready for the film circuit, but it seemed to follow in the footsteps of AlphaGo, whose stunning victory in Korea had already revealed AI's capacity for creative intelligence. While none of these machines have made their mark as filmmakers or musicians yet, it's not for lack of trying.

Check out all of Engadget's year-in-review coverage right here.