Addressing thousands of developers at the annual Google I/O conference on Wednesday, Google CEO Sundar Pichai outlined the company’s strategy for shifting from a mobile-first to an AI-first approach built on artificial intelligence and machine learning. The goal is to equip the company’s line of digital assistant products and services to anticipate users’ needs and to comprehend sights and sounds at a scale never before possible.

Google’s deep learning and computer vision capabilities have advanced dramatically, according to Pichai, and now influence everything from cloud computing and Gmail to search and mobile devices.

“We spoke last year about this important shift in computing from mobile first to AI first,” Pichai recalled. “Similarly, in the AI-first world, we’re rethinking all our products and applying AI and machine learning to solve human problems.”

Among the major new rollouts, Google Lens technology will become part of Google Assistant and Google Photos. Lens essentially turns the smartphone camera into a computer-vision tool: users can point it at the barcode on a Wi-Fi router to log on to the network automatically, or point it at a restaurant storefront to pull up contextual information about its cuisine and ratings from Google’s knowledge graph.
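Google has not published how Lens implements this, but a common convention for encoding network credentials in router barcodes is the ZXing-style “WIFI:” QR payload that Android already understands. The following is a minimal sketch, assuming the payload string has already been decoded from the camera image; the parse_wifi_qr helper is hypothetical, and escaping of “;” inside values is omitted for brevity:

```python
def parse_wifi_qr(payload: str) -> dict:
    """Parse a ZXing-style Wi-Fi QR payload such as
    'WIFI:T:WPA;S:CafeGuest;P:espresso42;;'.

    Hypothetical sketch: escaped ';' characters inside
    values are not handled, for brevity.
    """
    if not payload.startswith("WIFI:"):
        raise ValueError("not a Wi-Fi QR payload")
    fields = {}
    for part in payload[len("WIFI:"):].split(";"):
        key, sep, value = part.partition(":")
        if sep:  # skip the empty trailing segments
            fields[key] = value
    return {
        "ssid": fields.get("S"),
        "security": fields.get("T", "nopass"),  # WPA, WEP, or nopass
        "password": fields.get("P"),
    }

print(parse_wifi_qr("WIFI:T:WPA;S:CafeGuest;P:espresso42;;"))
# -> {'ssid': 'CafeGuest', 'security': 'WPA', 'password': 'espresso42'}
```

Once parsed, the operating system’s Wi-Fi APIs would use the SSID, security type, and password to join the network without the user typing anything.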