
How can we teach machines to be truly intelligent? What does it even mean to be intelligent? And where do the limits of machine intelligence lie? Join me on the quest for general AI with researcher Nick Hay.


When we agree to talk, I don’t know much about Nick Hay. He is a researcher at Vicarious AI. He is from New Zealand. Hard facts, but nothing that could give me a sense of what this person is like.

But then we start talking and even through the limited video quality, I immediately know one thing: this man is passionate about his job. Nick’s excitement when talking about AI is contagious. His eyes light up, he throws concepts and ideas at me at machine-gun speed, and before I know it, we’re exploring the depths of theoretical computer intelligence.

The Road to Vicarious AI

After finishing his degree in math and computer science in New Zealand, Nick went to the United States for graduate school. “I came [to the US] in 2008 and I took eight years to figure out what my thesis would be and then finish it.” During his studies, he met Scott Phoenix and Dileep George, the founders of Vicarious, at a conference on AI safety in Puerto Rico. During a conversation, they asked a question that sparked Nick’s interest. “What is a concept and how would you define it?” It was a simple, abstract question, and it was exactly the kind of question Nick had always been fascinated by.

Vicarious AI is one of the best-funded and most secretive startups in Silicon Valley. Backed by some of the most famous names in tech, the AI research company is “developing artificial general intelligence for robots”, according to their website.

Their statement continues: “Our architecture trains faster, adapts more readily, and generalizes more broadly than AI approaches commonly used today.” Instead of focusing on solutions for narrow, specific tasks, the company tries to teach its algorithms to generalize and actually understand a problem.

Even with the scant information that is publicly available, Vicarious seems like the perfect workplace for Nick. He thought the same and joined the company soon after finishing his Ph.D.


Transferring between Knowledge Domains

On our journey through the realm of AI concepts towards forming an idea of what intelligence might be, we touch on the topic of transfer learning. A neural net might be brilliant at recognizing handwritten digits, but the same network would fail horribly if it had to process natural language.

“We have huge amounts of data and computing power to solve a problem,” Nick says. “But we’ve got this other problem that now needs a whole different data set and amount of effort.”

But how could a machine possibly use information learned from one task and apply it successfully to another? What seems so simple to us humans is almost impossible for a computer. Such a transfer of knowledge would require some sort of mapping between the two domains of knowledge.

Or would it? “Well, we [can] look at humans and how we do things,” Nick explains. He tells me about efforts to formalize how humans use metaphors. The good old writer’s tool, the metaphor, is a prime example of transfer learning.


Teaching Computers to Love?

Take the metaphor “Love is a journey”. If we have ever traveled anywhere, we have an idea of what a journey is. Applying that knowledge to a new domain — love — gives us an understanding of what love might be like. “We can understand abstract domains through our understanding of concrete domains in a metaphorical way,” Nick says.


Formalizing this transition of meaning between domains might eventually allow machines to do the same. Nick tells me: “We could metaphorically take knowledge [from] one domain and map [it to] another domain.”
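
To make the idea a little more concrete, here is a toy sketch of what carrying relational knowledge from one domain to another could look like in code. The facts and the journey-to-love correspondence table are invented for this illustration; real analogy and structure-mapping systems (and whatever Vicarious builds) are far more sophisticated.

```python
# Toy sketch: mapping relational knowledge from a concrete domain (journeys)
# onto an abstract one (love). All facts and correspondences below are made up
# purely for illustration.

journey_facts = [
    ("travelers", "move along", "path"),
    ("obstacles", "block", "path"),
    ("travelers", "reach", "destination"),
]

# A hypothetical correspondence between the two domains.
journey_to_love = {
    "travelers": "lovers",
    "path": "relationship",
    "obstacles": "difficulties",
    "destination": "shared goals",
}

def map_facts(facts, correspondence):
    """Carry each relational fact across domains, keeping unmapped terms as-is."""
    return [tuple(correspondence.get(term, term) for term in fact) for fact in facts]

for fact in map_facts(journey_facts, journey_to_love):
    print(fact)
# ('lovers', 'move along', 'relationship')
# ('difficulties', 'block', 'relationship')
# ('lovers', 'reach', 'shared goals')
```

The interesting part is not the trivial lookup but the fact that the relational structure, such as “obstacles block the path”, survives the translation into the new domain.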

But in order to teach a machine how to do such a transfer of knowledge, we first need to find a way to tell the machine about those domains. To stick with the example above, the machine has to be familiar with the concept of a journey.


Solomonoff and AIXI

Nick knows this all too well. In his master’s thesis, he examined Solomonoff induction, a mathematical theory about making predictions based on previous observations. To make such predictions, a formal description of these observations is needed. That might be easy when the task is to guess the next letter in a sequence, such as the letters of a word. But when it comes to describing abstract ideas like that of a journey, the complexity quickly skyrockets.
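
A heavily simplified sketch of the letter-guessing end of this idea: instead of all computable programs, assume a tiny hypothesis class of repeating patterns, give each one a prior weight of 2 to the power of minus its length (shorter descriptions count for more, echoing Solomonoff’s prior), and predict by letting every hypothesis still consistent with the observations vote. This is only a finite toy, not Solomonoff induction proper.

```python
from itertools import product
from fractions import Fraction
from collections import defaultdict

ALPHABET = "ab"

def hypotheses(max_period=3):
    """A tiny stand-in for 'all programs': every repeating pattern up to a given
    length, weighted by 2^-(pattern length) so shorter descriptions count more."""
    for period in range(1, max_period + 1):
        for pattern in product(ALPHABET, repeat=period):
            yield "".join(pattern), Fraction(1, 2 ** period)

def predict_next(history, max_period=3):
    """Keep only hypotheses consistent with the history, then combine their
    next-symbol predictions, weighted by the prior."""
    votes = defaultdict(Fraction)
    for pattern, weight in hypotheses(max_period):
        generated = (pattern * (len(history) // len(pattern) + 2))[: len(history) + 1]
        if generated[: len(history)] == history:
            votes[generated[len(history)]] += weight
    total = sum(votes.values())
    return {symbol: float(w / total) for symbol, w in votes.items()} if total else {}

print(predict_next("ab"))  # {'a': 0.75, 'b': 0.25}: the short pattern "ab" dominates
```

Real Solomonoff induction sums over every computable hypothesis, which is exactly why it is uncomputable in practice; the toy only keeps the flavor of “prefer shorter explanations”.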

Nonetheless, Solomonoff induction is one of the best bets for describing the world to machines. “Marcus Hutter has developed this model called AIXI,” Nick explains. AIXI uses Solomonoff induction to build a mathematical model of complex environments. “It’s trying to predict its inputs [by assuming that] there is some computable relation between the effect of its actions and the observations that come in,” he says.

Concepts like Solomonoff induction and AIXI may allow machines to grasp an environment by observing actions and reactions. They make predictions about outcomes based on events, and then adjust as they see how their predictions differ from what really happened.


Pac-Man vs. the Real World


Such strategies have been successfully employed to teach a computer to play Pac-Man. But a big yellow ball munching on pellets is remarkably different from living in the real world. Could a computer really learn to manage the complexities of our lives just by observing them? Nick is not so sure.
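
The actual Pac-Man results rely on far more elaborate machinery, but the underlying learn-from-reward loop can be sketched with a much simpler, swapped-in technique: tabular Q-learning on a made-up corridor world. Everything below is a minimal illustration, not the AIXI-style agents discussed above and not how Vicarious works.

```python
import random

# Minimal tabular Q-learning on a toy corridor "game": reach the rightmost
# cell to earn a reward. A deliberately simple stand-in for richer setups.

N_STATES = 6              # cells 0..5; cell 5 is the goal and ends the episode
ACTIONS = (-1, +1)        # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move, stop at the walls, reward only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(500):
    state = random.randrange(N_STATES - 1)   # start somewhere short of the goal
    done = False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, occasionally explore.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        # Nudge the estimate toward the observed reward plus discounted future value.
        target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt

# After training, the greedy move from every non-goal cell should be +1 (right).
print({s: greedy(s) for s in range(N_STATES - 1)})
```

A toy world like this converges in seconds. The question Nick raises is whether anything remotely like it can scale to the messiness of everyday life.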

He reminds me how humans have developed knowledge over the course of millennia. “We’re the result of tens of thousands of years of figuring-things-out,” he says. “People started eating random stuff and dying and then the people who survived remembered. That’s not something they could have figured out.”

Nick argues that maybe it’s not enough just to let a computer learn about the world by itself. “It’s an interesting question: how much can we actually expect a machine [to] figure out on its own?” Maybe, he tells me, there is just no algorithmic solution: “Maybe we just have to effectively do 1,000 years of work.”


Taking Baby Steps

An option would be to “raise” machines much as we raise children and teach them all our hard-won knowledge about the world. “I don’t think that we would want to create the robo-baby that goes through all the same stages [that] humans do,” Nick objects. “It’s not really practical.”

But the general idea might not be that far off. Babies learn about abstract concepts from concrete examples. “[Observing concrete examples] forms the foundation for all the other — more abstract — forms of intelligence,” Nick says. “Picture human development,” he explains: “a baby start[ing to] stack up blocks,” learning motor skills and balance along the way while developing a feel for Newtonian physics.


What it Means to be Intelligent

“Coming at the abstract from the concrete is not just a weird quirk of how humans are,” Nick believes. Instead, it might be something deeply embedded in intelligence itself: a first clue toward answering the question he set out to solve.

AI still has a long way to go before it can look at the world and interact with it. Nick admits as much: “There are some really easy things that everyone can do that are just super hard for machines.”

But visionaries like Nick Hay do their part to create the architecture for the next generation of AI. If they just stack enough blocks on top of each other and watch them fall to the ground, maybe Nick and his fellow AI researchers can one day solve the riddle of what it really means to be intelligent.

Interested in learning more about General AI? Check out this fascinating paper by Nick and his colleagues at Vicarious outlining a new way for AI to learn behaviors.