...

When George Williams talks about his work, the language is stark: he speaks of staying a step ahead of “the enemy,” of putting yourself in the shoes of “the adversary.” The war Williams is helping to fight isn’t physical, though; as chief data scientist at Capsule8, a cybersecurity startup, his field of battle is digital.


The world of cybersecurity is a rapidly changing one, with the recent shift to containerized, cloud-based systems opening up new vulnerabilities. Capsule8 is part of a new generation of startups taking on these challenges, designing security solutions for fast-moving containerized systems where traditional approaches like IP tracking are ineffective.

Artificial intelligence techniques play a role in Capsule8’s products, helping to identify threats while minimizing false positives. In general, Williams said, data science is opening up new frontiers in cybersecurity.


AI Comes to Cybersecurity

“We’re moving into a new age in cybersecurity: the age of data,” Williams said. “I like to tell people that there are three aspects: the good, the bad, and the ugly.”

The good: AI can be applied in numerous ways to combat cybersecurity threats, from ransomware to distributed denial-of-service attacks. In many cybersecurity applications, telemetry embedded within the protected systems produces a huge amount of real-time data. That volume makes the data difficult for humans to analyze in real time but a natural fit for machine learning applications.
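
As a concrete (and entirely hypothetical) illustration of that fit, an off-the-shelf anomaly detector such as scikit-learn's IsolationForest can flag outliers in volumes of telemetry no human team could inspect by hand. The feature names and numbers below are invented for the sketch; this is not Capsule8's product code.

```python
# Illustrative sketch only: unsupervised anomaly detection over invented
# telemetry features using scikit-learn's IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Stand-in telemetry rows: [syscalls/sec, bytes sent/sec, open file handles]
normal = rng.normal(loc=[200.0, 5e4, 40.0], scale=[20.0, 5e3, 5.0],
                    size=(5000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A hypothetical burst of activity that looks nothing like the baseline.
suspicious = np.array([[950.0, 9e6, 300.0]])
print(detector.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```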

The bad: “AI is not a silver bullet for cybersecurity; it’s just another weapon in your arsenal,” Williams said. In fact, the rapid progress of machine learning actually brings new risks, since the technology can be tricked and exploited by attackers.

“The adoption has been so fast that I don’t think we really grasp the vulnerabilities,” Williams said.

The ugly: Advances in AI are, of course, also available for use by attackers. A high proportion of AI research is open-source. While open-sourcing has proved to be a valuable tool to communicate new ideas within the research community, Williams said, it also makes cutting-edge AI techniques available to the enemy.

As examples, Williams pointed to the development of AI models that can produce speech mimicking someone’s voice, or create edited images that appear real.

“Actually it’s kind of cool, but what’s scary is that a lot of these things don’t require advanced skills,” Williams said. “They’re something that a novice can do on a laptop.”

Many of these applications rely on increasingly popular generative adversarial networks (GANs), Williams said. Inspired by game theory, GANs pit two neural networks — a content generator and a discriminator — against each other in training, resulting in increasingly effective classification and generation networks. GANs can produce more convincing generated content than traditional techniques; for example, high-resolution facial images of people who never existed.
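
To make the generator-versus-discriminator dynamic concrete, here is a deliberately tiny GAN sketch in PyTorch. It learns to mimic a one-dimensional Gaussian rather than faces, and every architecture and hyperparameter choice is an assumption made for illustration, not anything from Williams or Capsule8.

```python
# Minimal GAN sketch: the generator G learns to produce samples that the
# discriminator D cannot distinguish from a target Gaussian distribution.
import torch
import torch.nn as nn

torch.manual_seed(0)
real_data = lambda n: torch.randn(n, 1) * 1.5 + 4.0  # "real" distribution
noise = lambda n: torch.randn(n, 8)                  # generator input

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    # Discriminator step: push real samples toward 1, generated toward 0.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the discriminator into outputting 1 for fakes.
    loss_g = bce(D(G(noise(64))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(noise(1000)).mean().item())  # should drift toward the real mean, 4.0
```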

GANs have many positive potential applications, of course, but Williams said he’s keeping his eye on them.

“They’re also an avenue for those who want to do harm,” he said.

Williams mentioned possible ramifications for identification and social hacking in particular.

“We place so much trust in [our] ability to recognize other people,” he said.

In the field of cybersecurity, adversarial techniques are not limited to model design, Williams said. Throughout cybersecurity development, thinking from the point of view of an attacker is ubiquitous.

“With cybersecurity this is really nothing new,” Williams said. “Adversarial thinking is just part of our culture.”

Williams noted that there are also ways to automate adversarial testing for cybersecurity applications. For solutions based on machine learning, he said that it’s important to think about not only malicious adversaries but also unanticipated situations that could trick a model.
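
One well-known way to automate such probing is the fast gradient sign method (FGSM), sketched below in PyTorch. The toy model and random input are invented stand-ins, not anything from Capsule8; the point is only to show how a small, targeted perturbation can flip a classifier's decision.

```python
# FGSM sketch: nudge an input in the direction that most increases the
# model's loss, then check whether the prediction flips.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_probe(x, label, epsilon=0.25):
    """Return a copy of x perturbed to increase the classification loss."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), label).backward()
    return (x + epsilon * x.grad.sign()).detach()

x = torch.randn(1, 20)          # stand-in for a telemetry feature vector
label = model(x).argmax(dim=1)  # the model's current decision
x_adv = fgsm_probe(x, label)
print(model(x).argmax().item(), "->", model(x_adv).argmax().item())
```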

As AI is increasingly used in contexts where lives are at stake, such as autonomous vehicle control systems, Williams said this sort of testing is becoming all the more critical.

“Before, it was a puzzle; it was fun,” Williams said. “Now, it’s real.”

While some are searching for ways to exploit AI advances, Williams said the good news is that there are many people on the other side: employees at large companies looking for vulnerabilities, ethical hackers, and those like him in a new wave of cybersecurity startups.

“Like any advance in technology, [AI is] somewhat amoral, in the sense that you could use it for good or you could use it for bad,” Williams said.


One Step Ahead

Key to Williams’ job is staying on “the bleeding edge” of AI and cybersecurity research. One way his team does this is through an internship program that bridges the gap with academia and brings in fresh thinking.

Staying abreast of new developments isn’t enough, however; Williams and his colleagues also must react rapidly to emergent events in the field. When the Meltdown and Spectre vulnerabilities became public knowledge, for instance, Capsule8 released an open-source detection solution.

For a data scientist, Williams said, developing solutions follows a common path: figuring out what type of telemetry to collect, exploring the data, validating it statistically, testing for robustness, and finally deploying to production.

Around 80% of the time is typically spent on data collection and feature engineering, according to Williams — “just sitting on your Jupyter notebook looking at data.” Since this feature engineering or “data wrangling” is so time-consuming and expensive, Williams said he’s interested in technologies that might aid in the process.
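
For a flavor of that wrangling step, the sketch below turns a few invented security events into candidate model features with pandas. The event schema is made up for illustration; real telemetry pipelines are far richer.

```python
# Toy "data wrangling": derive per-host rate and behavior features from
# raw event records (all values invented for the example).
import pandas as pd

events = pd.DataFrame({
    "host": ["a", "a", "b", "a", "b"],
    "ts": pd.to_datetime(
        ["2019-01-01 00:00", "2019-01-01 00:01", "2019-01-01 00:01",
         "2019-01-01 00:05", "2019-01-01 00:06"]),
    "syscall": ["open", "execve", "open", "connect", "execve"],
})

# Per-host, per-minute event counts: a simple rate feature.
rates = (events.set_index("ts")
               .groupby("host")
               .resample("1min")
               .size()
               .rename("events_per_min"))

# Share of execve calls per host: a crude process-spawn signal.
spawn_share = (events.assign(is_exec=events.syscall.eq("execve"))
                     .groupby("host")["is_exec"].mean())
print(rates, spawn_share, sep="\n\n")
```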

One technique Williams pointed to is what he calls “game AI” — deep reinforcement learning in which an ML model generates its own training data. For example, DeepMind’s AlphaGo Zero achieved superhuman performance at Go purely by playing against itself.
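
A miniature version of that idea, with every detail invented for illustration: the sketch below uses tabular Q-learning to master the pencil-and-paper game of Nim purely through self-play, with no external training data. AlphaGo Zero's machinery is vastly more sophisticated, but the data-generation loop is the same in spirit.

```python
# Self-play sketch: one shared Q-table learns Nim (take 1-3 stones per
# turn; whoever takes the last stone wins) by playing against itself.
import random

N_STONES = 21
ALPHA, EPS = 0.5, 0.1
Q = {}  # (stones_left, stones_taken) -> value from the mover's perspective

def actions(stones):
    return range(1, min(3, stones) + 1)

def choose(stones, greedy=False):
    if not greedy and random.random() < EPS:
        return random.choice(list(actions(stones)))
    return max(actions(stones), key=lambda a: Q.get((stones, a), 0.0))

for episode in range(50_000):
    stones = N_STONES
    while stones > 0:
        take = choose(stones)
        nxt = stones - take
        if nxt == 0:
            target = 1.0  # the mover took the last stone and wins
        else:
            # Negamax backup: the opponent moves next, so their best
            # outcome from nxt is the mover's worst.
            target = -max(Q.get((nxt, a), 0.0) for a in actions(nxt))
        old = Q.get((stones, take), 0.0)
        Q[(stones, take)] = old + ALPHA * (target - old)
        stones = nxt

# The learned policy should leave the opponent a multiple of 4 stones.
print([choose(s, greedy=True) for s in range(1, 9)])
```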

Self-play and generative networks will be especially helpful in applications like cybersecurity, where the nature of the problem makes it difficult or impossible to gather the necessary training data. These techniques will enable AI to approach such unpredictable or “data-starved” fields, Williams said.

Another trend in data science that Williams is interested in is developing common standards for procedures, quality, and reproducibility — what he describes as something of a formalization or maturation of the field, which still lacks some of the standardized metrics of software engineering.

The challenge is greater than just borrowing from software engineering, however, due to the differing underpinnings of the disciplines, Williams said. He mentioned Andrej Karpathy’s Software 2.0 as an example of an attempt to form a common paradigm for data science.

“[Karpathy is] pretty brash, which is reflected in that label, but I think what he’s saying is true: that AI can borrow a lot from how engineering works, but there are fundamental differences,” Williams said.

Data science is inherently based on statistics, in contrast to much of conventional computer science, a difference Williams said is reflected in the nature of product pipelines. Debugging, for example, is very different. Though data science uses different language — “underfitting, overfitting” — “it’s kind of the same thing: You’re looking for certain behavior and you’re not getting it,” Williams said. “You’re getting something unexpected.”
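
A minimal sketch of that kind of debugging, with an arbitrary dataset and model chosen only to show the pattern: comparing training and validation scores is the usual first diagnostic for underfitting and overfitting.

```python
# Diagnose model fit by comparing train vs. validation accuracy at
# different capacities (illustrative data and model only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

for depth in (1, 5, None):  # too simple, moderate, unconstrained
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_tr, y_tr)
    print(f"depth={depth}: train={tree.score(X_tr, y_tr):.2f} "
          f"val={tree.score(X_val, y_val):.2f}")
# Low scores on both sets suggest underfitting; a high train score with a
# much lower validation score suggests overfitting.
```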

In the end, Williams said data science differs from pure computer science in that it’s more experimental: testing a hypothesis against data is the kernel of any data science project.

“There’s a word in data science — the ‘science’ part,” Williams said. “A lot of people don’t really realize that all of this is based on the scientific method.”


The Road to Artificial General Intelligence

While 80% of a data scientist’s time might be spent on the data pipeline, modeling is of course challenging as well. There’s a huge combinatorial space of hyperparameters and architectures that the data scientist has to search to find the best model — but, as with data collection hurdles, researchers are working on streamlining the process through automation, a field termed “AutoML” for short.

“The promise of AutoML is to do this for you,” Williams said.
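
In its simplest form, that automation is a programmatic search over the configuration space. The sketch below uses scikit-learn's RandomizedSearchCV as a stand-in; production AutoML systems employ far smarter strategies (Bayesian optimization, neural architecture search), and the dataset, model, and ranges here are invented for illustration.

```python
# Toy AutoML: randomly sample hyperparameter configurations and keep the
# one with the best cross-validated score.
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
search_space = {
    "hidden_layer_sizes": [(16,), (64,), (64, 64), (128, 64, 32)],
    "alpha": loguniform(1e-5, 1e-1),              # L2 regularization strength
    "learning_rate_init": loguniform(1e-4, 1e-1),
}
search = RandomizedSearchCV(
    MLPClassifier(max_iter=300), search_space,
    n_iter=20, cv=3, random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```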

Recent advances in this “meta AI” include model-generating techniques that create optimal and sometimes novel network architectures, as well as ML-directed mapping of computing tasks onto hardware. By further removing human input from the development of AI models, Williams said, these developments move AI models a step toward self-awareness.

“Self-awareness is one of the pillars of being sentient,” Williams said. “Giving AI the ability to look at itself and judge itself and to prove itself — I feel like we’re getting close to something.”

That “something” is human-style intelligence, general AI. Williams said he hypothesizes that the interplay of neuroscience and computing will drive the advances needed to approach general AI, with both fields benefitting from exposure to the other.

Williams initially estimated that general AI could be 20 years away, to “distance myself from people who say five or ten years,” but said that time isn’t the best metric.

“Instead of time, I like to measure things in breakthroughs,” Williams said — and by his estimate, we are four to five major breakthroughs away from general AI.

Williams said these breakthroughs would be on the order of moving from neural networks with a few layers to deep networks. They could take the form of new algorithms (“Neural networks are too simplistic a model”), quantum computing, or advances in materials science.

“It’ll probably come from left field and we won’t expect it when it happens,” Williams said. “My feeling is that it’s not going to be algorithms or mathematics. It’s going to be, I think, somewhere in quantum computing, and something in material science. There’s something there.”

ML hardware is heating up in general, Williams said, with hardware advances having largely driven the resurgence of neural networks in the last six or seven years. However, he thinks a fundamental shift in computing may be needed to achieve general AI.

“If you compare a human brain to a silicon brain, they’re really pushing the silicon brain to be on par,” Williams said, but there’s a long way to go. “We’ve been so focused on silicon, and moving electrons in a specific way [but] I think human intelligence follows from something else. But that’s me getting a bit too futurist.”

“I’m now starting to borrow a bit from science fiction, but what’s interesting is that we have to in some sense,” Williams said. “We understand so little about cognition, about how the brain works, we do have to speak abstractly about these things a little bit and borrow from our imagination.”


What’s Next

In the short term, on the other hand, Williams thinks the AI field might be in store for a bit of a rough patch.

“There’s going to be a shakeup in machine learning within this year, and I think it’s going to happen because of the autonomous vehicle industry,” he said.

Williams has held this prediction for a while now. He pointed to the huge sums of money and the many American startups in the autonomous vehicle space, each promising a similar product. Despite real progress, Williams said he thinks autonomous vehicle algorithms have not reached the point where they can generalize well enough to handle the many anomalous conditions drivers encounter.

“There are so many exceptional conditions that your brain is handling all the time,” Williams said. “You can record from existing data as much as possible to train, but there’s always going to be something that’s not in the training data. It’s going to encounter some weird situation.”

There’s still a major gap between autonomous vehicle algorithms and the human brain, Williams said: “There are some fundamental, high-level abstraction or cognition things that are hard-wired into us that we haven’t figured out how to put in silicon yet.”

Meanwhile, media and cultural narratives of AI are once again jumping ahead of reality.

“There’s also a celebrity aspect that’s happening,” Williams said. “There’s a cultural angle even beyond just science fiction.”

The tension between hyperbolic promises and the real challenges remaining will have to come to a breaking point, according to Williams.

“There have been amazing advances, but I think there’s been a lot of overpromising in this industry,” Williams said. “And there’s going to have to be some consolidation — not everyone can survive. I think it’s going to happen pretty swiftly.”

As a consequence, Williams said he thinks there will be some backlash against AI in general, with a domino effect throughout the field. Despite this prediction, though, he is no less bullish on AI in the long term — “There’s too much momentum.” Nor does Williams think the field could slip back into an “AI winter.”

“I don’t think that can happen anymore,” Williams said. “We have so much invested in this kind of progress. The genie is out of the bottle and the train is out of the station.”

Williams thinks AI will have more successes in other domains, such as healthcare, before autonomous vehicles become widespread. His cautions: progress may be slower than people think, and as advances occur, applications must be hardened against unintended consequences and adversarial attacks.

“I’m a little bearish on autonomous [vehicles], but I’m also adventurous and I like to be surprised,” Williams said. “If nothing else, what’s happening in AI is going to be one surprise after the next. That’s part of the reason why I’m excited by data-driven technologies and AI in general. There’s a lot of creativity that’s happening.”