Several years ago, Jonathan Dambrot, a partner at KPMG, was helping customers deploy and develop AI systems when he started to notice certain gaps in compliance and security. According to him, no one could explain whether their AI was secure — or even who was responsible for ensuring that.

“Fundamentally, data scientists don’t understand the cybersecurity risks of AI and cyber professionals don’t understand data science the way they understand other topics in technology,” Dambrot told TechCrunch in an email interview. “More awareness of these risks and legislation will be required to ensure these risks are addressed appropriately and that organizations are making decisions on safe and secure AI systems.”

Dambrot’s perception led him to pitch KPMG Studio, KPMG’s internal accelerator, on funding and incubating a software startup to solve the challenges around AI security and compliance. Along with two other co-founders, Felix Knoll (a “growth leader” at KPMG Studio) and Paul Spicer (a “product owner” at KPMG), and a team of about 25 developers and data scientists, Dambrot spun out the business — Cranium.

To date, Cranium, which launches out of stealth today, has raised $7 million in venture capital from KPMG and SYN Ventures.

“Cranium was built to discover and provide visibility to AI systems at the client level, provide security reporting and monitoring, and create compliance and supply chain visibility reporting,” Dambrot continued. “The core product takes a more holistic view of AI security and supply chain risks. It looks to address gaps in other solutions by providing better visibility into AI systems, providing security into core adversarial risks and providing supply chain visibility.”

To that end, Cranium attempts to map AI pipelines and validate their security, monitoring for outside threats. What threats, you ask? It varies, depending on the customer, Dambrot says. But some of the more common ones involve poisoning (contaminating the data that an AI’s trained on) and text-based attacks (tricking AI with malicious instructions).
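To make the poisoning threat concrete, here is a toy sketch, entirely invented for illustration and not drawn from Cranium's product: a hypothetical nearest-centroid "spam" classifier whose training set an attacker has partially contaminated with mislabeled examples.

```python
# Toy data-poisoning illustration (hypothetical example, not Cranium's method).
# Each sample is a single made-up feature value paired with a label.

def train(samples):
    """samples: list of (feature, label) pairs; returns per-class mean (centroid)."""
    classes = {}
    for x, y in samples:
        classes.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in classes.items()}

def predict(model, x):
    """Assign x to the class whose centroid is nearest."""
    return min(model, key=lambda y: abs(model[y] - x))

# Clean training data: low feature values are spam, high values are ham.
clean = [(1.0, "spam"), (1.2, "spam"), (9.0, "ham"), (9.5, "ham")]
print(predict(train(clean), 7.0))  # → "ham" (correct)

# Poisoning: the attacker injects ham-like samples mislabeled as "spam",
# dragging the spam centroid toward legitimate ham traffic.
poisoned = clean + [(9.2, "spam"), (9.1, "spam"), (9.3, "spam")]
print(predict(train(poisoned), 7.0))  # → "spam" (legitimate message now flagged)
```

The mechanics are the same at scale: if an attacker can slip mislabeled records into a training pipeline, the resulting model quietly misbehaves on inputs it would otherwise have handled correctly, which is why pipeline visibility matters.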

Cranium claims that, working within an existing machine learning model training and testing environment, it can address these threats head-on. Customers can capture both in-development and deployed AI pipelines, including associated assets involved throughout the AI life cycle. And they can establish an AI security framework, providing their security and data science teams with a foundation for building a security program.

“Our intent is to start having a rich repository of telemetry and use our AI models to be able to identify risks proactively across our client base,” Dambrot said. “Many of our risks are identified in other frameworks. We want to be a source of this data as we start to see a larger embedded base.”

That’s promising a lot — particularly at a time when new AI threats are emerging every day. And it’s not exactly a brand-new concept. At least one other startup, HiddenLayer, promises to do this, defending models from attacks ostensibly without the need to access any raw data or a vendor’s algorithm. Others, like Robust Intelligence and CalypsoAI, offer a range of products designed to make AI systems more robust.

Cranium is starting from behind, without customers or revenue to speak of.

The elephant in the room is that it’s difficult to pin down real-world examples of attacks against AI systems. Research into the topic has exploded, with more than 1,500 papers on AI security published in 2019, up from 56 in 2016, according to a study from Adversa. But there’s little public reporting on attempts by hackers to, for example, attack commercial facial recognition systems — assuming such attempts are happening in the first place.

For what it’s worth, SYN managing partner Jay Leek, an investor in Cranium, thinks there’s a future in AI robustness. Of course he would, given his stake in the venture. Still, in his own words:

“We’ve been tracking the AI security market for years and have never felt the timing was right,” he told TechCrunch via email. “However, with recent activity around how AI can change the world, Cranium is launching with ideal market conditions and timing. The need to ensure proper governance around AI for security, integrity, biases and misuse has never been more important across all industries. The Cranium platform instills security and trust across the entire AI lifecycle, ensuring enterprises achieve the benefits they hope to get from AI while also managing against unforeseen risks.”

Cranium currently has around 30 full-time employees. Assuming business picks up, it expects to end the year with around 40 to 50.

Cranium launches out of KPMG’s venture studio to tackle AI security by Kyle Wiggers originally published on TechCrunch