
What it Really Means to Build AI Standards

An Interview with Rob van der Veer, Chief AI Officer at SIG

AI is no longer the Wild West.

Unlike when AI was first introduced to industry, we’ve been at this long enough to spot the patterns of what generally works and what doesn’t in AI development. Five years ago, responsible AI was seen as a nice-to-have; today it is consistently ranked among the top three determinants of AI success - and for good reason.

A significant factor in the rise of responsible AI’s importance comes down to the growing call for international standards in AI development. No matter the market, everywhere we turn there is a clear consensus on the desire for clarity that only standardization can bring.

But what really goes into creating these AI standards?

We may have consensus on the necessity of AI standards…but coming to consensus on these standards is an entirely different beast.


Want the full conversation? Watch Rob’s interview here.


The following is one part summary of Rob’s interview, one part my reflections on the conversation. To listen to the full discussion, check out the recording here.

Who is Rob van der Veer?

In his own words, Rob is, and has always been, a computer nerd.

Thanks to his fascination with programming and robots, Rob started his career over three decades ago in information science, which eventually led him to his entrepreneurial journey in AI. An experienced technologist, advisor, manager, author, keynote speaker, and entrepreneur with an extensive software background in AI, Rob is always seeking to break new tech ground.

He is currently the Chief AI Officer at the Software Improvement Group (SIG) and co-author of the children’s book Luna and the Magic AI Paintbrush.

Rob and co-author Bessie Schenk

What values drive Rob’s work in AI?

Rob started out in AI when standards weren’t even visible on the horizon, only to quickly come to the conclusion that some form of standardization was desperately needed to drive the space forward.

AI standards imply consensus: a body of experts must come together and agree on set parameters for AI development before anything can be deemed a standard. However, in our increasingly interconnected world, finding consensus on a global scale is no walk in the park. And it was this challenge of consensus that Rob found threatening to block any significant progress on the standards front.

One of Rob’s driving values is impact: putting out work that actually works and that leaves some kind of beneficial dent on the world. While standardizing the AI process is not necessarily as shiny as the technology itself, it is a dent-worthy pursuit.

Spoiler alert: today there is a very nice dent in the AI standards world in the form of ISO/IEC 5338, a standard Rob played a significant role in forming.

But back when Rob first started out on his endeavor to impact the AI standards space, he was faced with a complete lack of consensus that seriously threatened to stifle any progress. So how did he ever manage to create such significant impact?

He cowboyed it.

prompt: robot cowboy on a horse

What does it mean to “cowboy” AI standards?

For Rob, the process of coming to a consensus may be even more important than the process of sharing it - as he so discovered during the creation of ISO/IEC 5338.

Consensus is a long and painful process full of red tape and bruised egos that involves (by some miracle) finding a way to channel a plethora of voices and opinions into a single useful standard. The potential for death by committee is high, as you do need to find compromises…but not too many, otherwise the standard simply won’t work. And in the case of ISO/IEC 5338, they were at risk of a slow and painfully compromised death.

Enter the cowboys.

In order to move the conversation forward, Rob brought together a merry band of experts on AI security, specifically people who were motivated to shape the standard but did not have the time or ability to join the formal standardization process. In the basement of a hotel, Rob gathered this smaller group separately from the main group to collaborate on the standard and eventually cycle back to the main group for approval.

The meetings were confidential, held on neutral ground, and carried a “roll your sleeves up, we’re getting dirty” attitude. Rob would throw the standard up on the screen, and through an improvised process it was all hands on deck to hash through it until they arrived at a solution. That solution was then presented back to the main standards committee for consensus.

It was a far cry from the typical standardization process - and yet it enabled Rob and the official standards committee to follow protocol while completing the standard in record time.

The key that made it all work? The smaller group all shared Rob’s driving value: impact. At the end of the day, this shared value cut through the challenge of compromise and aligned the group in a single, determined direction.

What cowboys can teach us about AI governance

AI governance is all about setting internal standards for AI development. And while it may not be a glamorous job, it is one that certainly leaves a dent when done well.

However, AI governance often encounters similar challenges faced by standardization processes. There is such a thing as too much compromise, and death by committee is a very real threat. So how do we avoid these common pitfalls to reach our desired impact in AI governance?

We cowboy it.

AI governance does not need to be perfect from the start, but it does need to work. So if red tape and corporate politics are holding back your progress in setting clear governance protocols…maybe it’s time you go find your own cowboys to wrangle up a solution.

This does *not* mean going off the ranch and ignoring all protocol. Instead, if you find yourself at an impasse in establishing AI governance standards due to death by committee, consider forming a smaller working group of motivated individuals to work out the kinks and then present that solution to the committee for approval. It is far easier to find consensus on a draft framework than to hope one appears out of thin air.

Rob van der Veer’s definition of Good Tech

Good tech is technology that invites, enables and allows good use. It’s not just the tech, but also everything around it and how it fits into society.

Watch full interview

Say hello to the human

Connect with Rob on LinkedIn or visit his website for a growing list of resources in AI governance and security.
