Do you trust me?
However you answered that question, I want you to pause for a moment and consider why you answered the way you did.
If you said yes, was there a caveat to your trust? Do you trust me in certain contexts, such as being a resource in AI, or is it more a blind trust in who I am as a person?
If you said no, was there a specific reason? Is it because of something specific I said or did that made you not trust me, or is it more due to you not really knowing me beyond my professional life?
When you start to break it down, trust is full of fascinating intricacies that, when thoroughly explored, can lead to nuanced solutions within the world of AI development.
And it is these exact intricacies that I had the pleasure of sitting down with Philipp Adamidis and Antoine Gautier from QuantPi to investigate.
This newsletter edition is a poor attempt at capturing all the golden nuggets of our conversation, so I cannot stress enough that you are going to want to listen to the full interview on this one - trust me 😉
TL;DR - Watch the interview with Philipp & Antoine here
The following is one-part summary of Philipp & Antoine's interview, one-part my reflections on the conversation. To listen to the full discussion, check out the recording here.
Who are Philipp Adamidis and Dr. Antoine Gautier?
Meet Philipp and Antoine, two-thirds of the founding team at QuantPi, a startup composed of "the technologists of trust" whose vision is to enable society to live a safe and self-determined existence with intelligent machines.
Philipp, CEO, caught the entrepreneurship bug at the early age of 16, when he started his first company, and has been fascinated ever since by the idea of driving large-scale positive impact.
Antoine, Chief Scientist, completed his PhD in mathematics within a machine learning group and was destined to continue down the academic career path until Philipp managed to convince him of the potential to have a different kind of impact on the world.
Together, they have become one of my favorite dynamic duos I've been lucky to cross paths with in the responsible AI startup ecosystem.
What is Philipp and Antoine's driving value?
Although we are not always aware of it, our values are what bring us together. Philipp and Antoine, however, are somewhat of a unique case, as they are highly aware of and actively engage with the value that has brought them, and the entire fabric of QuantPi, together.
Going all the way back to day zero, the earliest discussions the founders shared covered everything from how their personalities would work together, to what their vision for the future was, to, most importantly, what values they wanted to guide their work going forward. It was in these initial conversations that the value of trust was established at the very core of QuantPi.
Flash forward seven years and trust has been baked into every aspect of the company. As one of Antoine's favorite quotes goes, trust is something you lose in buckets and win in drops. From how the growing team interacts with each other, to how they nurture customer relationships, all the way to the DNA of their product, the ripple of trust continues to add its drops to QuantPi's bucket.
What does trust look like in the context of AI?
Trust, in theory, is a beautiful concept everyone can agree is an essential element on which strong relationships are built. However, when it comes to AI, trust in practice can be harder to translate into precise terms, let alone test for.
To break the concept down practically for the world of AI, Philipp uses the analogy of trusting your best friend. We never truly know what's happening inside our friend's head, yet many of us will blindly trust our best friend's decisions without specific insight into how those decisions were made. Why is this?
Over time, we watch our friend make decisions across a variety of scenarios. Whether the situation is stressful, sad, angry, or happy, each experience adds up over time into something like a statistical picture of how this person will behave in the scenarios relevant to our lives, and that picture generates trust. In short, we trust that our friend will behave in a certain way because they have behaved similarly across many moments and situations.
Now let's bring this back to AI. When it comes to building trust, Philipp and Antoine would define it as confidence that an AI system will perform in a certain way within a limited selection of scenarios. In other words, there is a certain amount of predictability about the behavior of that AI.
However, there is still a key difference between your best friend and your AI. While you may have a lifetime to build up trust with your best friend, you do not have that luxury in the case of AI. This is where QuantPi comes in. By narrowing the scope of application and running a multitude and variety of scenarios against the model to see how it performs, QuantPi shrinks the timeline to trust in AI: it tests the predictability of an AI system's behavior in the scenarios that are actually relevant.
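QuantPi's actual platform is of course proprietary, but the core idea Philipp describes, checking a model's behavior against a curated battery of scenarios, can be sketched in a few lines. Everything below (the Scenario container, the predict interface, the loan-approval example) is hypothetical and purely illustrative:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Scenario:
    """A named batch of inputs paired with the behavior we expect."""
    name: str
    inputs: List[str]
    expected: List[str]

def run_scenarios(predict: Callable[[str], str],
                  scenarios: List[Scenario]) -> Dict[str, float]:
    """Score how predictably the model behaves in each scenario (simple accuracy)."""
    report = {}
    for s in scenarios:
        hits = sum(predict(x) == y for x, y in zip(s.inputs, s.expected))
        report[s.name] = hits / len(s.inputs)
    return report

# Hypothetical example: probe a loan-approval model only in the narrow
# scenarios it will actually face, not every input imaginable.
suite = [
    Scenario("typical_applicants", ["app_001", "app_002"], ["approve", "deny"]),
    Scenario("thin_credit_files", ["app_003", "app_004"], ["deny", "deny"]),
]
# run_scenarios(my_model.predict, suite)  # -> {"typical_applicants": ..., ...}
```

The point is not the scoring metric but the framing: trust accrues per scenario, in drops, just as it does with a friend.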
How can you bring trust to life in your own AI work?
Let's come back to our best friend analogy. The blind trust you have in your friend may not be all-encompassing; it may instead apply to specific scenarios. For example, you can trust that your friend will bake an amazing cake but have no idea whether they will remember to water your plants. That is to say, your trust in your best friend can be narrowed down to specific situations, and those are the situations you will rely on your friend to come through on.
This is exactly where Philipp and Antoine suggest starting the journey of testing for trust in AI. Instead of attempting to test an AI system for performance in every single possible use case you can imagine, start by selecting the single use case you will actually be applying that AI system to. In the end, it is more important to map out the scope of use cases relevant to your needs than to have an AGI-inspired system that claims to do everything.
Once you have visibility into the scope, predictability, and confidence intervals of the AI system, you can then implement any necessary supporting safeguards, mitigations, or training to accelerate your rate of innovation. Remember, AI is always a human story, and unless your humans trust your AI to be aligned with their needs, you will never reach the full potential of this technology.
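To make "confidence intervals" a little more concrete: one generic statistical approach (not QuantPi's specific method) for attaching an interval to a pass rate observed over repeated scenario runs is the Wilson score interval. A minimal sketch:

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple:
    """95% Wilson score interval for a pass rate observed over `trials` runs."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - margin, center + margin

# e.g. 93 correct behaviors out of 100 scenario runs:
lo, hi = wilson_interval(93, 100)
print(f"pass rate 0.93, 95% CI ({lo:.2f}, {hi:.2f})")  # roughly (0.86, 0.97)
```

The width of that interval is itself useful information: it tells you how much of your "predictability" is evidence and how much is small-sample optimism.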
One additional factor to consider: when selecting your first use case, start with one where you already know roughly what some of the results should look like but where there is still something left to discover. Why? As Antoine explains, this reinforces the building of trust: seeing the things you already know reflected in the results helps validate the system for you and increases your likelihood of trusting the new information you learn in the process.
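A minimal sketch of that suggestion, reusing the hypothetical run_scenarios helper from the earlier snippet: gate any report on new, unknown scenarios behind a sanity check on the scenarios whose outcomes you already know. The threshold and names here are assumptions for illustration:

```python
def trusted_report(predict, known_scenarios, new_scenarios,
                   min_known_accuracy: float = 0.95):
    """Report on unfamiliar scenarios only after the harness reproduces
    the results we already know to be true (the known-answer check)."""
    known = run_scenarios(predict, known_scenarios)
    if min(known.values()) < min_known_accuracy:
        raise RuntimeError(f"Known-answer check failed: {known}")
    return run_scenarios(predict, new_scenarios)
```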
The foundation of any trust-building effort is understanding how your systems actually behave, not how you hope they behave.
Philipp & Antoine's Definition of Good Tech
"Good tech is a technology which sustainably has a positive impact."
Subscribed.
P.S. If you're using the Values Canvas methodology, Philipp & Antoine's customizable and scalable solution to AI trust fits perfectly with the Instrument element *hint hint*.
Say hello to the humans
Connect with Philipp and Antoine on LinkedIn, and don't forget to check out what their team is working on over at QuantPi!