Episode Transcript
Ian Swanson: “If we take a look at what’s happening within the White House, with the Blueprint for an AI Bill of Rights, there are four key things being discussed right now. Number one, the identification of who trained the algorithm, and who the intended audiences are. Number two, the disclosure of the data source. Three, an explanation of how it arrives at its responses. Four, transparent and strong ethical boundaries. We have to have the systems built to govern these because the penalties could be severe.”
[INTRODUCTION]
[00:00:33] ANNOUNCER: Hi. You’re listening to The Secure Developer. It’s part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.
[EPISODE]
[00:00:55] Guy Podjarny: Hello, everyone, welcome back to The Secure Developer. Thanks for tuning back in. Today, we’re going to embark on a bit of an AI security journey, which I think will be fun and interesting. To help us kick that off here, we have Ian Swanson, who is the Co-Founder and CEO of Protect AI, a company that deals with AI security, and specifically runs ML SecOps, which is a great information hub for defining what is AI security, or at least what is ML SecOps, and we’re going to dive a lot into that. Ian, thanks for coming on to the show.
[00:01:26] Ian Swanson: Thanks, Guy. It’s awesome to be here. It’s a great podcast, really excited to talk about the security of ML systems and AI applications.
[00:01:33] Guy Podjarny: I guess, just kick us off in context a little bit. Can you tell us a little bit about who you are? Maybe a bit of background? I guess, how did you get into AI security in the first place?
[00:01:43] Ian Swanson: I’ve been in the machine learning space for 15 years. It’s an area that I’ve been incredibly passionate about. I’ve had multiple companies in machine learning. I had an ML-centric FinTech company called Symmetrix that I sold to American Express. Then, I started a company called DataScience.com, which back in 2014 was laying the groundwork for what is today known as MLOps. That company was acquired by Oracle.
Today, I’m the CEO of Protect AI. Protect AI is all about the security of ML systems and AI applications, and we think the time is now. In terms of why I’m passionate about this space, well, prior to starting Protect AI, I was the worldwide leader of go-to-market for all of AWS’s AI and machine learning. My team worked with tens of thousands of customers, and I’ve seen the rapid rise of adoption of AI and machine learning across all key functions within a company. As we think about the evolution of AI adoption, yes, how do we get these models into production? How does it drive digital transformation? How do we make sure that we de-risk them in terms of ethics, trust, and bias? But what about protecting it commensurate to its value?
If the CEO of JPMorgan Chase, Jamie Dimon, is talking in his shareholder letters about the rise of AI adoption, with hundreds of applications within that bank, then we had better make sure that we protect it commensurate to its value. I think that moment is here today. That’s why we started Protect AI, and it’s another reason I’m just so passionate about this space: I believe in the potential of AI, but I also understand the pitfalls within ML systems and AI applications.
[00:03:21] Guy Podjarny: Yes, absolutely. I totally relate to the urgency around this, especially with the almost unnatural rate of adoption, due both to usefulness on one hand and maybe competitiveness on the other, with some hype thrown in. A lot of these systems are going in so much faster than past notable technologies, even cloud and containers and other capabilities that, relative to prior trends, were adopted quickly. Let’s dive in. We’re passionate about it. Before we dig into ML SecOps: you chose to call it ML SecOps, not AI SecOps. Do you want to share a quick view on how you separate AI and ML?
[00:04:00] Ian Swanson: It’s kind of funny. A lot of people use the terms interchangeably, where machine learning is a subset of AI, and you can think of deep learning as a subset as well, within the AI category. The reason why we focused on machine learning, at least from a messaging perspective, is a little bit about the core persona that we’re working with, the one at the centre of operating ML systems. What I mean by that is the role of an ML engineer. So, think about parallels within software development. On this side, the person who owns the CI/CD, the software development stack, if you will, for ML systems is an ML engineer. We think it’s the ML engineers; they own the systems, and they also own the responsibility of, yes, getting models into production, but they need to think about how do we protect those models? How do we secure those models? How do we make sure we’re working with the data scientists, the practitioners, the line of business leaders, so that we are de-risking those models?
We really wanted to pay homage to that persona, and also to this space and this skill set within machine learning. Now, you use machine learning to build AI applications, right? So, that’s definitely downstream. But we think it’s really critical for us to focus on the machine learning development lifecycle.
[00:05:14] Guy Podjarny: That’s useful, I think, both from a practicality perspective and for general definitions, because I do often think about AI as the value proposition. Eventually, produce me some artificial intelligence, please, with machine learning being maybe the primary means of achieving that. With that, let’s dig into this world of ML SecOps. That’s a new term. I’ve been part of the DevSecOps movement, and I have my love/hate relationship with that term. Help us a little bit with some definition. What is ML SecOps? And then we will start breaking it down.
[00:05:44] Ian Swanson: ML SecOps stands for machine learning security operations, and it’s the integration of security practices and considerations into the ML development and deployment process. Now, why is it different from DevSecOps? It goes back to what I was talking about previously: the ML development lifecycle is different from software development. Software is built on requirements provided during the first phase of the SDLC. But in machine learning, the model is really built based on a specific data set. Software systems likely won’t fail once they’re deployed, as long as requirements are not changed.
That’s not the case with machine learning. The underlying characteristics of the data might change, and your models may not be giving the right result, and it’s dynamic. Machine learning is, again, not just about the code on that side. It’s this intersection of data, the machine learning model-building artefact, and the pipeline, which, yes, turns into code and a model that gets deployed. But it’s dynamic. It’s constantly learning. The data is changing. So why, again, a new category? The machine learning development lifecycle is just different from the standard software development lifecycle. That really gets at why there is a need for ML SecOps.
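(Ian doesn’t prescribe a specific technique here, but the “data is changing” point is what ML teams commonly call data drift. As a minimal sketch, assuming a single numeric feature and the SciPy library, a two-sample Kolmogorov-Smirnov test can compare the distribution the model was trained on against what it now sees in production; the function name and values below are illustrative, not from the episode.)

```python
# Minimal drift-detection sketch (illustrative, not from the episode).
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, prod_values, alpha=0.01):
    """Flag drift when the production distribution of a feature differs
    significantly from its training distribution (two-sample K-S test)."""
    _statistic, p_value = ks_2samp(train_values, prod_values)
    return p_value < alpha

# Simulate a shift in one numeric feature.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)  # data the model was trained on
prod = rng.normal(loc=0.5, scale=1.0, size=5000)   # what the deployed model now sees
print(feature_drifted(train, prod))  # True: the underlying data has changed
```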
[00:07:01] Guy Podjarny: Before we go there a little bit. It sounds like you’re emphasizing not just different tools, as in, look, I need to be able to inspect this or inspect that. You’re describing a different nature for those types of systems, not just in predictability; it’s almost a different form of agility, in that they might change with every request that flows through the system, or as the data comes in. You find that to be more important, or more substantial, than the specific tools or phases that might be introduced into the development lifecycle?
[00:07:31] Ian Swanson: There are some different tools that are used in