People & AI - Decoding AI Ethics and Governance with Var Shankar

In this episode of People & AI, host Karthik Ramakrishnan engages with Var Shankar, Executive Director of the Responsible AI Institute. Together, they dissect the complexities and necessities of advocating for ethical artificial intelligence – a critical conversation in today's advancing tech world. Var brings knowledge from diverse arenas, including international policymaking and legal academia, to illuminate the operational challenges and international standards shaping responsible AI. Listen in as they delve into the intersection of law and AI governance, AI implementation in enterprises, and decode policy instruments like the G7 code of conduct and ISO 42001. This episode is a must-listen for anyone passionate about the implications and evolution of AI ethics and governance.
April 12, 2024
5 min read

Listen on

Apple podcasts: https://lnkd.in/ga4t4WuZ

Spotify: https://lnkd.in/gBzmKsDE

YouTube: https://youtu.be/5LLdCVpFTlA

Transcript

Karthik Ramakrishnan [00:00:07]

Welcome to this episode of People & AI, where we dive into the intricate world of artificial intelligence, ethics, and governance with our distinguished guest, Var Shankar. As the Executive Director of the Responsible AI Institute, Var has been at the forefront of advocating for the ethical use of AI, leveraging his vast experience from Harvard Law School to international policymaking forums. Today we'll explore the depth of Var's work in AI ethics, discussing the challenges, solutions, and the future of responsible AI development, including the operationalization of responsible AI through assessment programs and the implications of international standards on national practices. As you may have noticed, this is a common theme throughout all of our podcasts: what we really need to see is how we can take responsible AI from just a word, this sort of fuzzy word, to something tangible that organizations can get behind and implement tactically. So, join us as we uncover the insights of a leading voice in shaping the ethical landscape of AI technology. And to note, Var and I share a background: we're both Canadians, though Var now lives in New York. Before we even get into your background, why don't we warm up with a little bit of a discussion on what happened in Canada over the weekend. We had Prime Minister Justin Trudeau announce a $2.4 billion package specifically targeted towards AI, along with a bunch of other policy announcements. It was a pretty busy weekend for policymakers, it looks like. So, over to you. Would you like to share your synopsis of what happened?

Var Shankar [00:01:49]

Yeah, Karthik, and thanks so much for having me on the podcast. I think it was a huge weekend for Canadian AI policy. We saw the $2 billion investment in Canadian AI computing capabilities and infrastructure. We also saw the creation of a new AI safety institute being announced, which is modelled on the US and UK AI safety institutes, as well as continued support for the AI and Data Act that's under discussion in the House of Commons. I think Canada has always punched above its weight on AI, both in terms of talent and in terms of leading the field in responsible AI adoption, for example by rolling out the Directive on Automated Decision-Making for Canadian government AI systems. And what we're seeing here is the government making a big announcement and, directionally, taking a big step towards ensuring public sector control of AI capabilities. This is a game-changing technology, and we need public involvement not just in shaping and regulating it, but also in developing the artifacts: developing some of the cutting-edge models, making them accessible to researchers, coming up with appropriate testing and benchmarking, and supporting all of that with long-term funding. This announcement takes a big step in that direction.

Karthik Ramakrishnan [00:03:16]

Yeah, absolutely. I completely agree with you. To your point, Canada has always punched above its weight. A lot of the breakthroughs in AI came from the University of Toronto and the University of Montreal with Dr. Hinton and Dr. Bengio, and even at this announcement you could see Dr. Bengio there in the background. We've always been academically driven, and I think CIFAR has been critical in enabling this over the last 30 to 40 years, not just in AI but in other fields too, and certainly in AI: handpicking researchers and bringing them to Canada, funding them, and identifying artificial intelligence, or back then machine learning and neural networks, as an area of study that needed investment. That's why Dr. Hinton is here, that's why Dr. Bengio is here, and that's been fantastic. It remains to be seen how this funding will get deployed, to whom and how, and how it actually gets made useful. But yeah, I agree with you, I think it's a step in the right direction, and hopefully we can make all of that worth it. But maybe turning it over to you, Var: could you start by sharing what initially drew you to the field of AI ethics and responsible AI, especially coming from a background at Harvard Law School and your work as a legal associate? I can see how the connection could be made, but I would love to hear in your words how you got here.

Var Shankar [00:04:42]

I've always been interested in law and technology. So after my legal education and practicing law at a major New York law firm, I turned to public service, and this comes back to the Canadian digital government ecosystem. As part of that effort, I joined the Government of British Columbia's Exchange Lab, which is a creative space within the government that prioritizes simple, intuitive and human-centered citizen services and puts users at the centre; that could mean citizens, but also intuitive services for government users and government contractors. The idea was that a digital government service should be as accessible and as effective as an in-person service, and it should be available on any device, in any language or at least in one of several major languages. So I got to work on those efforts. A really foundational component that BC already had in place was a digital ID, so it was possible to authenticate a person's identity through their cell phone if they had their digital card with them and completed a short live session to verify their ID. That digital push really gained momentum during the COVID-19 response. So I got to work on a lot of complex topics at the intersection of law, policy, technology, privacy, security and risk: bringing everybody along, rapidly procuring tools, and keeping the end goal in mind, whether that's virtual education or making sure everybody has a seamless single sign-on experience. Those are the experiences that led me deeper into the space of responsible tech. Then, because of that experience in government and that legal background, I was asked to co-develop an AI ethics course for Kaggle. That was really fascinating for me, because coming at the machine learning fairness literature with a legal lens, I saw a lot of the same terms, but I could already see that the way machine learning fairness is conceived in the machine learning community requires a lot of data; companies often don't have that kind of data, and it would run up against a lot of legal barriers in terms of the ways organizations are allowed to use data. So I felt it was a really fascinating area, and that's how I got into the AI ethics and responsible AI space. Then, two and a half years ago, I got in touch with Ashley Casovan, who was the previous Executive Director of the Institute and who developed the Directive on Automated Decision-Making in the Government of Canada, joined the Institute full time, and have really enjoyed working at the intersection of law, ethics and technology.

Karthik Ramakrishnan [00:07:39]

Very cool, fascinating, especially the Kaggle side. I'd love to understand: I don't know if you have a background in engineering, but when you think about data, when you think about how these algorithms need to work and how they process things, there's a lot of math involved. So, as a lawyer, how do you bring that element to what you do? Maybe you could spend a little bit of time talking to us about how you perceive the problem, coming from where you do, into these very technical, engineering-oriented subjects.

Var Shankar [00:08:10]

Yeah. So for lawyers that work with companies now, a law degree is table stakes and an understanding of the legal environment is table stakes. You do have to understand how organizations collect and use data. Increasingly, of course, there's counseling from a product lens or counseling from a privacy lens, but you also really need to understand the actual technology risks, in terms of cybersecurity and in terms of what kinds of design patterns you can use that help you achieve legal compliance down the line. So I'm not the first person with a legal background who has had to delve deeply into these technical matters. You need to do your homework and understand all these different pieces, but at the end of the day you also need a good team, a good technical team, to engineer privacy; privacy engineering is a thriving field. And the EU AI Act, the way it's currently drafted and the way it's going to go into force, is going to push further training of lawyers and further development of specialized functions within organizations that require these cross-disciplinary skill sets.

Karthik Ramakrishnan [00:09:24]

No, I mean, you're absolutely right. Any technology needs different elements, people with different perspectives coming in. And I think you nailed it: it's a function of working with the right individuals and bringing what you do to the table. So on that note, can you explain the mission of the Responsible AI Institute and the role you play as its Executive Director?

Var Shankar [00:09:49]

Yes. So the Responsible AI Institute provides practitioners with guidance, templates and tools for responsible AI adoption. We're an independent nonprofit, so we try to bring into all of our methodologies and materials our view of current best practices. For example, we have assessments at the organizational level and at the system level, and a certification program for AI applications. We also have forums for people at member organizations to learn from other practitioners, from policymakers and from researchers. We were just talking about the machine learning fairness community, the responsible technology and trust and safety community, and the legal community; one thing that we do is convene these groups and help develop a common understanding of what responsible AI is, what good looks like, and what the cultural and tooling elements of it are. So that's our organization in a nutshell.

Karthik Ramakrishnan [00:10:54]

Excellent. Having been partners with the Responsible AI Institute for the last three years, we certainly see the amount of education involved with enterprises, and so we need an independent body to give them that support as they go through it, so we can do what we do really well. Now, given your experience, how do you see the intersection of law and AI evolving, particularly in terms of regulation and governance?

Var Shankar [00:11:21]

Yeah, so we broadly see two models of AI governance emerging, with the AI Act in the EU coming into force soon and the looser US model that's led by federal agencies, by states and by municipalities. I'm curious to hear your view on this as well. What I see is an evolving technology, evolving technology adoption, and organizations developing common ways of dealing with it. So the types of policy instruments that I'm really bullish on are, for example, the G7 code of conduct, which involves a lot of the big players, EU countries as well as the US and Japan, and is limited to really advanced AI systems, but shows you what best practice might look like when incorporating really advanced AI systems. Another piece that I'm really bullish on is ISO 42001, the ISO standard for AI management systems. It's even more global because it goes beyond the G7; it involves all of the countries that ISO interfaces with, including China, and that brings in its own set of challenges for companies that operate in a variety of different political and cultural contexts. I recently co-authored something with Phil Dawson, who is on your team and a leading light in the AI policy field, about how to think about 42001 in that global context. These are the sorts of mechanisms that I think will be the glue that brings together how industry actually builds processes and reporting mechanisms that can report out to the EU AI Act, that can report out to the various requirements in the US, Canada and UK, and that can bring everybody around a common set of terms, concepts and ways of responding quickly when you see new developments.

Karthik Ramakrishnan [00:13:09]

It's hard to put your arms around all of these moving parts. We had Martha on a previous episode where she was discussing the ISO working group initiative. But one thing I find with the people and enterprises we talk to is that it's all a little bit too much. You've got 42001, you've got local regulations, you've got national regulations, and then you have the internal policies you're now supposed to write: your responsible AI framework, or your ethical AI framework, or just your AI policy, period. And especially when you are an organization that goes beyond 2,000 people, it's difficult, right? So that's where the challenge comes in. If you had one directive, great, but you have, I think, seven or eight different moving parts here. So how do you advise clients when they think about this? Because ultimately we can write anything we want on paper; it's the people who have to deal with it who have to ensure it's actually practiced. So how do you bridge that gap, and how do you think about that problem?

Var Shankar [00:14:22]

Great question. So I think it's important to always be grounded in specific use cases. Think about the two or three large use cases that you might have for generative AI or cutting-edge AI, and then think about the really key pieces that are foundational and long term that you need to get into place: some form of governance, some form of independent review internally, and culture, upskilling and training throughout the organization. A lot of these are not new things; they're things that organizations with digital transformations underway have been doing for decades. So really double down on those fundamental pieces: having high-quality data, having less hierarchy in your organization, and ensuring that you have the right checks at each lifecycle stage. In terms of how the Institute helps our members navigate the complexity here, we try to provide friendly tools, templates and webinars where they can learn from each other. For example, we'll be putting out a policy template for an enterprise AI policy, which is footnoted to questions like: which regulatory and assurance regimes are you most concerned about? Which use cases are you most concerned about? Do you mostly build or mostly buy? Those sorts of friendly tools that they can make their own, conversations with a trusted set of industry peers, and, of course, our assessment and certification program. In all of those efforts we really strive to reduce the complexity and focus on what's essential. That doesn't reduce the need to comply; it just reduces the friction of complying with any regulatory regime.
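(For readers who want a concrete picture of the kind of enterprise AI policy template Var describes, footnoted to regulatory regimes, priority use cases and a build-versus-buy posture, here is a minimal sketch in Python. The field names and example values are illustrative assumptions, not the Institute's actual template.)

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: field names and example values are assumptions,
# not the Responsible AI Institute's actual policy template.

@dataclass
class EnterpriseAIPolicy:
    regulatory_regimes: List[str] = field(default_factory=list)  # regimes the organization is most concerned about
    priority_use_cases: List[str] = field(default_factory=list)  # the two or three large use cases to ground the policy in
    sourcing_posture: str = "buy"                                 # "build", "buy" or "hybrid"
    lifecycle_checks: List[str] = field(default_factory=list)    # checks required at each lifecycle stage

policy = EnterpriseAIPolicy(
    regulatory_regimes=["EU AI Act", "ISO 42001", "G7 code of conduct"],
    priority_use_cases=["customer-support assistant", "contract summarization"],
    sourcing_posture="buy",
    lifecycle_checks=["use-case intake review", "pre-deployment risk assessment", "ongoing monitoring"],
)
print(policy)
```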

Karthik Ramakrishnan [00:16:12]

I think the key part of what you said is that this is not net new. We've been doing this, especially in regulated industries, for a very long time. So it's about helping people understand the delta between what they're doing today and what this is as an extension of that, versus a complete rewrite. I think that's where regulations are trying to get it right too: good regulation is not about creating something from scratch, but really saying, now that you have AI in here, what we've told you before you need to practice with AI as well. Fairness, lack of bias and non-discriminatory practices at a minimum, but also things like governance, security and privacy, which are also not new; we've done that on the cybersecurity side for a very long time, decades or more. And if you treat it as such, as an extension, I think it's an easier thing to get your head around. But again, that is easier said than done. What are your thoughts on that? Because that's the hard part: how do you get people, psychologically, over that hill?

Var Shankar [00:17:20]

For sure. Sitting where I do at the Institute, it does seem like there are several announcements every day and a lot happening in the ecosystem, and it can be challenging for organizations to deal with that pace and level of complexity. But we are at the beginning of the AI era, so this is a marathon: keep the fundamental pieces in mind. And I absolutely agree, though there are some elements that are net new here. One of them is the adoption rate. If ChatGPT or a new OpenAI model drops, the time between it dropping and it being adopted by millions of people is a matter of days, which is unprecedented in tech, or really in the history of any product. Then there's the pace of technology change and technology adoption, as well as, frankly, some of the capabilities. Like OpenAI's Sora: I was blown away when I saw that, I wasn't expecting it. Obviously you can tell that it's computer generated, but it's very impressive in what it can do on its face. So I think that's definitely net new. But I do think we need to stay grounded, because real use cases will be impacted at very different speeds. Going back to your initial question about the Canadian announcement, one of the things in it was funding for creative industries, which are a big part of the Canadian economy. The disruption that AI is going to cause in creative industries is playing out faster than it might in other industries, because the capabilities in digital media and entertainment are more significant, whereas in more traditional business use cases and other industries the use cases aren't necessarily as obvious or as dramatic. So, yeah, staying grounded, given your industry and the countries or jurisdictions you operate in, in the foundational elements you need to get in place is really important to keep sight of.

Karthik Ramakrishnan [00:19:32]

Very well said. Especially, as you said, in terms of speed: that illustration is fantastic, from this capability arriving 18 months ago or less to now having to put the right programs in place so people can upskill themselves. That is super fast in policymaking worlds, anyway. Now let's shift gears a little bit. Would you like to talk to us about the Responsible AI Institute's initiatives? Can you explain the structure and objectives of the assessment program itself, and ultimately, how does it help organizations align their AI applications with ethical and legal standards?

Var Shankar [00:20:10]

Yeah, great question. So our organization is focused on providing our member organizations with actionable and easy-to-use assessments and templates that help them develop and scale their responsible AI programs. We have an organizational-level assessment, a system-level assessment, and a supplier assessment. All of those are mapped to the major regulatory instruments, like the AI Act, the US executive order on trustworthy AI, ISO 42001 and the G7 code of conduct, but they're meant to be easy-to-use instruments that anybody at an organization should be able to understand and use. We hear the same set of questions from member organizations: what does good look like in terms of governance? How do I think about third-party risk? How do I bring generative AI into the organization? How do I prioritize which generative AI products to bring to market first? So we really try to be responsive to those pain points and questions that members have, and to provide community-informed templates and tools, informed by policymakers, by researchers and by specific use cases, to help organizations navigate the complexity in their context.

Karthik Ramakrishnan [00:21:29]

Cool. Yeah. And you recently wrote an article or a piece on an underestimated tool that could make AI innovation safer. I would love for you to elaborate on the role of certification programs in enabling responsible AI.

Var Shankar [00:21:42]

Yeah. So our responsible AI certification program is based on our system-level assessment. When you're thinking about AI governance, as you mentioned earlier, there's no carve-out for AI; existing laws all apply to AI systems, and that's the baseline you're starting with. Then you have new regulations coming in, like the AI Act and potentially Canada's AIDA. And then you have this whole area of soft law, which includes things we've talked about, like ISO 42001 and the G7 code of conduct, and also the Responsible AI Institute's certification program. The idea behind the certification program is that if your AI system needs a higher level of assurance, you'll need certification that's delivered by a third party. For example, you can use our system-level assessment internally: you can say, okay, I'm going to check these boxes and file it away, I undertook this assessment at some point in the lifecycle of designing the system and mitigated all the risks that came up. But for some assurance regimes, if you're going to have a large number of users, or if this is an experimental or early product and you're interested in signalling to others, like investors, government or consumers, that it was developed responsibly, then you might seek independent third-party delivery of the responsible AI certification scheme. Somebody who is not the Institute would then go through and say, okay, this is how this organization used the responsible AI assessment, given its use case and its jurisdiction, and this is how it came up with what it needs to do to mitigate the risks that came up, and that third-party organization delivering the certification determines whether it meets the requirements of the certification scheme. So that's the idea around soft law and the place the responsible AI certification program has in that ecosystem.

Karthik Ramakrishnan [00:23:41]

Interesting. So as we're thinking about how organizations go through this process, assessing themselves internally and getting external certification, you also have frameworks like the NIST framework from the National Institute of Standards and Technology, the AI risk management framework with its four pillars, which I think is actually a very, very solid one. From your experience, what are the top three challenges that you've seen enterprises face, the common top three, and what mitigation strategies can they put around implementation?

Var Shankar [00:24:16]

Sure. So on the NIST AI RMF piece, I agree with you that it's a very thorough taxonomy, in terms of the lifecycle stages, the functions, and the characteristics that it outlines, and all of our materials map very closely to the NIST RMF. In terms of the challenges that we see, it's hard to pick a top three, but developing AI risk assessments and integrating them into other risk assessments is a big one. Organizations don't necessarily know where their AI risk is coming from. Is it from explicitly procured third-party AI platforms? Is it from shadow AI, where employees are accessing commercially available AI systems? Is it from non-AI tools that you've already procured that are adding AI functionality? Or is it from AI you're developing internally? There are all these different entry points, so really thinking about what net-new risk AI introduces into your organization is a coordination problem. The best mitigation for that is, frankly, identifying all of those entry points, bringing together all the people involved in assessing AI at those entry points, and then having a conversation about the top ways to raise red flags around your AI use and how to maintain a common playing field: you don't want to apply a higher standard to an AI system that you're procuring or building, and then have something you've already procured quietly add an AI capability that's more significant. So really ensuring a level playing field for AI use within your organization is what you're seeking there. The other two I'll touch on very quickly. One is training people: not just traditional training in terms of courses, but also developing repeatable, trustworthy internal communities of practice where people can discuss the latest developments in the AI industry, in tooling, and in responsible AI practices and research; and to the extent that you can bring in external resources, whether from the Institute or from other industry peers, and compare notes, that's also a really important way to upskill. And then finally, just keeping up with the pace of change has been a major challenge, and we discussed that a little bit earlier. Some of the ways organizations are dealing with this are by having a centralized, small, nimble office of responsible AI, or a chief AI ethics officer type role, whose job it is to monitor developments in the AI landscape and then translate them in ways that are not overwhelming to the rest of the organization, in addition to all of the educational aspects I mentioned earlier. So those are some of the major ways in which organizations are mitigating these AI risks.
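(As a rough illustration of the entry-point inventory Var describes, one way to hold procured, shadow, embedded and internally built AI to the same standard is to record every system against a common risk scale and a single review threshold. The Python sketch below is an assumption for illustration only; the category names, risk tiers and threshold are not a prescribed taxonomy.)

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

# Illustrative sketch only: entry-point categories, risk tiers and the review
# threshold are assumptions for illustration, not a prescribed taxonomy.

class EntryPoint(Enum):
    PROCURED_PLATFORM = "explicitly procured third-party AI platform"
    SHADOW_AI = "employees using commercially available AI tools"
    EMBEDDED_AI = "non-AI tool that has added AI functionality"
    INTERNAL_BUILD = "AI developed internally"

@dataclass
class AISystemRecord:
    name: str
    entry_point: EntryPoint
    use_case: str
    risk_tier: str  # "low", "medium" or "high"; the same scale for every entry point

def needs_review(record: AISystemRecord) -> bool:
    """Apply one threshold to every entry point so no route into the
    organization gets a lighter standard than another."""
    return record.risk_tier in {"medium", "high"}

inventory: List[AISystemRecord] = [
    AISystemRecord("support chatbot", EntryPoint.PROCURED_PLATFORM, "customer service", "high"),
    AISystemRecord("CRM auto-summary", EntryPoint.EMBEDDED_AI, "sales notes", "medium"),
]
for record in inventory:
    print(f"{record.name}: review needed = {needs_review(record)}")
```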

Karthik Ramakrishnan [00:27:25]

Obviously, you've been in the space for the last few years and you're seeing enterprises adopt AI. Are you seeing that accelerating, or stagnating because of these risks? What is actually happening, where is adoption going, and what do you see as the future of AI in the enterprise?

Var Shankar [00:27:40]

Yeah, I'll give my high-level views on that, and I would also love to hear what you're seeing. It's an interesting place, because you have significant advances in AI models and in AI adoption, but they're making an uneven impact. In some of these creative industries they're massively disruptive; in some other industries and use cases you can see the potential, you can see how they could be transformative, but because of challenges with accuracy, explainability, privacy and access to ongoing high-quality data, organizations are not quite ready to push them past pilot and into production. Then you layer on top of that the economic environment we're in, where IT budgets at big companies are under a lot of strain, and yet at the same time AI budgets might be growing. So it's a very interesting set of factors coming together. I would say that in the long term, I'm very bullish that the investments organizations are making in the foundational elements of their AI programs, in governance, tooling and training, are going to pay off, especially given that the era of AI is in its infancy. But in the short term, I think you see a little bit of caution: organizations want to separate what's real from what's hyped. AI is simultaneously very transformative and overhyped. So I think that caution is well placed, and I think it'll be good to take a human-centered and use-case-centered rethinking of what these amazing new technological developments really mean in terms of products, services and creating value.

Karthik Ramakrishnan [00:29:26]

Yeah, no, I think that's right. Look, organizations have seen the ROI from AI systems already; it's just that subsequent investments shouldn't be experimental, you want to drive more value. So I get that part. I think there's another piece, which is that the risk in these models is delaying adoption. And maybe I have a different view of this, because you're right, we do see AI budgets growing, but at the same time there's a drag on moving things beyond the pilot phase, and I see this happening right now. A lot of people who just showed up on the AI scene these days think that generative AI is AI, that AI is equal to generative AI, but it's not. Generative AI is one type of implementation: a type of model, a multimodal system, whatever the case may be, and it just does things a little bit differently. But you have lots of other types of machine learning systems, and I think we do ourselves a disservice by using this term AI and putting everything under it. The point being that the investment in machine learning systems and the value extracted from them is there: you have a lot of applications in production, they're just not generative AI.

Var Shankar [00:30:52]

I think we're in agreement there for sure.

Karthik Ramakrishnan [00:30:56]

Right. And so what's going to happen, I think, is that I'm a little bit worried and concerned: because the hype is around generative AI, and the move from pilot to production hasn't yet happened, while there's a ton of VC investment and company investment going into these things, we may see a slump. We're going to see a correction; how big that correction is will depend. A couple of breakthrough implementations would be fantastic. And I think there was one example, oh, shoot, I forget the name of the company, but they put out a publication saying they saved about $40 million. Do you remember the name? Do you know what I'm talking about? They put in a customer service agent using generative AI and saved about $40 million in EBITDA, which was ridiculous; $40 million is not a number to laugh at. We'll do a quick search. But that's my point: I think we need more of these kinds of implementations to come through, more of these stories to come through. The takeaway is that AI is not just generative AI; we have a lot going on in production today that's outside of the generative AI realm. Generative AI will catch up, it's just a matter of time.

Var Shankar [00:32:13]

Yeah.

Karthik Ramakrishnan [00:32:14]

And the time may not be that long.

Var Shankar [00:32:15]

Yeah, absolutely. And that's where seeing everything as a continuum really helps: organizations becoming more digital, adopting modern ways of working, adopting cloud, developing good data science practices, and then eventually developing the ability to better understand and procure, or even build, what's considered non-generative AI on top of those foundational pieces. Becoming digital, learning organizations is really the best strategy to deal with all of the complexity, technological change and hype. I absolutely agree with you that the set of concerns with generative AI is a superset: you have to have responsible AI regardless, and then, in addition, you have to consider the new risks that generative AI poses. So it's really important to have that foundational program in place, and to develop a discerning eye towards which of these are really trustworthy methods that we can bring into production today, versus what is more experimental that we should be dabbling in and developing capacity around.

Karthik Ramakrishnan [00:33:24]

There you go. The company was Klarna; I just googled it. So Klarna put in a generative AI assistant for customer support, and it's now doing work equivalent to 700 full-time agents, leading to a 25% drop in repeat inquiries. That means it's getting things done on the first contact, resolving errands in less than two minutes compared to eleven minutes previously. That's a significant jump in savings and efficiency. And it's live in 23 markets, 24/7, in 35 languages. Can you believe it? That's how powerful these models are, it's fantastic. And of course, $40 million in savings as a result; let's not forget that.

Hey, listen, this has been a fantastic chat. Thank you so much. Any parting thoughts for young professionals? I get pinged quite a lot, and I'm sure you do too, more than me, by folks who want to get into AI ethics and responsible AI. Interestingly, these days it's a lot of computer science grads as well as social sciences grads; there's an increasing interest in getting involved in the space. So what advice do you have for those trying to get into the space?

Var Shankar [00:34:36]

Yeah, I have just two pieces of advice. The first is: do your technical homework. We talked about this at the beginning; regardless of your background, it doesn't take a lot to learn the basics of AI, machine learning and data science, as well as the peripheral fields, in terms of how organizations work and how they think about privacy, security and product development. I think that's the baseline. The other piece is: think about real problems and what value you'd like to provide in solving them. The technology stack will follow, the network will follow, but when you're solving real business problems or real societal problems, you can think through the set of tools you have available, and with the technical background you've built, you'll be well positioned to navigate the changes ahead. So those are the two pieces of advice that I'd have.

Karthik Ramakrishnan [00:35:35]

Well said. Couldn't agree more. I think there's definitely a lot to take away from your career and how you made that transition and got involved in these things. If people want to reach out to you, any advice on how they can do that?

Var Shankar [00:35:51]

Sure. Yeah. LinkedIn is probably the best way to do that. Add me on LinkedIn, send me a message, and always happy to connect.

And Karthik, thank you so much for inviting me onto the podcast, and congratulations on all the success that Armilla has been having and that you personally have enjoyed.

Karthik Ramakrishnan [00:36:06]

Awesome. Thank you so much, and thanks for the partnership. It's been a fantastic few years that we've been working together, so I'm really happy for your successes as well. Excellent. We'll stay tuned, and we'll see you in another episode of People & AI. Thank you.