Lan Guan, Chief AI Officer and Head, Center for Advanced AI, Accenture; Mahesh Saptharishi, Executive Vice President and Chief Technology Officer, Motorola Solutions; Jeremy Kahn, Fortune
Transcript
00:00Thanks so much for joining me.
00:02Thanks for having me.
00:03I think there's been a lot of talk that 2023 is this year of experimentation.
00:09A lot of people doing pilot projects.
00:11As we move into 2024,
00:13there are a lot of businesses that are thinking about,
00:15okay, now I want to take things out of pilot.
00:17I really want to deploy things at scale.
00:19I really want to see how I'm going to get ROI out of AI.
00:24I want to go to both of you.
00:26How do you look at and how do you advise companies
00:28to pick a pilot project to scale up?
00:32How do you come up with some metrics for success and some metrics around ROI?
00:37We'll go to you first, Lan.
00:38How are you advising Accenture's clients on this?
00:41Yeah, sure.
00:41I think that's a very common question.
00:44I think, first of all, we always advise the client on the definition of the scale.
00:48What does the scale mean, right?
00:50Yes, the scale means you're taking a lot of your POC experimentation into production,
00:55but also we remind clients that there are other imperatives
00:59that they need to be investing in things like,
01:02okay, setting up the very strong value management process to track the value
01:09and also focusing on some of the talent challenges.
01:12It's not one-size-fits-all.
01:14It's not one single answer,
01:16but starting from taking what you have built into production,
01:20but then also focusing on all the surrounding things
01:23that you will require to scale Gen AI.
01:26And should the metrics all be quantitative ones
01:28or should you use some qualitative metrics?
01:30Yeah, no, I don't think so.
01:32I think quantitative is where everybody's head is going to initially.
01:38And I think that's very important.
01:40In fact, one of the solutions that we have scaled within Accenture
01:44for our marketing team,
01:46we are tracking about a 20% to 30% reduction in the existing workflow steps.
01:53But then at the same time, we're tracking a lot of the qualitative metrics,
01:58things like employee satisfaction, things about culture changes.
02:03So I would say these two are both very, very important.
02:06Great. I want to go to you, Mahesh.
02:07How do you look at this within Motorola Solutions?
02:10How do you decide which products you're going to scale up?
02:13Yeah, so to begin with,
02:15Motorola Solutions is very focused on public safety and enterprise security.
02:19Our mission is solving for safer communities.
02:23When we think about a mission-oriented business,
02:28for us, we really focus on two metrics, top-level metrics.
02:31For public safety, if you save 60 seconds from a 911 response,
02:36you save 10,000 lives a year.
02:39In enterprise security, after about 20 minutes of watching video,
02:45your level of attention, the ability to detect something that is important,
02:50drops to about 20%, and it drops exponentially below that as time passes.
02:55So attention management is the key to success when it comes to
03:00enterprise security.
03:02When we apply AI, it's about how many seconds can be materially saved, and
03:06how much attention can be preserved for
03:09our operators in the enterprise security space.
03:12And that's kind of how we think about metrics in this context.
03:15Obviously, there's many other factors that play into it, but
03:18ultimately, those are the two things from a mission standpoint we have to go
03:21fulfill, and that's when we decide we need to move further with AI.
03:25Great.
03:26Another thing people are grappling with,
03:28particularly as they try to deploy at scale, is reliability and
03:30trustworthiness, particularly of generative AI.
03:33A lot of people are concerned about hallucinations.
03:35A lot of people are worried about using what is ultimately a probabilistic
03:38technology in a business context.
03:41How have you sought to kind of overcome those challenges with this technology?
03:46Maybe we'll go to Mahesh first, and then to Lan.
03:49Sure.
03:50So when we think about AI in the context of public safety and
03:54enterprise security, we think about it as mission-critical AI, right?
03:57And for us, it's the ABCDs of mission-critical AI.
04:02It's availability, it's benefit, it's capability, and it's design.
04:07Those are the four factors, alphabetically organized, not by priority.
04:12But it starts with benefit.
04:14We need to have that overwhelming benefit,
04:17consistent with the two metrics I mentioned before.
04:21With that overwhelming benefit,
04:23we need to deliver the user experience that is right.
04:26And that user experience is one where we can calibrate how the human plus
04:31the AI is better than each one by itself.
04:35And everything that we focus on is workflows that are manually executed
04:40that now can benefit from AI and automation.
04:43So there's a baseline of performance.
04:45And that baseline of performance needs to allow us to say, hey,
04:48human plus AI does better, provably.
04:51And that's the capability part, mapping the capability correctly to that.
04:55And ultimately, in public safety, we could have the Internet go down.
05:01There could be a hurricane, which means communication is difficult.
05:05Things that may run in the cloud may not be available to us anymore.
05:09So availability, and making sure that performance is consistent in that time of
05:14need, and if performance has to degrade, it does so
05:17gracefully, that becomes the availability element of it.
05:20Those four things, the ABCDs of mission-critical AI,
05:23that's the fundamentals of trustworthy AI for us.
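The graceful-degradation idea Mahesh describes, where cloud infrastructure may be lost in a hurricane and performance has to degrade gracefully, can be sketched roughly like this. Everything here is illustrative: the backend names and fallback chain are hypothetical, not Motorola Solutions' actual architecture.

```python
# Hypothetical sketch of graceful degradation for mission-critical AI:
# prefer the most capable backend, fall back to simpler local ones
# when connectivity or infrastructure is lost.

def run_inference(request, backends):
    """Try each backend in order of capability; degrade gracefully."""
    for backend in backends:
        try:
            return backend(request)
        except ConnectionError:
            continue  # backend unreachable (e.g. cloud down in a storm)
    raise RuntimeError("no inference backend available")

def cloud_model(request):
    raise ConnectionError("cloud unreachable")  # simulate an outage

def on_prem_model(request):
    return f"on-prem result for {request!r}"

def rule_based_fallback(request):
    return f"keyword match for {request!r}"

result = run_inference("suspicious activity, gate 4",
                       [cloud_model, on_prem_model, rule_based_fallback])
print(result)  # falls through to the on-prem model
```

The key design point is that each step down the chain trades capability for availability, so the system stays useful during an outage instead of failing outright.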
05:26Right, Lan, and how do you look at this at Accenture and
05:29when you're talking to your clients?
05:30And while Lan's answering, I'm gonna come to the audience for
05:33questions in a minute, so please think of your questions.
05:35Yeah, sure, it's a very important question.
05:37And how we bring trustworthy applications to client,
05:42I will highlight three things.
05:43One is the accuracy.
05:45I think the, again, performance accuracy of Gen AI,
05:48especially when you're considering it at the scale stage, is so important, right?
05:53So there's no such thing as just going and talking to an off-the-shelf AI
05:59in an enterprise setting, and it will give you the accurate response, right?
06:03There's a lot of performance tuning, a lot of the techniques that our Center for
06:08Advanced AI team is bringing to the market, so that we can tune these off-the-shelf
06:12models, right, to actually help solve a lot of what I call the enterprise
06:17messiness, right, within each organization.
06:19So accuracy is key.
06:20I would say the second one is the adoption, right?
06:23So when we say, okay, this thing is trustworthy,
06:26you have to be very audience-centric, right?
06:29So who are the users, right?
06:31How do we provide these kinds of capabilities to users so
06:34that they can have the hands-on experience? That's super important.
06:38I would say the third one is ensure the user feedback
06:42is actually being taken into consideration.
06:45So leveraging techniques such as reinforcement learning with human
06:50feedback, have the users actually be part of these communication flows so
06:55that their feedback, their interaction with the AI behind the scenes, is being
07:00comprehended, digested, and leveraged by our research scientists to make
07:05this application more trustworthy.
07:07I think these are the three reminders that I want to give to everyone.
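The feedback loop Lan describes, capturing user interactions and ratings so researchers can feed them into preference tuning such as RLHF, can be sketched as a minimal logging layer. The schema and function names here are illustrative assumptions, not Accenture's actual pipeline.

```python
# Hypothetical sketch of a feedback-capture loop: log each user
# interaction with a rating so it can later feed preference tuning
# (e.g. RLHF). Schema and names are illustrative only.

import json
from dataclasses import dataclass, asdict

@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    rating: int  # +1 thumbs-up, -1 thumbs-down

feedback_log: list[FeedbackRecord] = []

def record_feedback(prompt: str, response: str, rating: int) -> None:
    """Store one user interaction plus the user's judgment of it."""
    feedback_log.append(FeedbackRecord(prompt, response, rating))

def export_preference_data() -> str:
    """Serialize ratings for a downstream tuning pipeline."""
    return json.dumps([asdict(r) for r in feedback_log])

record_feedback("summarize this incident report", "Summary: ...", +1)
record_feedback("draft a reply to the vendor", "Dear sir ...", -1)
print(export_preference_data())
```

The point is simply that the feedback has to be captured in a structured, exportable form before any reinforcement-learning step can use it.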
07:10Right, and I know you've done some things with coupling different models together,
07:13and you also have this model switchboard product that, if you're trying
07:17to figure out which model you should route to for which use case,
07:20helps people do that, is that right?
07:21Yeah, absolutely.
07:22I mean, model-based switchboard gives you the flexibility and agility,
07:26which I think is one of the key adoption factors, right, for trust.
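The switchboard idea, routing each request to a model suited to its use case, can be sketched in a few lines. The routing table and model names below are invented for illustration and are not Accenture's actual product interface.

```python
# Minimal sketch of a "model switchboard": pick a model per use case,
# with a general-purpose default. All names are hypothetical.

ROUTES = {
    "translation":   "small-multilingual-model",
    "summarization": "mid-size-general-model",
    "code":          "code-specialized-model",
}
DEFAULT_MODEL = "large-general-model"

def route(use_case: str) -> str:
    """Return the model to use for a given use case."""
    return ROUTES.get(use_case, DEFAULT_MODEL)

print(route("translation"))   # small-multilingual-model
print(route("legal-review"))  # large-general-model (fallback)
```

The flexibility Lan mentions comes from the indirection: swapping or adding a model is a table change, not a rewrite of every caller.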
07:30Interesting, and I know Mahesh at Motorola Solutions,
07:32you've done some interesting things with sort of marrying predictive AI with
07:36generative AI.
07:37So you may have predictive AI that works on computer vision,
07:40identifies what's being seen in a video feed, but
07:43then can alert the user in natural language, is that how that works?
07:46Yeah, so let me give you an example, and this is a combination of a few.
07:53A person comes into a large city,
07:55the person's in that large city with a child with special needs, let's say.
08:02Suddenly the person looks around, gets distracted a little bit,
08:05the child's nowhere to be seen, calls 911.
08:09But it turns out that that person only speaks Portuguese,
08:12doesn't speak English.
08:14First step within the 911 system, we have the capacity to translate in real time
08:19between Portuguese and English, and English to Portuguese.
08:21So now, even though there's a language disconnect there,
08:24there's a conversation that is possible.
08:27Now, during that conversation, the person describes how that child was clothed.
08:33And as soon as that description of how that child was clothed is understood by
08:38the system via generative AI mechanism,
08:41that description is then pushed to every single camera in that locality.
08:46Where now every camera's automatically looking, not through generative means,
08:50but through discriminative means, differential means, to say, hey,
08:54do I see any person wearing that set of clothing fitting that description?
09:00When it sees somebody fitting that description,
09:03that information is then automatically distributed to law enforcement or
09:09first responders who may have a phone with an app,
09:13they may have radios with a screen, and they see that description.
09:16Now, if this child is with special needs, the other additional context,
09:19the generative AI context that needs to be applied,
09:21is when that person approaches the child,
09:24maybe there's some special care that needs to be taken in that process.
09:27So that context, but with the ability to have the sensory infrastructure,
09:33that is where there's the combination of generative AI plus traditional.
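The pipeline Mahesh walks through, a generative step turning free-text description into structured attributes, then cheap discriminative matching on every camera, can be sketched like this. The extraction step is a toy stand-in (in a real system it would be an LLM call), and all function names and the detection schema are hypothetical.

```python
# Hedged sketch of the generative-plus-discriminative workflow:
# a generative model extracts structured attributes from a 911 call
# transcript; each camera then runs a cheap discriminative match.

def extract_description(transcript: str) -> dict:
    """Stand-in for the generative-AI step that turns free text into
    structured attributes (would be an LLM call in production)."""
    attrs = {}
    if "red jacket" in transcript:  # toy keyword rule, illustration only
        attrs["clothing"] = "red jacket"
    return attrs

def camera_matches(detections: list[dict], description: dict) -> list[dict]:
    """Discriminative side: filter camera detections against the
    structured description pushed out by the generative step."""
    return [d for d in detections
            if all(d.get(k) == v for k, v in description.items())]

transcript = "The child was wearing a red jacket and blue shoes."
description = extract_description(transcript)

# simulated per-camera detections
detections = [
    {"camera": "cam-12", "clothing": "red jacket"},
    {"camera": "cam-07", "clothing": "green coat"},
]
hits = camera_matches(detections, description)
print(hits)  # [{'camera': 'cam-12', 'clothing': 'red jacket'}]
```

The division of labor matters for cost: the expensive generative step runs once per call, while the per-frame work on every camera stays discriminative and cheap.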
09:37I think that's a fascinating use case.
09:38I'm just gonna interrupt you there cuz I wanna go to questions from the audience.
09:41Who has questions for Lan and Mahesh?
09:43Please raise your hand, I'll come to you.
09:44I've got more questions for them if everyone's feeling a bit sleepy this
09:48morning and doesn't wanna ask questions, but who's got a question?
09:51I'm not seeing any questions yet, so I'm gonna keep going.
09:54One of the big concerns at scale with this technology is cost.
09:57People are very concerned that cost will get out of hand and
10:00that you're not gonna see ROI because it's simply too expensive
10:03to run these models at scale.
10:05Lan, how are you seeing clients deal with that issue of cost?
10:09Yeah, no, that's a very common question.
10:11And I would say it's super important to understand the full architecture of
10:17what you are standing up so that you understand, right?
10:20You can actually break it down into the different cost components, right?
10:24Are you talking about the training costs, right?
10:26Are you talking about data acquisition costs?
10:29Are you talking about the inferencing costs, right?
10:31A lot of times I think this conversation is kind of mixed together, right?
10:35Which makes it difficult to actually pinpoint where the cost is coming from.
10:40So one of my retail clients, they face the exact same problem.
10:44And when we actually went through this,
10:46what I call the cost diagnostic exercise,
10:49we found out the main pain point is actually with inferencing.
10:53And it's simply because a lot of the data treatment that they have done
10:57behind the scenes, and how they are actually doing the prompting
11:01with the existing model, is not being done right.
11:05It's not being optimized.
11:06So this means they are not taking into consideration the long context windows
11:11that are made available to them now.
11:13So they're making a lot of unnecessary round trips,
11:18driving up token consumption.
11:19So just to give you one example, it's almost like talking to a doctor and
11:23describing the symptoms.
11:25And then in this case, the doctor will prescribe very specific things
11:30along the entire stages of the architecture.
11:34I think that is the approach.
11:35I really love the example that Mahesh was talking about.
11:38I think that really brings to life the complexity of
11:44how we are taking this kind of gen AI into enterprise.
11:47Because, like I said, it's not just you prompting an off-the-shelf model
11:51and getting what you need.
11:52There's a lot of refinement, lots of optimization that needs to happen.
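Lan's round-trip point can be made concrete with back-of-the-envelope arithmetic: every separate call re-sends the fixed prompt overhead, so batching work into one long-context call pays that overhead once. The token counts below are invented purely for illustration.

```python
# Rough sketch of the inference-cost argument: per-call prompt
# overhead multiplies with round trips, while one long-context
# call pays it once. Token counts are illustrative assumptions.

SYSTEM_PROMPT_TOKENS = 500   # instructions re-sent on every call
PER_DOC_TOKENS = 300         # payload tokens per document

def cost_many_round_trips(n_docs: int) -> int:
    """One call per document: overhead paid n times."""
    return n_docs * (SYSTEM_PROMPT_TOKENS + PER_DOC_TOKENS)

def cost_single_long_context_call(n_docs: int) -> int:
    """One call carrying all documents: overhead paid once."""
    return SYSTEM_PROMPT_TOKENS + n_docs * PER_DOC_TOKENS

n = 50
print(cost_many_round_trips(n))          # 40000 tokens
print(cost_single_long_context_call(n))  # 15500 tokens
```

Real pricing also depends on model choice and output tokens, but the structure of the saving, fixed overhead amortized across a batch, is the point of the diagnostic exercise Lan describes.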
11:56Interesting.
11:57And Mahesh, I mean, the solution you talked about sounds great.
12:00The finding of a child, it's hard to put a value on that necessarily.
12:04But for the person who's buying your camera or buying your solution,
12:06they do have to put a number on that.
12:08How do you price that and is it twice as expensive,
12:12three times as expensive as a sort of dumb camera system would be?
12:16Yeah, and it's like that commercial: saving a life is priceless.
12:22That's right, but all that said, philosophically,
12:27we first focus on what is the most difficult problem we need to solve.
12:32And then secondly, we take an Occam's razor approach to it.
12:34What is the simplest solution to the problem?
12:37There's a tendency, given the power and capability of generative AI,
12:41to take the giant hammer and apply it to everything that looks like a nail.
12:46And sometimes you need the expertise in house to understand,
12:50how do you calibrate the right complexity of solution that applies to that problem?
12:55By the way, not just from a cost standpoint,
12:57because the simpler the solution, it's easier for you to actually evaluate it,
13:02test it, make sure that it doesn't do something that was unintentional and
13:05bad, potentially.
13:07When we think about it in that context, we can actually then say, hey,
13:10how do we now break up the problem into the right chunks?
13:13What can run in more expensive infrastructure?
13:16What can run in something that perhaps doesn't need that expensive an infrastructure?
13:21Where is it okay for us to have some latency in the system?
13:25And that translates to maybe some cost advantage.
13:27Where does it have to be in real time?
13:29That distributed systems thinking with AI,
13:32then becomes the other important factor here that we take into consideration.
13:36In that example I gave you,
13:38everything that the cameras do is built into the cost of the camera.
13:42And those cameras are no more expensive than perhaps dumb cameras that you would.
13:48That's what I was going to ask.
13:48So there's not; the capabilities are built into the price of the camera,
13:51and the camera's not outrageously more expensive than a normal dumb camera would be.
13:54That's exactly right.
13:56And we selectively apply the translation capability.
13:59We selectively apply the context extraction capability where that is more
14:03on demand versus operating on a continuous stream of data.
14:07So we can calibrate the cost profile appropriately.
14:10Got it.
14:10Lan, I want to ask about talent and people, because that's part of this too.
14:14I mean, is that one of the impediments here?
14:17And how much do people need to think about, okay,
14:19I'm going to scale this out to my whole organization.
14:21But what kind of training do you need around your workforce to have success with
14:26that, and is that where people are maybe falling down also?
14:28I think some of these cases where you're not seeing ROI,
14:31it's partially because you haven't trained the workforce to actually use the technology.
14:33Absolutely, a huge impediment, right?
14:35We have seen cases all over the place where clients are not ready to embrace
14:41Gen AI at scale, simply because they don't have the right people.
14:44They don't have the right skill set, right?
14:46It's like the hot potato, right?
14:48How do you embrace that if you don't have the right skill set?
14:50But I also want to call out, it's not just the technical skills.
14:54Yes, that is super important.
14:56In fact, in Accenture's case, we have transformed all of our AI practitioners
15:02to be future-ready roles, right?
15:06Things like AI computational scientists.
15:09Things like, okay, you need to be the Gen AI architects.
15:12So that's important to actually beef up the technical competency, right?
15:17So that you can master, you can harness the power of this Gen AI.
15:20But it's equally important to train the business users so
15:24that this whole adoption concept, the acceptance,
15:27the diffusion of the AI throughout the organization,
15:32we're relying on those users to play their role.
15:34So I just want to remind everyone, it's really two sides of the same coin:
15:38technical competency, and also on the business side,
15:42users need to be able to embrace Gen AI on a daily basis.
15:45That's great.
15:46That's a good note to end on.
15:47I'm afraid we're out of time.
15:48But thank you so much for being with us.
15:50That was really fascinating.
15:52Thank you again, Lan and Mahesh.
15:59That was fantastic.
16:01But stick around.
16:02Don't miss this really exciting announcement coming right up.
16:06The rate of change is at an all-time high, and
16:09your people will need new skills to embrace it.
16:12Accenture's LearnVantage is here to help.
16:15We work to understand your organization's strategy and
16:18assess the skills your people have today and need for tomorrow.
16:23We offer industry-recognized courses to deliver personalized learning
16:27with certifications and measurable outcomes.
16:30Together, we build the skills your people want and
16:33your organization needs to grow faster.
16:37All right, it's me again.
16:41I have a very exciting announcement to make here as Accenture's Chief AI Officer.
16:49Today is our day to announce, together with the Stanford Institute for Human-Centered AI,
16:56this very exciting program, the Gen AI Scholar Online Program.
17:04So this is the, like I said, we just talked about some of the talent
17:09challenges, right, on Jeremy's fireside chat.
17:13And that's one of the top challenges that many organizations face.
17:18Accenture wanted to play a very strategic role, very big role in this case.
17:23So in our case, we have trained a lot of our people internally, but
17:27we felt like that's not enough, we need to go to the best, right?
17:30So that's why about two years ago,
17:33we formed this Accenture partnership with the Stanford Institute for Human-Centered AI.
17:37And the first thing that we did together with HAI is actually set up this
17:42foundation model scholar program.
17:45So Accenture, Stanford Foundation Model Scholar Program for
17:49us to take our top leaders to Stanford campus, right, to learn from the top
17:55faculty there, the latest and greatest of the Gen AI technology.
17:58But that's not enough, not everybody can go to Stanford, right?
18:01So that's why we created this online program that is available to everyone,
18:05every organization, and for you to access this program,
18:10please go to our LearnVantage platform, and
18:14actually look up the Gen AI Scholar.
18:17And you will find all the information that you can sign up for this program.
18:21Panos, as my partner for this, you want to say a couple things here?
18:26Thank you, Lan, and hi, everyone.
18:28We are particularly excited to be launching this program with Accenture today.
18:32It was a byproduct of a lot of hard work and
18:36a lot of love and passion from both organizations.
18:39So as most of you might know, we are the Stanford Institute for
18:43Human-Centered AI, which means that we have a lot of ideas and
18:48we want to believe some sort of a playbook of how organizations should be designing,
18:53building, and deploying AI responsibly at scale.
18:57And we found over the course of the years that a lot of the alpha
19:03comes from the osmosis of this ongoing dialogue between
19:08the magic that's happening on the foundational research side, and
19:12then the action on the applied side, on the applied intelligence side.
19:16So in many ways, the industry program at Stanford,
19:20what we do is building bridges between those two worlds.
19:23So our ambition is with this program, we'll build bigger bridges with Stanford and
19:29the rest of the world, and allow access to the use cases and
19:35theoretical background in a more comprehensive and inclusive way.
19:42So that being said, thanks again to Lan and Accenture for leading this effort.
19:47Thanks for the great partnership, yes.
19:49And we will welcome everyone.
19:49Enjoy learning.
19:51Enjoy learning.
19:52Okay.