Future of Finance 2024: Navigating Risks and Rewards of Latest Technologies

Tal Cohen, President, Nasdaq, Inc.
Katherine Wetmur, Managing Director and Chief Information Officer, Cyber, Data, Risk and Resilience, Morgan Stanley
Moderator: Luisa Beltran, FORTUNE
Transcript
00:00 Hi, everybody.
00:01 Thanks for coming today.
00:02 So we're just going to dive right in.
00:05 So in the past few years, we've seen some major changes to market dynamics.
00:09 There's rising regulatory scrutiny and clients have changed their expectations.
00:13 Tal, can you speak about the regulatory changes involving T+1, Basel III, and how it's impacting
00:21 the capital markets and the banking sector?
00:23 But first, what is T+1?
00:26 T+1 is just one day faster than T+2, but it's a settlement cycle for equities and other
00:33 asset classes.
00:34 So when you execute a security, you need to settle that security, settle and clear that
00:39 security.
00:40 Today, it's T+2, so 48 hours after that transaction, and that's being brought in to T+1, or 24 hours.
00:47 And that requires the industry to invest in automation tools and processes that allow
00:52 them to ensure that they're not taking undue risk as a result of bringing in this settlement
00:56 cycle.
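To make the settlement arithmetic concrete, here is a minimal Python sketch, not from the panel, that adds business days to a trade date. The function name and the weekend-only calendar are illustrative; a production system would also consult the exchange holiday calendar.

```python
from datetime import date, timedelta

def settlement_date(trade_date: date, business_days: int) -> date:
    """Add business days to the trade date, skipping weekends only.
    A real settlement calendar would also skip exchange holidays."""
    d = trade_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return d

trade = date(2024, 5, 30)               # a Thursday
print(settlement_date(trade, 2))        # T+2 settles Monday, 2024-06-03
print(settlement_date(trade, 1))        # T+1 settles Friday, 2024-05-31
```

Shortening the cycle from T+2 to T+1 simply moves settlement one business day closer to the trade, which is what compresses the window for allocations, affirmations, and funding and drives the automation Tal mentions.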
00:57 So yeah, so I'll talk a little bit about --
01:00 And regulatory changes?
01:01 Yeah.
01:02 And that was the tip of the iceberg when you mentioned T+1 and Basel III endgame.
01:06 There's just a number of regulations and a number of changes, and they're regional, they're
01:11 global, and they're going downstream in terms of the impact that it's having.
01:16 And the three themes that we hear from customers are really, really consistent.
01:20 It's one, they want to streamline and comply.
01:22 And they want to streamline and comply in the following way.
01:24 They want to better manage the explosion of regulatory costs and obligations while simultaneously
01:29 complying with enhanced liquidity and capital requirements.
01:32 Really not easy to do, and that's a big piece of work.
01:35 But once they've done that, they're talking to us about simplifying and standardizing.
01:39 That's kind of the second theme I'm hearing from customers, where these organizations
01:43 become very complex, operationally complex.
01:46 And there's lots of silos, and they want to break down those silos that introduce operational
01:50 and financial risk.
01:52 And then once they've done that, we're talking to clients about modernizing and digitizing.
01:56 And this is clients' ability to keep up with forces of change, forces of disruption, whether
02:01 that's competitive or emerging technologies.
02:04 So a number of customers are looking for leaders in this space, but just as importantly, somebody
02:08 who understands the challenges they face.
02:10 Because each of those challenges requires a real deep understanding of how those institutions
02:15 work.
02:16 Katherine, can you talk about modernization efforts?
02:19 Yes.
02:20 I mean, we spoke quite a bit about it, right? All of the items you mentioned, we're having
02:25 to implement, right?
02:26 So T+1, Basel III Endgame, any of these things.
02:29 And there's a significant amount of work.
02:30 So the focus that we've had in our technology area at Morgan Stanley is saying,
02:35 whenever we have an opportunity, whether it's process changes, process optimization,
02:41 business processes, or regulatory requirements, how can we also think about modernizing
02:47 our technology stack?
02:49 You have an opportunity to do some of these things together.
02:52 When you have big projects that are going to require a lot of work, it's an opportunity
02:55 to do that.
02:56 And the more that we modernize our applications and more that we modernize our technology,
03:01 we become much more nimble.
03:03 So back to some of the comments you made earlier, how do you automate more?
03:06 How do you do more straight-through processing?
03:07 How do we make sure that we're using modern tech to do it?
03:10 How can we make changes really quickly?
03:11 How can we be more API-driven?
03:13 All of that has been a big focus of what we're doing in our modernization play.
03:17 How can we make sure we can scale?
03:19 Should we use cloud?
03:20 Can we use bursting?
03:21 All of that is needed as we think about all of these big regulatory projects.
03:24 Great.
03:25 And so everyone's talking about Gen AI today, and we're going to continue with it.
03:31 Morgan Stanley was a first mover with its partnership with OpenAI.
03:34 Katherine, can you tell us about the challenges this created?
03:38 Sure.
03:39 I'll talk a little bit about it.
03:40 I mean, it was very exciting for us to be a first mover and one of the financial industry's
03:47 leaders in working with OpenAI.
03:50 And what we found with it is we learned a significant amount from OpenAI.
03:53 They came with a lot of expertise, and they learned from us how to work with
03:57 a regulated financial institution.
04:01 So the first thing we learned is you've got to make sure you're picking the right use
04:04 case.
04:05 There are so many generative AI use cases people have, and you've got to make sure you're
04:09 using one that you can implement, that you can implement in the time you want, that has an
04:13 impact on your businesses, that you can learn from, and that you minimize the risk with.
04:17 So what we did originally working with OpenAI is we said, "Let's look at something in our
04:22 advisors area."
04:24 Our advisors could work with our clients better if they can have more information at their
04:29 fingertips and be able to answer their questions faster.
04:32 So we said, "Okay, so let's look at how we can create a chatbot."
04:35 We call it the AI @ Morgan Stanley Assistant.
04:38 We didn't name it a name, like a lady's name or a man's name.
04:41 We said it's an AI.
04:42 It's not a person.
04:44 And that AI can assist our advisors when they ask it questions.
04:49 And the key thing that we did, which was very important as we looked at everything, we didn't
04:53 want any data loss.
04:55 We didn't want any hallucinations.
04:56 We didn't want these things.
04:57 So we curated our own data.
04:59 It's only going against the data that we have at Morgan Stanley.
05:02 We also made sure we put all the controls around it.
05:05 It meant when we were first starting out, a lot of conversation with people in the privacy
05:09 side, people in our legal, people in our cyber area, people in our information security area, and
05:14 making sure we were choosing the right use case for that.
05:18 And we're going to continue to do more and more, but I think it's just the beginning
05:21 is that you have to make sure you're building some of these guardrails to start with.
05:25 And a lot of these products earlier on didn't have those guardrails in them.
05:28 You're starting to see a lot more of that happen, and we've had to build some of those
05:31 ourselves.
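As a rough illustration of the curated-data approach Katherine describes, the Python sketch below grounds answers in an internal document store and tells the model to refuse when the sources are silent. The document IDs, the term-overlap retriever, and the prompt wording are hypothetical stand-ins, not Morgan Stanley's implementation.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    text: str

# Hypothetical curated corpus; in practice this is the firm's vetted research and policy content.
CURATED_DOCS = [
    Doc("ret-001", "Retirement accounts: contribution limits and rollover rules ..."),
    Doc("fee-014", "Advisory fee schedule for managed portfolios ..."),
]

def retrieve(question: str, docs: list[Doc], k: int = 2) -> list[Doc]:
    """Rank curated documents by simple term overlap (a stand-in for embedding search)."""
    q_terms = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_terms & set(d.text.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str, context_docs: list[Doc]) -> str:
    """Constrain the model to the retrieved context to reduce hallucination."""
    context = "\n\n".join(f"[{d.doc_id}] {d.text}" for d in context_docs)
    return (
        "Answer the advisor's question using ONLY the sources below. "
        "If the sources do not contain the answer, say you do not know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

question = "What are the rollover rules for retirement accounts?"
prompt = build_prompt(question, retrieve(question, CURATED_DOCS))
# The prompt would then go to an approved model endpoint behind the privacy, legal,
# and information-security controls described above.
```

The design choice this mirrors is that the model never free-associates: it only sees firm-approved text plus an instruction to decline when that text does not cover the question.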
05:32 So when did you introduce it?
05:34 We introduced it about a year ago.
05:36 We spent about six months building it.
05:37 And a key thing with it is we constantly get feedback from our advisors.
05:42 So the advisors enjoy using it, but we did some things like just give us thumbs up, thumbs
05:45 down.
05:46 Did it give you the answer you're wanting?
05:47 Did it not?
05:48 And you're able to iterate on that very quickly.
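As a minimal sketch of that thumbs-up/thumbs-down loop, assuming hypothetical helpers rather than Morgan Stanley's actual telemetry, something like this is enough to spot which answers need work:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Feedback:
    question: str
    answer_id: str
    thumbs_up: bool

feedback_log: list[Feedback] = []

def record(question: str, answer_id: str, thumbs_up: bool) -> None:
    """Store one advisor vote on one generated answer."""
    feedback_log.append(Feedback(question, answer_id, thumbs_up))

def satisfaction_rate() -> float:
    """Share of recorded votes that were thumbs up."""
    votes = Counter(f.thumbs_up for f in feedback_log)
    total = votes[True] + votes[False]
    return votes[True] / total if total else 0.0

record("How do 529 plan withdrawals work?", "ans-123", True)
record("What is the margin lending rate?", "ans-456", False)
print(f"{satisfaction_rate():.0%} thumbs up")  # 50% in this toy example
```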
05:51 And how many advisors use it now?
05:53 Thousands.
05:54 Fast.
05:55 And so is it a thumbs up or is it a thumbs down?
05:57 It's a thumbs up.
05:58 It's definitely a thumbs up.
05:59 And it makes them quicker?
06:00 It makes them quicker and able to answer questions without having to go through reams of paper
06:03 and look things up.
06:05 Tal, you guys also just announced the launch of your Gen AI feature for market surveillance
06:11 technology.
06:12 How's it going?
06:13 Yeah, very exciting.
06:14 So yesterday we announced this, and it's really core to our mission.
06:17 So when we think about AI, whether it's Gen AI or algorithmic AI, the first question we
06:22 ask is, are these use cases, are they going to deliver our mission in a bigger way, in
06:28 a better way?
06:29 One of the missions we have at Nasdaq is to ensure the integrity of the markets and ensure
06:33 investor confidence.
06:34 And so we have a surveillance solution that Morgan Stanley and others use, exchanges use,
06:39 and regulators use.
06:40 And what we've done is we've announced that we're going to have a Gen AI co-pilot, investigation
06:45 co-pilot.
06:46 And the way that it works is really cool.
06:48 It allows an analyst to truncate the investigation time and increases the efficacy of each investigation.
06:54 The way it works is the surveillance tool will generate an alert of a potential market
06:58 abuse or market manipulation, really serious issue.
07:02 That analyst then needs to undertake a number of manual activities once that alert
07:06 is generated.
07:07 And it could take hours.
07:08 In fact, it takes four to six hours every time an analyst needs to look at something.
07:12 But what if we can have Gen AI provide the context and a summary of all that extensive
07:18 information to the analyst, shaving off about two hours, or 33%, of the
07:24 investigation time?
07:25 What that allows is the analyst now focuses on higher value activities, makes a better
07:30 assessment of the situation, and the client can scale that across their operations.
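A hedged sketch of what the summarization step in such a copilot could look like; the Alert fields, the sample order records, and the prompt text are invented for illustration and are not Nasdaq's product code.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    alert_id: str
    alert_type: str                     # e.g. "spoofing", "wash trading"
    instrument: str
    orders: list[dict] = field(default_factory=list)   # related order/trade records

def build_summary_prompt(alert: Alert) -> str:
    """Gather the evidence an analyst would otherwise assemble by hand and
    ask the model for a structured briefing to start the investigation."""
    evidence = "\n".join(
        f"- {o['timestamp']} {o['action']} {o['qty']} @ {o['price']}" for o in alert.orders
    )
    return (
        f"Potential {alert.alert_type} alert {alert.alert_id} on {alert.instrument}.\n"
        f"Related activity:\n{evidence}\n\n"
        "Summarize the pattern, note which behaviors are consistent with the alert type, "
        "and list what the analyst should verify next."
    )

alert = Alert(
    "A-0042", "spoofing", "XYZ",
    orders=[
        {"timestamp": "09:31:02", "action": "BUY", "qty": 5000, "price": 10.01},
        {"timestamp": "09:31:05", "action": "CANCEL", "qty": 5000, "price": 10.01},
    ],
)
print(build_summary_prompt(alert))  # sent to a vetted model; the analyst reviews the output
```

The analyst still makes the call; the model only front-loads the context gathering that accounts for much of the four to six hours per alert.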
07:35 So it has lots of benefits in terms of productivity.
07:38 It lives up to our ambitions to make sure that we're protecting the market.
07:41 And really importantly is the guardrails that we do.
07:43 And Katherine mentioned this.
07:45 So we've not only been putting this into POC, we're just out of POC, and we're
07:50 going to use it for our own markets.
07:52 And that's really important for our clients to understand.
07:54 If we're using it for our own markets, that's a proof point that I think is really powerful.
07:58 And so we'll look at use cases like this.
08:00 It's one of many.
08:01 We probably have 15 to 20 POCs that we're running.
08:04 But they have to be specific use cases with an understanding how we're going to deliver
08:07 value to our clients.
08:08 And we want to do it safely and responsibly.
08:11 So is it a thumbs up?
08:13 It's a thumbs up.
08:14 I mean, we haven't launched it yet, but the POC, like I said, it showed a reduction of
08:17 33% in the investigation time.
08:20 And think about what that means in today's dynamic markets where market manipulation,
08:24 the way it was done in 1999 is not the way it's being done today with the advancements
08:30 in technology.
08:31 So putting intelligent tools in the hands of our clients so that they can better surveil
08:36 the markets and we can help them and be their partner, that's terrific.
08:40 That's meeting our mission every day all day long.
08:43 >> Interesting.
08:46 So with all the focus on AI, are people forgetting about cyber attacks?
08:51 Katherine, can you talk about that?
08:54 What are big banks doing?
08:55 >> I don't think we ever forget about cyber attacks.
09:00 I do want to thank Tal because I'm very excited that it's going to take us half the time to
09:03 deal with any of our alerts going forward.
09:05 So that's a very important thing for us.
09:07 And it will be worth going through the paperwork of all the model controls as we bring
09:11 the new products in.
09:13 But on the cyber side, everybody, right, you can't open a paper any day and not see some
09:19 kind of cyber event, something about phishing, something happening to individuals,
09:23 something happening to companies.
09:26 And we never celebrate anybody having a bad day, right?
09:29 You know, we don't want to see that anywhere.
09:31 So we work very collectively across all the banks, all of our CISOs, that's our chief
09:37 information security officers.
09:39 We know each other across the industry, across all of our banks.
09:44 We work very closely together.
09:46 We all sit on various types of industry groups, whether it's FS-ISAC or the ARC, which is a
09:51 resiliency group.
09:53 And we're doing a lot of things, right?
09:55 So you talked about AI, you know, the amount of data that every single bank has to collect
10:00 on any day to understand malware, to understand any anomalies, to look at detections, to understand
10:06 business email compromise.
10:08 There's a tremendous amount that all the banks do for that.
10:11 We have to also protect our perimeter.
10:12 Think about the amount of emails we get, the amount of file uploads, the amount of things
10:17 that people are pulling off the Internet.
10:19 We have to protect our perimeter to make sure things aren't coming in through those routes.
10:23 And everybody's having to do those same things.
10:24 And it's really a big data challenge, right?
10:27 The amount of data that you're looking at to find these anomalous behaviors is huge.
10:31 So it's a huge technology challenge.
10:33 And we're all working together, fighting the good fight in this space.
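As a toy illustration of the anomaly hunting Katherine describes, the sketch below flags a daily event count that deviates sharply from a user's baseline. The z-score rule and the upload counts are hypothetical; real detection pipelines use far richer features and models.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's count if it sits more than `threshold` standard deviations
    from the historical baseline. A deliberately simple stand-in for real models."""
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

uploads_last_30_days = [12, 9, 14, 11, 10, 13, 12, 9, 11, 10] * 3
print(is_anomalous(uploads_last_30_days, 11))   # False: in line with the baseline
print(is_anomalous(uploads_last_30_days, 400))  # True: worth an analyst's attention
```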
10:37 Has AI increased the cyber attacks?
10:40 Are you seeing different sort of cyber attacks?
10:41 I would say one thing you see a little bit, and you talked about how the world is changing,
10:46 is also in the AML space.
10:48 You see that in any of the fraud areas, the cyber areas.
10:52 You know, think about the phishing type emails you used to get.
10:55 You'd look at them and go, well, this wasn't written by somebody who really knows me.
10:59 This wasn't a sophisticated actor.
11:01 Now you're getting things very targeted.
11:03 You know, people will look up, you know, what kind of role somebody has, the type of job,
11:07 something will be targeted at you, and it looks very much like something you should
11:10 have gotten, right?
11:11 A mail that makes sense, and you click on it, and now you have some malware exposed
11:14 to you.
11:15 So we're seeing more sophistication because the AI tools are now writing some of these
11:20 things for them, right?
11:21 So, you know, there's a positive on the AI tools, and then that's maybe the negative
11:24 of it.
11:25 Same thing with fake websites.
11:28 It's now very simple to create things that used to take more sophistication to do, which
11:33 makes it much easier for people to do.
11:36 Also with just being able to automate, you can just automate attacks.
11:39 Anybody trying to do an attack doesn't have to be successful 100% of the time.
11:43 If they can try a million times, one in a million is good, right?
11:46 So the new technology you see are just allowing things to be at a much higher pace and a much
11:51 higher speed, and that's what we're having to defend against.
11:54 - Tal?
11:55 - Yeah, I'll just add on to what Katherine said.
11:56 She made a number of good points.
11:58 The way to think about AI and the way that we think about it in the product for our regtech
12:02 products is we're gonna help our clients get to the right answer faster, and regulators
12:08 expect us to surveil our markets, for example, or monitor our customers' activity in real
12:14 time.
12:15 They also expect us to make connections between different patterns.
12:19 They expect us to understand what's going on on one side of the market and the other side
12:23 of the market, and so AI could be really, really effective in allowing us to be much
12:27 better at detecting those trends in real time and be that extra layer of defense because
12:31 we know that AI can be weaponized.
12:34 We understand that, but on the other side of that, we need to use it to make sure that
12:37 we're making the human element of it better.
12:41 So AI is effectively like the best teammate a surveillance analyst can have, and that's
12:46 how we think about it.
12:48 - Great.
12:49 So with all these technologies that we have, all these emerging technologies and
12:53 all the new capabilities, what do you think?
12:56 Is it a blessing or a curse?
12:59 - I'll take that one.
13:00 So to make it a blessing and not a curse, you need to do the following.
13:05 You can never embrace emerging technology unless you've made the upfront investment.
13:09 So for us, we're talking about AI.
13:11 For us to be able to embrace AI at Nasdaq or Morgan Stanley, you need to have invested
13:15 in a couple of things.
13:16 One is managing your data.
13:17 You've got to know your data.
13:18 You heard that, right?
13:19 Two is you've got to invest in the cloud, understand the cloud and the potential of
13:22 the cloud.
13:23 Three is you've got to have a really mature posture on the InfoSec side.
13:26 And fourth, you need to have great governance.
13:28 You need to understand that governance is a lubricant for AI and these technologies.
13:33 And then the last one, and very importantly, you have to upskill your workforce.
13:36 You have to bring your workforce with you when you're taking these emerging technologies
13:40 into the market because your clients expect you to be a user and knowledgeable about these
13:45 technologies.
13:46 So that requires a cultural change, potentially.
13:49 So it's investments, it's a cultural change, and it's being committed to being a leader
13:54 in a particular space.
13:55 If that's your ethos, and that's our ethos at Nasdaq, we need to be committed.
13:58 And what that means is we need to start using it ourselves.
14:01 We need to start getting uncomfortably comfortable with the technology within Nasdaq for us to
14:06 then have a conversation with our clients.
14:08 Have you increased investment, like financial-wise?
14:10 What's that?
14:11 Have you increased your investment?
14:14 Well, it started for us in 2016, and we started with a small team.
14:18 That team has absolutely increased.
14:21 But the ROI, the win rate, you have a lot of POCs, some of them are not going to work,
14:27 so the win rate is one aspect of it.
14:29 But if you can get it in production and you have the right use case, the ROIC on this
14:34 can be extremely high, extremely high.
14:36 So that's the way to think about it because, to Katherine's point, the automation and the
14:40 productivity gains and the efficiency gains you can recognize really do pay for that
14:46 investment up front.
14:47 And so we have a centralized team as well, so we're also making sure that anything is
14:52 ubiquitous across the firm.
14:53 You don't want to develop the same capability three, four, five times, which a lot of firms
14:58 I think are struggling with right now.
14:59 So we've centralized it, which allows us to make the investment and allows us to scale
15:04 the investment.
15:05 Katherine, blessing or curse?
15:07 Yeah, I say it's a blessing for some of the same reasons that Tal said.
15:11 We've done many of the same things.
15:13 We've named a chief AI officer.
15:16 We've also put a centralized team together.
15:19 We have a centralized governance group.
15:21 And I think all of that is actually very important because you could start building many different
15:25 things and not using some of the same technology.
15:28 Just imagine you had an assistant every day.
15:31 Instead of going and looking things up, how many times do you go look up something online or
15:34 go look over here and you're trying to bring everything together, or you're trying to read
15:37 a document and you want to have somebody just quickly tell you, paraphrase exactly what
15:42 was there?
15:43 Those will be huge game changers, right?
15:45 Huge time saves for everybody.
15:47 Same thing, you're in meetings. If the meeting minutes could be taken for you, then you
15:51 can disseminate them out to people that weren't there.
15:53 You take notes.
15:54 All of these types of things are going to become really helpful to folks to make them
15:58 more productive.
15:59 And I think that's the thing that we're going to gain out of this.
16:01 And those are some of the easier type use cases, if you think about it.
16:04 It's really the productivity that everybody's going to have.
16:07 And I think that's a blessing.
16:08 Do you use AI personally?
16:11 Do I use it?
16:12 I use ChatGPT and stuff like that.
16:14 What do you ask it?
16:16 I've asked it many different things.
16:18 Sometimes it doesn't come back with the right answers, though.
16:19 I think that's the key back to your data, right?
16:22 It's that oftentimes, and this is very important, as I said earlier,
16:26 we're curating our own data.
16:27 You have to be very careful of the data that you have and you got to make sure your data
16:31 is good.
16:32 Otherwise, it can answer the question in an old way, as an example, like it's no longer
16:36 relevant, because it's not current.
16:38 So I think those are also going to bring out new roles in organizations as data curation
16:42 type roles.
16:43 Yeah, just to build on what Katherine said, we say this all the time.
16:47 AI wasn't designed to be right.
16:48 It was designed to give you an answer.
16:50 And now it's your job to understand if that's the right answer and understand how you want
16:55 to train it if it's not the right answer and what data it needs to get to the right answer.
16:59 So a simple question like, what's the price of an apple?
17:03 You need to understand context to get that right.
17:04 You know, how far do I live from the supermarket?
17:07 Where am I?
17:08 Am I buying from a wholesaler?
17:09 All these simple things, this reasoning that needs to happen.
17:12 So you just need to understand the limitation of the tool and how you invest in the tool
17:16 and what you're going to get out of it, because it's not just a source of truth.
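A small illustration of that point about context: the same question assembled with and without the facts the model would need to reason about. The context fields are invented for the example.

```python
def apple_price_prompt(with_context: bool) -> str:
    question = "What is the price of an apple?"
    if not with_context:
        return question  # the model must guess location, channel, and variety
    context = (
        "Location: Brooklyn, NY. Channel: retail supermarket. "
        "Variety: Honeycrisp, priced per pound. Date: today."
    )
    return f"{context}\n{question}\nIf the context is insufficient, say what else you need."

print(apple_price_prompt(False))
print(apple_price_prompt(True))
```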
17:20 And we've run out of time.
17:22 So I thank everybody for coming today.
17:24 And weren't my panelists wonderful?
17:25 Thank you.
17:26 And our moderator.
17:27 [Applause]
