Brainstorm AI 2023: AI’s Impact on Fintech

Sarah Hinkfuss, Partner, Bain Capital Ventures
Chintan Mehta, Chief Information Officer and Head of Digital Technology and Innovation, Wells Fargo
Prem Natarajan, Executive Vice President, Chief Scientist, and Head of Enterprise AI, Capital One
Lúcia Soares, Chief Information Officer and Head of Technology Transformation, Carlyle
Moderator: Jeff John Roberts, FORTUNE

Category: 🤖 Tech
Transcript
00:00 In this next session, we're going to hear from experts on how they're integrating artificial
00:03 intelligence while managing some of the toughest regulatory and compliance challenges in business.
00:10 Many financial institutions are still in the trial phase because of these challenges, making
00:16 sure data is secure, piloting, testing use cases.
00:19 Here to discuss what's next for the future of fintech and AI, please welcome Sarah Hinkfuss,
00:25 a partner at Bain Capital Ventures who invests in B2B software startups.
00:31 Chintan Mehta, Chief Information Officer, Head of Digital at Wells Fargo, which has
00:36 a new AI-powered chatbot, Fargo.
00:40 Prem Natarajan is Executive Vice President, Chief Scientist, and Head of Enterprise AI
00:45 for Capital One.
00:47 And Lucia Soares is Chief Information Officer, Head of Technology Transformation at Carlyle
00:54 Group, where she manages not only their tech stack, but also advises portfolio clients.
00:58 And they're going to be interviewed by Fortune Editor, Jeff Roberts.
01:03 Please come to the stage.
01:16 Thank you, everyone, and welcome.
01:18 Let's talk money.
01:19 Obviously, the financial sector is enormous, and it, like everything else, is being transformed
01:25 by AI, or theoretically will be transformed.
01:28 But I want to start with a slightly skeptical note, because having covered fintech for a
01:32 long time, it seems like AI has been around for a while, both robo-advisors, which never
01:38 seem to really amount to that much, no offense to those who build them, and likewise, customer
01:42 service biometric verification and stuff like that, call trees, a lot of AI there already.
01:46 So I'm going to turn to our panelists and say, are we in the cusp of something big or
01:52 different, and what is it?
01:53 So you want to start there, Sarah?
01:55 Happy to.
01:56 Thanks for having us, Jeff.
01:57 So AI has been really incredibly important to financial services since the beginning,
02:03 and so we've seen 40 years of predictive models being core to the delivery of products.
02:08 And so now, as we're talking about generative AI, we can think about it not as something
02:12 that's just the first instance of AI, but rather it is water filling the cracks in sand.
02:19 And so there are areas of financial services that have been uncovered or haven't been able
02:23 to be optimized because they deal with unstructured data.
02:27 And one example of that is in robo-advising.
02:30 And so robo-advisors, they were able to leverage a lot of historical data in order to help
02:35 create a mass market application for wealth management.
02:39 But exactly to your point, a lot of that was actually a broken promise, and it wasn't sufficient
02:43 to meet the needs of a lot of consumers, and the results themselves were not good enough
02:47 to justify the pricing.
02:50 And so we saw dropping margins and then an inability to actually invest in the models
02:54 themselves.
02:56 Now what we see with generative AI is it's actually an emotional application, which is
03:00 critical in financial services.
03:02 And so I can have a conversation with someone.
03:04 I can share what are my hopes and dreams for my future, what are the risks I'm scared of.
03:09 Maybe, like Lucia, I have a kid going off to college next year.
03:12 So what are the things that are important in my life as I'm thinking through my wealth
03:15 picture?
03:16 And so that opportunity will actually transform the delivery that's possible with robo-advising.
03:22 And so I think there's the potential now for a new promise to actually be created in that
03:26 space, and we're seeing a lot of new startups in that market.
03:28 Okay, I'm sort of persuaded.
03:29 Lucia, let's go to you.
03:32 What's going to be so dramatically different with this wave of AI?
03:35 My angle is more from the institutional investment angle, and I think it is transformative for
03:39 a couple of reasons.
03:40 One, it really democratizes the access to information, whereas before it was in the
03:44 hands of data scientists and algorithms that people didn't understand.
03:48 Now investors can really summarize information quickly, get data out of silos, where a lot
03:55 of bespoke work happens in investment consulting, and really monetize it in a way that is interesting
04:01 from an investment perspective.
04:02 So really democratizing the access to the information.
04:06 Chintan, what's new and different?
04:08 Well, I think it's not fully new yet, but I think where this will go would be that you
04:14 will have AIs exhibiting agentic behaviors, meaning doing some sort of autonomous decisions
04:19 on your behalf.
04:20 Today, everybody is busy making a virtual agent for a customer from a company's perspective.
04:25 I think there's room for somebody building a digital proxy for you as an individual,
04:29 who then sort of interacts with all these multitudes of financial services providers.
04:33 Nobody lives their life with a single provider.
04:35 They should not.
04:36 They work with everybody.
04:38 And I think that will change the cognitive load a human takes on, the decisions they make,
04:42 and how they experience new capabilities.
04:44 I think there's a lot of potential there.
04:45 There are a lot of unknowns too, but there's a lot of potential there.
04:48 But I think that's what is new, and I think that's the direction we are going towards.
04:51 So let's see.
04:55 I think, Jeff, the AI revolution properly builds upon the data revolution and a tech
05:01 transformation that has happened.
05:03 I think if organizations have invested in those two things, they're well positioned
05:06 to benefit from the AI revolution.
05:10 But the current wave, I mean, AI has come in waves, and the current wave actually builds
05:16 upon the prior waves of AI, which touched very specific parts of your business operations
05:21 to many other parts of your business operations.
05:24 Like AI was very effective early on.
05:27 We've used it for fraud detection, and it delivered tremendous value to our customers.
05:33 And there you have to engineer the heck out of it because you want to make those decisions
05:36 in milliseconds at the time of transaction because you don't want to find...
05:41 But none of those things touched things like software development or report generation,
05:46 et cetera.
05:47 So you kind of see this as a progression, the AI progression, in a sense, that builds
05:52 upon both the data progression, the tech transformation, and also the prior waves of AI.
05:56 And now you're extending AI to other parts of your organization where it didn't exist
06:00 before.
06:01 Right.
06:02 So, like you say, it's filling cracks, new areas.
06:03 It's going to be sort of broader and more everywhere than before.
06:06 Prem, I want to stick with you for a sec.
06:08 Take us under the hood a bit.
06:09 What are the specific tools?
06:11 Because in our breakfast panel, someone pointed out with financial services, if you have a
06:15 hallucination, that's really bad.
06:16 It could cost billions or trillions of dollars.
06:19 And so just tell us a little bit about the data sets and which sort of programs and tools
06:24 you are experimenting with and building with.
06:27 Is it ChatGPT?
06:28 Or just give us a little insight into that.
06:30 So first, I'll say predictive AI makes mistakes too.
06:35 And so you try to limit those errors and you try to find ways in which you put guardrails
06:39 around those errors.
06:40 Except because those things generated numbers or scores, we didn't think of it as hallucination,
06:47 whereas if the model generates text that's wrong, we do. But in both cases, what happens is
06:52 it's operating out of distribution from the training data that creates
06:57 these problems.
06:58 So, but quickly though, what we figured out is there are a few techniques and technologies
07:03 that can help us tremendously mitigate the potential for hallucinations or errors, out
07:09 of distribution errors.
07:11 One of them is something that was published a while ago, I think by folks at Meta, called
07:15 retrieval augmented generation.
07:17 But the underlying architectural elements for that, or your favorite semantic database,
07:21 whether it's OpenSearch or Pinecone, some kind of vector database, you couple that with
07:26 best of class prompt engineering things.
07:29 You constrain that in terms of giving instructions on how to generate things, et cetera.
07:34 Doing all of this, you can bring these hallucinations down to places where they're super useful
07:41 and you've mitigated most of the risks of these things.
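For readers who want to see the pattern Prem describes in code, here is a minimal sketch of retrieval-augmented generation. Everything here is illustrative rather than any panelist's actual system: the bag-of-words "embedding" stands in for a real semantic vector store such as OpenSearch or Pinecone, and the constrained prompt would be sent to a generative model rather than used directly.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a production system would use a
    # semantic vector store (e.g., OpenSearch or Pinecone) instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Retrieval step: pull the k passages closest to the query.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, documents, k=2):
    # Augmentation step: constrain generation to the retrieved context,
    # which is the core hallucination mitigation in RAG.
    context = "\n".join(f"- {d}" for d in retrieve(query, documents, k))
    return (
        "Answer using ONLY the context below. If the answer is not "
        "in the context, say you don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Wire transfers over $10,000 require additional verification.",
    "Our savings account pays 4.1% APY as of October.",
    "Branch hours are 9am to 5pm on weekdays.",
]
prompt = build_prompt("What does the savings account pay?", docs)
```

The instruction at the top of the prompt is the "constrain that in terms of giving instructions" step he mentions; grounding the answer in retrieved passages is what keeps the model inside the training-data distribution of the task.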
07:43 Yeah.
07:44 And I like your point too.
07:45 I mean, credit scores, in a lot of cases, those are kind of a hallucination.
07:49 So maybe it'll get better.
07:50 Does anyone else want to offer some insight into what exactly is under the hood at your
07:55 firm?
07:56 Just to comment on hallucinations as well.
07:58 So we talk a lot, especially in financial services and healthcare as well, around minimizing
08:02 it.
08:03 And we talk about the risk that not having hallucinations poses to a lot of firms, including
08:08 those on the panel.
08:09 And so imagine if I am looking for a mortgage and I could ask my chatbot, should I actually
08:14 take the offer from Wells Fargo on the mortgage?
08:17 Or is there a much better offer out there from someone else?
08:19 That's actually a really scary idea to credit card companies or big banks, who I think in
08:24 general are going to be the winners from the first wave of AI.
08:28 But the opportunity to learn the truth about how different financial services products
08:32 actually compare is pretty scary.
08:34 And then just quickly on the way that we have the opportunity of what people are doing under
08:38 the hood, the best companies that we're seeing are leveraging multi-models together.
08:43 And so it's not just OpenAI API calls.
08:46 It's not just Anthropic.
08:47 You're actually leveraging many, and then you're using your own data through RAG or
08:51 other techniques to actually implement it in a coordinated method.
08:56 So it's like a pipeline or process of stringing together multiple of these methods.
08:59 I'd like to add that in many of our portfolio companies, we're seeing similar things, where
09:03 they're going broad and linking different models together.
09:08 So an example in fraud detection for one of the fintech companies we invest in, they detect
09:13 fraud with ML, but then they ingest that into a generative AI model that actually helps
09:18 to create the reports, the suspicious reports and process them, which is reducing the workload
09:22 tremendously.
09:23 So going broad, not just in SaaS applications that have ML and generative AI capabilities,
09:28 but building your own custom models is going to be key.
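The fraud-to-report pipeline Lucia describes can be sketched like this. The functions and threshold here are hypothetical stand-ins, not the fintech company's actual code: `fraud_score` plays the role of the ML fraud model, and `draft_suspicious_activity_report` plays the role of the generative model that produces a first-draft narrative for a human investigator to review.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float
    country: str

def fraud_score(txn):
    # Stand-in for the ML fraud model that scores a transaction
    # in milliseconds; real features and weights would differ.
    score = 0.0
    if txn.amount > 9000:
        score += 0.5
    if txn.country not in {"US", "CA"}:
        score += 0.4
    return score

def draft_suspicious_activity_report(txn, score):
    # Stand-in for a generative-model call that turns the structured
    # fraud signal into a first-draft narrative for human review.
    return (
        f"Account {txn.account} initiated a transfer of "
        f"${txn.amount:,.2f} to {txn.country}. "
        f"Automated fraud score: {score:.2f}. "
        "Recommend review for suspicious activity filing."
    )

def pipeline(txn, threshold=0.7):
    # Detect with ML, then hand flagged cases to the report drafter.
    score = fraud_score(txn)
    if score >= threshold:
        return draft_suspicious_activity_report(txn, score)
    return None

report = pipeline(Transaction("A-123", 9500.0, "XX"))
```

The point of the chaining is that the predictive model does what it is good at (fast scoring) while the generative step absorbs the slow, unstructured paperwork, which is where the workload reduction comes from.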
09:30 Yeah.
09:31 Chintan, I want to just go to something Sarah said; it just occurred to me as a consumer,
09:35 if I can write a prompt saying, show me the optimal lowest cost bank fee, show me the
09:39 best mortgages.
09:40 I mean, is there a risk that consumers kind of get their hands around this?
09:44 There go your margins.
09:45 Are you worried about that?
09:46 No.
09:47 Actually, I would rather, like I started out, nobody should assume that everybody wants
09:52 to live their financial lives with a single provider.
09:55 I think with that first principle, basic hypothesis, then it becomes what is the most efficient
10:00 way to serve that customer, right?
10:01 Whether that product is coming from you or whether it's coming from somewhere else.
10:05 So we tend to think of all these things as ecosystems, whether they're products being
10:08 offered, whether the technology stacks or the platform products themselves.
10:11 I mean, I'd actually double down on what Sarah said: not having hallucination
10:16 is a bad thing.
10:17 Actually, hallucination is a feature.
10:18 It's not a bug, as long as you know how to actually use it.
10:21 Out-of-distribution characteristics are what make the creativity work out for some of
10:26 these models.
10:28 So I would say no, not worried about that.
10:30 Two, I think that's logically where we should be going as a society and as an industry.
10:35 That's where we are going towards.
10:37 The incumbents have to make it better for customers.
10:41 And to make it better for them, we have to be more efficient, but also at the same time,
10:44 not play a winner-takes-all sort of approach.
10:48 And the last one I'll just kind of point out, which I think is important as well, is the
10:53 more empowered a customer is in making these choices, whether it's through an automation
10:57 or through some sort of a decision-making structure that they use, the best products
11:01 will win out in the longer run.
11:03 The best offerings will win out.
11:05 So yeah, I think it's actually better off for everybody else to do that.
11:10 Another thread I want to pick up on is data.
11:11 So you're training at, presumably, it's a lot of data, but what safeguards are you using
11:16 to make sure the stuff doesn't leak?
11:19 And how do you choose your vendors?
11:20 I'll toss this one to you, Prem.
11:22 Others can weigh in.
11:23 But how are you making sure the stuff doesn't leak into the wild?
11:27 Yeah, I think the leaks happen...
11:29 Actually, let me take a step back, though.
11:31 In deploying all of these models, definitely, given the sector that we're in, we're taking
11:36 a very conscious approach to scoping the risk, in that we're choosing use cases or methodologies
11:42 for exploiting these models that make sure that in addition to technological mitigations,
11:47 we have process mitigations because you have all of this risk considerations in the kind
11:52 of industry we're in.
11:53 Coming to the question about the leakage of data in this case, Jeff,
12:00 the primary protection that you have early on is that the models are producing outputs
12:07 that are human-moderated, as in your employees or your associates moderate them, whatever
12:11 is the decision-making flow, whether it's customer support or other flows, so that they're
12:15 not directly exposed out to the customer.
12:18 This also protects you in early usage from prompting hacks because the way data gets
12:23 leaked is by people hacking into you with malicious prompting.
12:27 It also helps us develop insights into what are the ways in which we need to protect it.
12:32 But beyond that, there's also a huge amount of standard tokenization, encryption, everything
12:36 that happens, masking of the data.
12:38 So in the unfortunate event something was able to be prompted in some future generation
12:44 of these use cases or applications, what's leaked is not identifiable.
12:49 So there are two ways in which we deal with it.
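The tokenization and masking Prem mentions can be sketched as follows. This is a toy illustration, not any bank's actual scheme: the regex, salt, and token format are assumptions, and a production system would use managed secrets, broader PII detection, and format-preserving tokenization.

```python
import hashlib
import re

SALT = "example-salt"  # assumption: in production this is a managed secret

def tokenize(value):
    # One-way token replacing an identifier; the raw value never reaches
    # the model, so even a leaked output is not identifiable.
    digest = hashlib.sha256((SALT + value).encode()).hexdigest()
    return f"TOK_{digest[:10]}"

# Naive pattern for account/card numbers (10-16 digits), for illustration.
ACCOUNT_RE = re.compile(r"\b\d{10,16}\b")

def mask_account_numbers(text):
    # Mask every matched identifier before the text is used in a prompt.
    return ACCOUNT_RE.sub(lambda m: tokenize(m.group()), text)

masked = mask_account_numbers("Customer 4111111111111111 disputed a charge.")
```

Because the token is deterministic, the same account maps to the same token across documents, so downstream models can still correlate records without ever seeing the real number.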
12:51 Okay.
12:52 To put it another way, what keeps you up at night?
12:53 Like what could go really wrong?
12:54 Not at your firm, of course, but at one of your competitors?
12:59 I think, well, the Center for AI has already published that more than 90 attack modes are
13:04 out there for generative AI models through prompt hacking, but also data poisoning.
13:09 So what keeps me up at night is ensuring that we have secure, safe, and trustworthy development.
13:14 We have transparency.
13:15 We have red teaming in place.
13:18 And it's, I think, something that the industry should all be very aware of.
13:21 We know there's a lot of regulatory filings coming and standards and requirements.
13:26 And I think we can lean into the spirit of what's coming by focusing on just ethical
13:30 AI developments.
13:31 Yeah.
13:32 I'll add to that.
13:33 I would say it's the unknown unknowns.
13:34 I think that's what keeps you, like, would you know if something went wrong?
13:38 There are things that you're monitoring you would know if something went wrong, but there
13:42 are things that you don't know could go wrong that actually is a little bit worrisome.
13:45 And it's not necessarily the fact that things can go wrong.
13:47 They've always gone wrong.
13:48 It's the velocity at which they go wrong, right?
13:52 And how deep they go.
13:53 So I think that's a little bit of an ongoing sort of frontier we ought to keep an eye on.
13:57 Yeah.
13:58 Okay.
13:59 We've talked about some of our fears, but I should add, if anyone has a question, please
14:01 put your hand up.
14:02 We have a few mic runners here.
14:04 But my next question is, you know, so we have our fears, but let's talk opportunities.
14:07 Backstage, Sarah, we were talking about, you know, you're both investors, you deal
14:10 with portfolio companies and so on.
14:12 How is AI going to change that process?
14:14 Go ahead, Sarah.
14:15 Sure.
14:16 Absolutely.
14:17 So within the investing space, we've already been using a lot of predictive models to take
14:21 a look at how different companies over time have scaled and grown and use that to prioritize
14:25 companies.
14:26 And now we're also using generative AI through more parts of the investing pipeline.
14:30 So thinking about generating the first emails and touch points with companies that are most
14:35 relevant to the founder, as well as drafting investment memos, like the first draft of
14:40 them, pulling together, for example, multitudes of calls with customers and using that to
14:46 actually determine what are the best and most important things about this product and how
14:49 does it compare to their competitors.
14:51 We're doing similar things, but basically there's two things we think about.
14:55 One is how do we maximize the value of our investments by helping our portfolio leverage
14:59 this technology?
15:00 And number two, how do we become better investors ourselves?
15:03 For maximizing investments, we create an ecosystem of strategic suppliers, thought leadership
15:08 forums, et cetera, to encourage that accelerated innovation within portfolio companies.
15:13 And we're seeing it especially in our FinTech space.
15:16 But as a firm, we're driving into doing what Sarah said as well, predictive analytics for
15:21 the investments we're making, but also just doing drastic efficiency strategy across the
15:26 firm.
15:27 So, for example, automating credit analysis, debt analysis, helping our investors get customer
15:33 reporting and more insights into data.
15:35 There's a lot of efficiency.
15:36 It's the yin and the yang of technology, the innovation, but also the efficiency element
15:40 that we're driving for.
15:41 Let's turn to another opportunity. I think, Prem, you mentioned backstage recruitment
15:45 and HR.
15:46 How are we using it there?
15:48 So, you know, recruiting AI talent is a top priority.
15:54 It's not using AI to recruit talent.
15:56 I think that's kind of a whole different area that, you know, I'm not particularly excited
16:04 by, especially in finding top talent.
16:07 You really want to use your network.
16:08 You want to look at other aspects of their contributions, et cetera.
16:12 But I'll say, you know, I'll connect these two to your previous question around what
16:15 keeps people up at night, and then the answer is the talent.
16:18 I think the things most organizations in my mind should worry about, and at least I think
16:24 about it a fair amount, is enthusiasm not going hand in hand with thoughtfulness in
16:33 developing use cases and putting it out there.
16:36 So we have created a fair amount of process and risk management frameworks on making sure
16:40 everything is vetted properly.
16:42 But the other part of it, the other side of that same coin is the lack of an adequate
16:47 understanding of these technologies, resulting in solutions being deployed that are either
16:53 not scalable or point solutions or defective in other ways.
16:56 And so that's where I think building up a talent base matters. And, you know, I
17:02 don't know if you saw, but we are making a fair amount of investment in building up that
17:06 talent, and it's being recognized by some folks.
17:09 Can I pick up on that thread?
17:10 So as we think about what is the role of startups, so generally, I said this in the beginning,
17:15 but I think incumbents who have access to that data in a structured way and understand
17:20 how to work with regulators, which is so important, incumbents are doing an incredible job in
17:24 the financial services space and we're seeing less of a role immediately for startups.
17:28 But where that's emerging is on the talent side.
17:31 And so you have startups, really a lot of them in San Francisco and across the world
17:36 as well, but SF is this incredible hub, who are on the pioneering edge of development
17:41 and innovation, and they need to work with a large financial services firms to get access
17:45 to that data and to build their models.
17:48 And so you have these emerging partnerships that are being created, and I think where
17:51 we see this actually going is that a lot of those companies end up becoming acqua-hired
17:55 to being those teams within these large organizations.
17:58 And it really serves both purposes on both sides, and it's right before regulation comes
18:02 in.
18:03 Yeah, and that's a familiar model to I think the financial sector.
18:06 Chintan, do you want to weigh in on any of this?
18:08 Yes, I think there's more to it for talent, depending on how you're looking at it.
18:12 If you're looking at cutting edge, pure research around multimodal models and things which
18:16 are the current rage, or objective driven AI and all, yes, it's a very small crew who's
18:21 doing it, and they're always going to be in a short supply, and I think that's fine.
18:25 But I think there's a lot of opportunity to actually do scalable integrations, whether
18:30 it's through startups, whether it's through other partnerships.
18:34 And so talent doesn't necessarily have to be acquired point in time all the time.
18:39 You have to groom some of it as well, ground up.
18:42 And so we have done a lot of work with Google, we have done a lot of work with OpenAI and
18:44 Microsoft because we did some of the stuff in collaboration with them where we're trying
18:47 to build our own internal pipeline of talent around this.
18:52 And the second thing I would say is, yes, one part of it is talent, but then
18:54 there is the adoption ecosystem, that adoption curve, for the rest of the organization.
19:00 And you have to do a lot of work around easing adoption as well.
19:03 So, just for context, Wells Fargo has about 40,000 developers and engineers.
19:07 Not everybody's going to do AI models, or even for that matter, we don't build a lot
19:12 of foundational models.
19:14 But everybody will use them.
19:16 So how do you completely simplify that adoption curve, that adoption ecosystem?
19:19 Do they understand how they're using it?
19:21 Is there enough safety around it in terms of when they use it?
19:24 So I think those are the things you have to do.
19:26 I think that's the key.
19:27 I'll just add that it's really important to really spread, evangelize the use of AI and
19:32 generative AI across the organization.
19:34 If you don't do it at the grassroots level, it will always be this niche thing that only
19:38 certain people know about.
19:40 And driving competitions at the grassroots level, and really that knowledge base is going
19:44 to be key to really the adoption that's going to drive the value at the end of the day.
19:48 I want to shift to, so this is sort of an uncomfortable question, but I'd like you to
19:51 try and answer it candidly.
19:52 Speaking of people, financial sector employs a lot of people.
19:56 And in the past, we've seen bank tellers decline by maybe 50% or something.
20:00 But it seems that's ripe for disruption.
20:02 So which jobs are going to disappear?
20:06 For us, we actually don't think about it that way.
20:08 We think about the fact that your job's not going to be lost to gen AI.
20:12 You probably will lose your job to someone who is using gen AI and AI.
20:16 And that's how we're thinking about it.
20:18 It's going to unleash massive efficiencies.
20:20 And with talent shortages, needing to free up more workforce to do the important work
20:25 is what we're focused on.
20:26 I don't disagree.
20:27 And I think that's true.
20:28 You need to know how to use it and direct it.
20:30 There's opportunities there.
20:31 But the reality is, there's some functions.
20:33 I'm thinking about analysts and a few other jobs.
20:36 I'm not sure there will be as many of them there five years from now as now because of
20:40 AI.
20:41 I'll wade into that one.
20:42 So I think my, and this is Chintan's opinion, not Wells Fargo's opinion.
20:46 There's a boundary between knowledge worker and then an intellectual application of that
20:51 knowledge.
20:52 I think a lot of the traditional knowledge working will get disrupted for sure.
20:56 So examples of analysts who assimilate a lot of information and then summarize it and contextualize
21:01 it and stuff like that.
21:02 That will reduce the amount of human action needed.
21:06 Will all of us be able to redeploy that human capital in a productive way?
21:09 Don't know the answer to that question, but that's definitely what is going to happen.
21:13 Two quick points.
21:14 So first of all, there was actually a really interesting BCG study that the individuals
21:18 within a company who gain the most from the application of AI are those that are younger-tenured
21:22 or lower performing, because it can help them climb up the curve faster.
21:25 So I don't think it's just this case of, like, we'll see poorer performers not being able to
21:30 stay in.
21:31 The second point that I would make is that as we think about services within financial
21:36 services companies being more automated through Gen AI, what is not true is that any licensed
21:41 professionals can go away.
21:43 So if a model makes a recommendation or maybe there's a mortgage memo and it's being presented,
21:48 it's the first draft, but the officer still has to attest to it.
21:51 The wealth manager still has to actually deliver the advice to the individual.
21:55 That's due to the regulatory requirements.
21:55 We're out of time, but I want to do one sort of rapid fire round.
21:58 Tell us one thing to keep your eye on in the next two years in AI and finance.
22:03 Go ahead, Lucia.
22:04 Blockchain and the conjunction of AI with blockchain for crypto.
22:07 Thank you.
22:08 Yeah, I'm the crypto editor, so still relevant.
22:12 Pay me later.
22:13 Go ahead, Prem.
22:14 I'll say organizations that get their data act together.
22:18 And how are you going to get your data act together?
22:19 I don't think there's a winning AI act without a winning data act.
22:24 That makes sense.
22:25 I'm going to say not immediately, but a few years out, convergence of deep quantum networks
22:30 using quantum techniques and ML models.
22:33 We have seen a lot of performance boost when we build our own deep Q networks using quantum
22:38 techniques.
22:39 And I think that will translate into AI very quickly.
22:41 Quantum. And Sarah, your turn, very quickly.
22:43 Domain specific models.
22:44 So I think we're going to see a democratization of access to the data that large financial
22:48 service incumbents have by actually selling data as a service through these models in
22:52 the same way that OpenAI does with their API.
22:54 We're going to see Bloomberg, for example, do that with BloombergGPT.
22:58 Fascinating.
22:59 Please give a hand for our panel.
23:00 Good job, you guys.
23:00 [APPLAUSE]
