How Good Governance Can Help AI Grow

In this panel at Imagination In Action’s ‘Forging the Future of Business with AI’ Summit, Jeff Saviano, Sasha Luccioni, Jag Gill, and Ra'd Siraj discuss the laws around AI and how good or bad laws will affect the scale of AI.

Transcript
00:00 >> Welcome to our panel on governance of AI.
00:06 We're going to focus on how to approach governance to
00:10 match the speed at which AI is
00:12 progressing through your organizations,
00:15 but we're also going to bring a dose of
00:18 responsibility and ethics into the conversation as well.
00:22 How's the volume? Can you hear okay?
00:24 Little loud back there? We're good.
00:25 There's my friend John. Hi, John. Excellent.
00:28 My name is Jeff Saviano.
00:30 I'm with EY. I lead
00:31 emerging technology strategy and governance at EY,
00:34 and also have some university appointments.
00:37 I lead a team at Harvard studying AI ethics.
00:40 So we're going to be talking a bit about
00:42 ethics today. I also have an appointment here at MIT,
00:45 and I teach an advanced innovation
00:49 and technology class at Boston University School of Law.
00:52 Really, I'm excited about this panel and excited to
00:55 introduce our esteemed panelists.
00:58 If I could, why don't we go down
00:59 the line and please introduce yourself.
01:01 Sasha, you want to start?
01:02 >> Hi, I'm Sasha. I'm
01:04 a researcher and the climate lead at Hugging Face,
01:07 which is a global startup
01:09 that does responsible machine learning.
01:11 We have a platform where people share models and data sets,
01:14 and we try to promote the responsible usage of AI artifacts.
01:20 I also am on the board of Women in Machine Learning because we
01:24 have a big gender discrepancy issue in AI,
01:28 around 12-15 percent women in our field.
01:33 So Women in Machine Learning tries to organize events,
01:35 do mentorship, and help balance out a little bit.
01:39 I'm one of the founding members of Climate Change AI,
01:41 which is an organization that tries to bring
01:43 together communities from climate science,
01:46 from sustainability, ecology,
01:48 with people from machine learning and AI to help,
01:51 once again, create synergies and tackle some pressing issues.
01:54 >> You're very busy.
01:55 >> Not as busy as you. I think you had four appointments.
01:59 >> Hey, everyone. My name is Jag Gill.
02:01 I'm the CEO and co-founder of Virtru.
02:04 I'm excited to be back on campus because I'm an MIT alum,
02:08 and I like to say that I drank the Kool-Aid after MIT.
02:11 I got into startups and tech.
02:13 Today, I'm focusing on helping organizations track
02:18 their global supply chains for human rights risks
02:21 and climate impacts using AI.
02:25 It's really exciting to be on
02:27 this panel to talk about governance and AI.
02:30 I'll talk a little bit about regulations and
02:33 how AI is really an enabler for responsible business.
02:37 >> Excellent. Always nice to come home, isn't it?
02:39 >> Really nice to come home.
02:40 >> Love it. Excellent. Ra'd.
02:43 >> Hi, everyone. Ra'd Siraj.
02:46 I also love the fact that I'm coming back to MIT.
02:51 Before joining MassMutual,
02:53 I was part of MIT's endowment,
02:56 responsible for the buildings that you saw in Kendall Square,
02:59 so blame us for them, I should say.
03:02 I also started my career with a connection to MIT, at
03:07 Arthur D. Little, which established
03:09 the whole management consulting industry
03:14 and had its headquarters close by.
03:16 My name is Ra'd. For those of you who speak Arabic and Farsi,
03:21 it means thunder.
03:22 You could guess about my two younger brothers,
03:25 what their names are. I live in Brookline across the river here.
03:29 I did my undergraduate computer science at UC Santa Barbara,
03:34 focusing on cryptography.
03:36 Did my master's at Harvard focusing on AI.
03:41 That's where I witnessed the seminal paper
03:44 on learnability theory by Leslie Valiant.
03:48 Since then, I've held CIO and
03:50 CTO roles with established companies like Eaton Vance,
03:53 and startups like Indeka.
03:56 I'm now the head of AI governance at MassMutual,
04:00 a life insurance company established in 1851.
04:04 Governance of AI technology is so important,
04:08 and we could talk more about that.
04:10 >> You had me at thunder.
04:11 Excellent. Every panel needs a little bit of thunder.
04:14 Love it. Okay. Well, thank you.
04:17 Thank you. We want to start with,
04:18 I want to introduce a framework that we've been working on.
04:23 I lead a research team at Harvard that's been studying
04:25 applied AI ethics and it's an ethics approach to governance.
04:29 The reason we started this work is that frankly,
04:32 we felt that there was a gap in the world,
04:34 that there were many governance frameworks
04:36 which typically were adopting principles.
04:39 Everybody's adopting, whether it's NIST or OECD principles,
04:43 and then they were stopping.
04:44 So, we devised this framework, working from bottom to top.
04:48 The bottom two layers represent
04:51 the legal mandate that organizations have.
04:54 That's the reason we started this because we heard from boards,
04:57 and we heard from members of the C-suite that
04:59 the first order of business they
05:01 wanted was to comply with the law.
05:03 So, the bottom layer is the non-AI-specific laws
05:07 that would still affect AI systems, think GDPR,
05:10 think from a board oversight standpoint,
05:12 think Caremark and its progeny of cases.
05:15 Of course, there are also AI-specific laws and regulations coming, with the EU
05:19 AI Act as an example.
05:21 Moving up into what we refer to as conscious capitalism.
05:26 What we found in our research is that there are
05:28 many applied ethical actions that are not only good for the world,
05:33 but actually will promote the business too.
05:35 They will help preserve reputation and preserve brands,
05:39 and also they're good for society.
05:42 So, we collected different examples at each layer.
05:45 Then lastly, this highest level we call ethical elevation,
05:50 and we included that because there were so many boards and
05:53 so many C-suites that were asking for something more.
05:56 Saying that they wanted to go further.
05:58 What were some things that they could do that may not
06:01 necessarily produce an ROI,
06:03 but would produce a return for the world?
06:06 So, that was encouraging to me.
06:08 The last thing I'll say,
06:09 when we presented this to an ethicist at Harvard,
06:12 they looked at it and said, "Okay, I get it.
06:14 I see how this can be helpful,
06:16 but you're missing something,
06:18 and you're missing this fundamental human rights layer."
06:22 We love to refer to this as the Hippocratic Oath of AI.
06:25 First, do no harm in the world.
06:27 So, let's think about that as we roll into our first question.
06:30 We wanted to flash this because we think that there is a gap in
06:33 the world for an applied ethics approach to governance.
06:37 So, we hope that this can help fill some of that gap.
06:41 Okay. All right. Let's get started.
06:43 Ra'd, you're going to kick us off.
06:45 We're going to start with the softball question
06:49 of what's going right with governance?
06:51 If you want to also include some aspects of it that you
06:55 think in this frenzy that we're in
06:57 with AI use cases and development,
07:00 if there are some things that are a little bit off in governance,
07:02 that's okay too, but why don't you start us off, Ra'd?
07:05 Great. Happy to do that.
07:07 So, someone said at this conference that governance is
07:11 about preventing risks which is great,
07:15 but not really exciting, at least for me.
07:20 I'm thinking that governance has a branding challenge.
07:24 I'd love for all of us to change that.
07:28 I'm going to use a metaphor that we use at MassMutual,
07:31 which is that brakes in cars allow cars to go faster, not slower.
07:39 Brakes in cars allow cars to go faster.
07:42 Imagine if cars had no brakes.
07:45 In some sense, reassurance that
07:48 the path ahead is going to prevent many of
07:51 the common risks will allow faster innovation.
07:54 So, I think that's one current state that I hope we
07:57 could close. Second, you mentioned principle-based approaches.
08:04 I think that's the right approach,
08:07 in contrast with a checklist approach.
08:11 If you have a checklist approach,
08:13 please stop it. It kills innovation.
08:16 A principle-based approach is much more agile,
08:19 really allows one to identify new risks
08:22 and decide which controls are needed to mitigate them.
08:25 On a weekly basis, there are new risks that come
08:28 with every advance in AI technology.
08:30 The last one that I would propose is related to regulations.
08:36 No matter what your domain is, pay attention to how it's regulated. Life insurance is governed
08:40 in the United States by the states, not at the federal level.
08:44 So, yes, we pay attention to the executive order on AI
08:49 that President Biden signed a couple of months ago.
08:51 We pay attention to what's happening on the EU side
08:55 to learn from it, but we really pay attention
08:57 to what's happening on the states side.
09:00 Not every industry is like that.
09:02 You need to pay attention to that to prepare your organization
09:07 for the ultimate regulations.
09:09 And by the way, there is a misconception about AI regulation,
09:14 that it's non-existent or still coming.
09:20 But the fact of the matter is that existing regulations
09:24 really apply.
09:25 Massachusetts last week basically said,
09:27 we're not going to add a new AI regulation;
09:29 the existing regulations apply.
09:32 So you just need to be mindful of that.
09:33 Those are the three things that I would add.
09:35 - A combination of federal and state, at least in the US;
09:37 there's actually a decent volume
09:39 of regulation that's happening in the states.
09:42 Same question, Jag, for you.
09:44 What's working well with governance
09:47 and in your experiences, what do you think
09:48 are some of the gaps that exist?
09:50 - Yeah, so when I think about governance
09:52 with kind of manufacturing organizations,
09:55 companies that we work with, retailers and brands,
09:58 it's so interesting because in the not too distant past,
10:03 organizations and their supply chains
10:05 were just incredibly unsexy, right?
10:08 And now given COVID, given geopolitical risks,
10:12 given kind of adverse countries that we do business with,
10:17 supply chains are kind of front and center
10:19 and a hotbed for risks.
10:22 And therefore a tsunami of governance
10:24 coming in all different directions, right?
10:27 Sasha, you mentioned coming through
10:29 Customs and Border Patrol, right, today.
10:31 So interesting that Customs and Border Patrol
10:34 is using AI to track goods coming globally
10:38 into the United States and enforcing regulation
10:42 around forced labor, responsible business, right?
10:45 The North Star being dignified work
10:48 for folks in the supply chain, in other countries,
10:52 and goods coming into the country.
10:54 And so, really exciting applications
10:57 of kind of new technologies for governance,
10:59 kind of for government regulations.
11:02 For example, investors are also shining a light
11:06 on governance for businesses and supply chains.
11:10 There are companies that want to IPO
11:12 and raise capital here in the United States
11:15 and are prevented from doing so because of their practices,
11:18 because of their lack of responsibility
11:21 around climate and human rights.
11:24 Increasingly, I think that consumers have a role
11:27 in governance as well, right?
11:28 Like the choices that we're making in terms of--
11:31 - Explain that, explain that a bit.
11:32 - Products we consume.
11:33 - Yeah, what do you mean?
11:34 - Yeah, I think that consumers are increasingly looking
11:38 for ethical practices from organizations
11:41 and companies that they spend money on.
11:45 We're seeing that increasingly with younger consumers,
11:47 younger demographics.
11:49 There's corporate governance because companies
11:52 are now looking at this new set of consumers
11:54 and realizing that a mission and values
11:57 around ethical practices are really important.
12:00 And so, AI is really important and seismic in my world
12:05 when I work with companies.
12:08 AI is becoming a driver for automation
12:13 of supply chain mapping, right?
12:15 Automation of supply chain visibility, resiliency.
12:19 There are things that supply chain managers
12:24 just can't do manually, right?
12:26 Linking together disparate data, unstructured data.
12:30 And to that point around kind of goods coming in
12:33 from China, potentially made with forced labor,
12:36 there are so many complexities with kind of global,
12:40 multi-tiered, fragmented, complex supply chain.
12:43 I mean, even the sneakers that I'm wearing today, right?
12:46 Hugely complex products to make.
12:49 And so, we need new technologies and AI
12:52 to shine a light on issues and risks
12:56 that frankly we can't surface manually.
12:58 And so, just to make that concrete,
13:00 some of the work that we do
13:01 is combining very disparate data.
13:03 So, non-obvious data, corporate ownership records
13:07 from companies in the global South, news, events,
13:12 alerts, Twitter, audit reports,
13:15 synthesizing all that data using AI
13:17 and then being able to shine a light
13:20 on potential risks and inferences of risks.
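
To make concrete what "linking together disparate data" can look like in a supply-chain risk pipeline, here is a minimal sketch of fuzzy entity resolution across two hypothetical supplier lists. Every name, rule, and threshold below is an illustrative assumption, not a detail from the panel; real systems add blocking, address matching, and learned matchers.

```python
# Minimal sketch: fuzzy-matching supplier names across two hypothetical
# sources (say, a corporate registry vs. shipping manifests).
# Standard library only; all data below is invented for illustration.
from difflib import SequenceMatcher

registry = ["Acme Textiles Ltd.", "Global Yarn Co", "Sunrise Dyeworks"]
manifests = ["ACME TEXTILES LIMITED", "Sunrise Dye Works", "Blue River Mills"]

def normalize(name: str) -> str:
    """Lowercase, drop punctuation, and strip common corporate suffixes."""
    name = name.lower().replace(".", "").replace(",", "")
    for suffix in (" ltd", " limited", " co", " inc"):
        name = name.removesuffix(suffix)
    return " ".join(name.split())

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two normalized names."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

THRESHOLD = 0.85  # illustrative cutoff; tune on labeled pairs in practice

for r in registry:
    for m in manifests:
        score = similarity(r, m)
        if score >= THRESHOLD:
            print(f"likely same entity: {r!r} ~ {m!r} ({score:.2f})")
```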
13:23 - I wanna come back to some of the supply chain issues,
13:26 but first, Sasha, over to you.
13:28 I'd love to hear from you a bit on the who.
13:32 So, if you look across organizations
13:34 and because of the pace at which AI is being adopted,
13:38 not just in the private sector,
13:40 but in the public sector as well,
13:43 I'd love to hear your point of view
13:44 about who are the actors within an organization
13:47 who are leading it
13:49 and what are some of the best practices that you've seen?
13:52 - What I've seen works is when governance is not siloed,
13:55 when it's really a distributed approach
13:57 and actually distributed outside the company as well,
14:00 because consumers and users also have levers.
14:04 Maybe they're a little bit more subtle
14:05 than companies' levers as well,
14:07 but what I've found,
14:09 I mean, I've worked in a couple of companies now in AI,
14:11 when, for example, there are connections
14:13 between researchers like me and decision makers,
14:17 and with the people actually talking to the customers
14:19 and figuring out how tools are used,
14:20 then, for example, issues get flagged
14:23 and then there's a sort of communication that happens,
14:25 whereas if researchers are just training cool models
14:28 and researchers have this tendency
14:30 to kind of like to tinker with stuff
14:32 and not necessarily think about
14:34 how it's gonna work in real life,
14:35 and so in that case, we don't get that feedback loop
14:37 and we just make things that aren't particularly useful
14:39 in the long run.
14:41 And also something I've seen a lot
14:42 are rebound effects and feedback loops
14:46 in the way that, for example, AI is created and evaluated.
14:49 So for example, we'll think about a specific metric
14:52 like efficiency.
14:53 Efficiency is a big thing in AI.
14:55 People want the models to go brr, to go fast.
14:58 And then, for example, the rebound effect
15:01 can be increased usage because you can use a model,
15:04 it'll go faster, so now you're gonna use more of the model
15:06 and actually the overall compute costs
15:09 or whatever metric you're tracking is gonna go up.
15:11 And so it's interesting because I think that
15:13 the way to solve these rebound effects,
15:15 these indirect impacts, is governance, is regulation,
15:18 because, for example, if you start anticipating
15:20 these rebound effects, there's actually one
15:22 that's really interesting.
15:23 It's called Jevons' Paradox,
15:25 and it's been observed kind of time and time again
15:29 as a new technology makes a certain task more efficient
15:33 depending on the task.
15:34 So for example, when we switched from horses to cars,
15:38 people tended to travel more
15:39 because you could go further.
15:41 Instead of going 30 miles away on the weekend,
15:44 you can go 300 miles away on the weekend,
15:45 so you would travel more.
15:47 And so any kind of efficiency gains were lost
15:49 because people used it more.
15:50 And I think that with AI, we're seeing that.
15:52 We're optimizing these metrics,
15:54 whether it's performance or efficiency,
15:55 and what we're seeing is actually
15:57 these rebound effects happening,
15:59 and that's when regulation and governance has to step in
16:01 to make sure that the ripple effects
16:03 don't actually kind of neutralize the innovation as such.
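
Sasha's rebound-effect point can be put into back-of-the-envelope numbers. The figures below are invented purely for illustration: a model that becomes twice as efficient per query but, being cheaper and faster, attracts three times the usage ends up consuming more total compute, not less.

```python
# Illustrative Jevons-paradox arithmetic; every number here is made up.
cost_per_query = 10.0        # compute units per query before the upgrade
queries_per_day = 1_000_000  # baseline usage

efficiency_gain = 2.0  # the new model is 2x more efficient per query...
usage_growth = 3.0     # ...but usage triples because it is cheaper and faster

before = cost_per_query * queries_per_day
after = (cost_per_query / efficiency_gain) * (queries_per_day * usage_growth)

print(f"total compute before: {before:,.0f}")  # 10,000,000
print(f"total compute after:  {after:,.0f}")   # 15,000,000, i.e. up 1.5x
```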
16:07 - A combination of regulation and governance.
16:09 I'd love to hear your thoughts,
16:11 so think a little bit more about governance
16:13 within an organization.
16:15 You started by talking about the different silos,
16:17 and we've seen that in our work as well.
16:19 Sometimes the lawyers may not talk to sales
16:22 and marketing and R&D.
16:24 Are you seeing a best practice?
16:25 How does that come together?
16:27 Is there a single leader that would sit on top of that?
16:30 What are some of the best practices
16:32 from a governance standpoint?
16:33 We'll get a little bit more to the regulation.
16:35 - So for example, in Hugging Face,
16:36 what we do is that we have kind of champions.
16:39 So for example, each of the teams has one or more people
16:41 that are kind of interested in ethics,
16:43 interested in kind of the societal impacts.
16:45 And it can be an engineer, it could be like a UI designer,
16:48 it could be someone in communications,
16:49 but we have these monthly, like just group meetings
16:54 where we talk about these issues.
16:55 Oh, deep fakes are a huge problem right now.
16:58 How like, what are the different things that happen?
17:00 Misinformation, blah, blah, blah.
17:01 And then we don't necessarily have solutions,
17:03 but at least we have kind of these discussions that happen.
17:05 Like as a platform, we host a lot of data sets.
17:07 And for example, recently there was the LAION dataset,
17:10 where people found child sexual abuse material, CSAM, in LAION.
17:13 And so like, what does that mean
17:14 for Hugging Face as a platform?
17:15 Well, people were already kind of like,
17:17 seeded with the notion that, you know,
17:19 we should be very reactive to this kind of stuff.
17:21 And I find that's great because if you have one team
17:23 that's, for example, responsible for ethics,
17:25 you have a confrontation that happens
17:28 because you're like telling people,
17:29 "Well, you should take this down."
17:30 And without having had that conversation ahead of time,
17:33 you're just like, yeah, conflicts, exactly.
17:35 And that's not necessarily the most productive way
17:37 of catalyzing change. - It can be healthy.
17:40 But it can be healthy.
17:41 - But you need to kind of have like a,
17:42 kind of like seed, right?
17:43 You need to have like the common groundwork that,
17:45 okay, we agree upon these things,
17:47 now let's talk about other things.
17:48 Because if you just come out of nowhere and you're like,
17:50 "Take down your most popular data set,"
17:51 they're gonna be like, "No, like 100,000 people use it."
17:54 - Yeah, it is a good point.
17:55 And we're also seeing that a lot of,
17:57 so when you have that conflict,
17:58 what forums can you create
18:00 and how do you bring teams together to find common ground,
18:03 but then to be able to move at speed
18:05 because of how fast it's going.
18:07 I wanna take one of the ideas that you mentioned, Sasha,
18:09 about including customers,
18:11 and it applies to supply chains as well.
18:15 Ra'd, back to you,
18:17 I'd love to get your thoughts on the question of,
18:20 from a governance standpoint,
18:22 what are some of the best practices
18:23 about who is leading it within an organization?
18:27 But I would love to get your thoughts
18:28 on the inclusion of customers
18:31 and other stakeholders in governance.
18:33 That's an issue for insurance companies for sure, right?
18:36 - Yeah, absolutely.
18:37 So there's a whole bunch of things happening,
18:39 great discussion.
18:40 One of the unrecognized trends
18:48 is that AI governance does not live in a silo.
18:52 And ethics does not live in a silo.
18:55 Everyone has a responsibility
18:57 to do things that are ethical.
18:58 So culture matters.
19:00 - As members of the human race.
19:01 - As members of the human race,
19:03 as members of the industry that you're operating in,
19:07 as members of the culture that you're responsible in.
19:10 So for example, you said level one and two,
19:12 level three and four really is culture dependent, right?
19:16 So doing the right thing,
19:18 of course, making sure that we're doing things legally,
19:21 doing the right thing, going above and beyond that.
19:24 - Extend beyond what's--
19:25 - Extend beyond that, really depends on the culture.
19:28 The insurance industry, life insurance,
19:31 I've discovered because I joined it
19:33 after a career in investment management,
19:37 at its heart believes in data and predictive analytics.
19:41 They have the actuaries,
19:42 who have a focus on doing the right things.
19:47 - Managing risk.
19:48 - And managing risk at the same time.
19:50 So the discussion, at least within MassMutual,
19:53 is easier in terms of facilitating.
19:57 We are a first line of defense,
19:59 a peer to the business to set the boundaries, et cetera.
20:04 So that's one point.
20:06 The second is that privacy, data,
20:09 and AI governance are fusing.
20:12 We have AI governance, AI laws coming under the guise
20:17 of privacy or consumer protection, et cetera.
20:21 At MassMutual, we've recognized that
20:23 and I'm part of a group called
20:24 Privacy Data and AI Governance.
20:26 We think that's the model that others should be adopting.
20:31 - Insurance companies have been using AI for a long time,
20:35 maybe not since 1850 when MassMutual started.
20:38 - No, not since 1851.
20:39 We've been using it for like a decade at least.
20:41 - And can you talk a bit about the transition of,
20:45 it's not as though you didn't have any governance
20:47 of the AI system, call it traditional AI.
20:49 And then with the advent of generative AI
20:52 and the ubiquity across an organization,
20:54 how have you led efforts to transform
20:57 what governance looked like with traditional
20:59 machine learning into today's AI world?
21:02 - Yes, so first of all, it takes a village
21:05 across the organization, first line, second line,
21:08 third line, business, IT, all of that, right?
21:12 The question for us when generative AI
21:14 became very pronounced and everybody got on the bandwagon
21:18 is what additional risks does it represent
21:21 and what unique controls need to be added?
21:24 We had a pretty good program that predated
21:28 the NIST AI RMF and that was validated by the NIST AI
21:32 risk management framework, which kind of asks
21:36 any organization to do the same kind of thing.
21:39 We did not see any additional risks,
21:41 maybe pronounced existing risks,
21:44 but we needed some additional controls
21:47 to mitigate against the hallucination of facts,
21:49 basically focusing on the human in the loop.
21:52 So that's the bottom line.
21:53 We view generative AI as one more evolution
21:57 of AI technology, but I'll make a quick note as well.
22:02 If you look at what is in production
22:04 at any organization today, there's very little
22:08 generative AI at the moment.
22:10 The bulk of it is supervised learning,
22:14 machine learning techniques.
22:15 - Random forest.
22:15 - And that's not gonna go away,
22:17 at least for the next three years.
22:19 I think that's still going to be the bulk
22:21 of the AI solutions that we have to deal with.
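
For readers unfamiliar with the "classic" supervised learning Ra'd is contrasting with generative AI, here is a minimal random-forest example on synthetic data with scikit-learn. It is a generic sketch, not a representation of any company's production stack.

```python
# Minimal supervised-learning sketch: a random forest classifier
# trained on synthetic tabular data (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular business data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```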
22:25 - Let's think a little bit more about that
22:26 and maybe come back, Jag.
22:27 I'd love to get your thoughts on supply chain
22:30 and just thinking of what we all went through
22:33 with the pandemic.
22:34 Supply chains were turned upside down
22:36 in the pandemic.
22:37 Companies feel like they're still dealing
22:38 with some of the after effects.
22:40 And now on top of that, we have these new capabilities
22:44 with AI.
22:45 One of the things you had mentioned before
22:46 was the opportunity to access data marketplaces
22:51 and to enhance your data.
22:53 Love to get your thoughts on how important
22:55 data aggregation and data discovery are
22:59 in your work to optimize supply chain planning with AI?
23:02 Is that a material part of the equation?
23:05 - Absolutely.
23:06 I think I mentioned this earlier.
23:10 We really believe that kind of supply chain sustainability
23:13 is no longer kind of a nice to have, right?
23:15 It's core business.
23:16 It's really about resiliency.
23:18 Just touching again on the governance and the regulations.
23:21 If you have goods detained in the Port of New Jersey,
23:25 that's a revenue risk to your business.
23:26 That's a branding and PR risk.
23:28 And obviously that's a responsible sourcing
23:31 and ethical risk.
23:32 And so absolutely, critically important.
23:38 I was also just struck by an adjacent thought
23:40 thinking about kind of data around individuals.
23:44 In Europe, for example, there's increasing regulation
23:47 around data around products.
23:49 So in France, they are mandating that every physical product
23:54 that is going to be sold from 2026, I think,
23:57 has a digital product passport.
24:00 So think about the billions of data points
24:02 that need to be acquired, automated across millions
24:05 of SKUs to be able to provide visibility
24:09 into kind of how products are made.
24:11 So AI is front and center.
24:15 If we think that supply chains and sustainability
24:17 are core business, AI is front and center
24:19 of responsible business and the responsible tools
24:22 and technologies.
24:23 But of course it's AI plus human intervention as well.
24:28 AI is not a one size fits all solution in our world.
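
To give a feel for the data problem behind the digital product passports Jag mentions, here is a hypothetical record shape. The field names are invented for illustration; the actual EU and French requirements define their own schemas.

```python
# Hypothetical shape of one digital-product-passport record.
# Field names are illustrative only, not the regulatory schema.
from dataclasses import dataclass, field

@dataclass
class ProductPassport:
    sku: str
    materials: list[str] = field(default_factory=list)   # bill of materials
    facilities: list[str] = field(default_factory=list)  # tier-1..n suppliers
    country_of_origin: str = ""
    carbon_kg_co2e: float | None = None  # cradle-to-gate footprint, if known

# One record per SKU; multiply by millions of SKUs, with several suppliers
# and materials each, and "billions of data points" follows quickly.
sneaker = ProductPassport(
    sku="SNKR-001",
    materials=["recycled polyester", "natural rubber"],
    facilities=["Assembly Plant A (Vietnam)", "Sole Supplier B (Indonesia)"],
    country_of_origin="VN",
    carbon_kg_co2e=9.5,
)
print(sneaker)
```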
24:31 >> We're not ready to hand the keys over
24:32 to the machines to steer.
24:34 We need humans involved.
24:36 The issue of data discovery applies in your work as well,
24:40 optimizing for sustainability.
24:42 Can you talk a bit about how important that is to you?
24:45 >> Yeah, there's actually very little work,
24:48 I mean, information and transparency
24:50 around sustainability in AI.
24:51 It's always kind of like an uphill battle.
24:53 I started out actually using AI to analyze ESG reports.
24:56 So like the corporate reports that companies tend to file
25:00 every quarter or so about their environmental sustainability,
25:03 governance, and things like that.
25:04 So I started out really mining for that.
25:06 And there's so many issues in that.
25:07 For example, even if you have a PDF table
25:10 and you want to get information from that table,
25:12 finding the columns and the rows
25:14 to know what number is what, that's a huge--
25:16 >> That's what we do.
25:17 >> That's a huge, huge headache.
25:19 Yeah, and it's only getting more and more complex.
25:21 So I feel that, and it's often, you know,
25:22 it's often human analysts going in there
25:25 and there's just so much potential,
25:26 but people tend to focus on the bright, shiny,
25:29 generative AI side of things and not realize that,
25:32 you know, parsing PDFs is super sexy.
25:34 >> Yeah, data scraping, entity resolution,
25:37 data integration, yeah.
25:38 >> And so when I talk about AI, I'm always like,
25:40 can we talk about like the random forests of the world?
25:42 Can we talk about the PDF scrapers?
25:44 Because like, for me, that's where, like both
25:47 from a sustainability perspective,
25:48 but just from like a business perspective,
25:50 that's where the value really is.
25:51 Like, yeah, it's cool to like ask your chatbot,
25:53 you know, whatever, if you can get a new credit card
25:56 'cause you lost it, but it's really, really cool
25:58 to find the information that you're looking for faster.
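
As a sketch of the PDF-table problem Sasha describes, here is a minimal extraction loop using the pdfplumber library. The file name is a placeholder, and real ESG filings need heavy post-processing for merged cells, footnotes, and units buried in headers.

```python
# Minimal sketch of pulling tables out of an ESG report PDF with pdfplumber.
# "report.pdf" is a placeholder path, not a real filing.
import pdfplumber

with pdfplumber.open("report.pdf") as pdf:
    for page_number, page in enumerate(pdf.pages, start=1):
        for table in page.extract_tables():
            if not table:
                continue
            header, *rows = table  # first row is usually, not always, a header
            print(f"page {page_number}: columns = {header}")
            for row in rows:
                print(row)
```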
26:00 >> Well, isn't it cool just to find stuff?
26:02 How much time do we all waste just looking for things?
26:05 And some of the early use cases
26:07 from a knowledge management standpoint are incredible.
26:09 We had a bunch of vendors come in with legal tech solutions,
26:14 AI enabled legal tech, and some of the e-discovery solutions
26:17 where you have to respond in a moment's notice,
26:20 and you have to look through thousands of documents
26:22 >> Without hallucinating.
26:23 Without making stuff up, right?
26:26 >> And to actually get it right, and to find your own stuff,
26:28 and how much time we can waste with that.
26:31 Okay, we could go on for hours, but they won't let us.
26:33 I want to thank you all for all that you brought to the panel
26:36 and so much more to talk about.
26:37 You'll be around to answer some questions from the audience?
26:41 Okay, excellent, great.
26:41 Thank you very much.
26:42 Thank you.
26:43 (audience applauding)