Building Connections Through Open Research: Joelle Pineau of Meta

Joelle Pineau's curiosity led her to pursue a doctorate in engineering with a specialization in robotics, which she describes as her "gateway to AI."

As vice president of AI research at Meta, she leads a team committed to openness in the service of high-quality research, responsible AI development, and community contribution.

Transcript
00:00Hi, Sam here to tell you how you can unlock the transformative power of generative AI
00:07with a new online course from MIT Sloan Executive Education.
00:12You may be wondering what GenAI is and why it's relevant for your business.
00:16In this on-demand online course, led by experts from MIT Sloan and the Schwarzman College
00:21of Computing, you'll explore GenAI's promises and limitations, investigate its applications
00:27for your business, and learn how you can implement a strategy for it.
00:31Visit executive.mit.edu slash smrai to sign up for the course and gain the MIT edge today.
00:40That's executive.mit.edu slash smrai.
00:58Why is being open about your company's AI research more of a benefit than a risk?
01:04Find out on today's episode.
01:06I'm Joelle Pineau from Meta, and you're listening to Me, Myself & AI.
01:12Welcome to Me, Myself & AI, a podcast on artificial intelligence and business.
01:17Each episode, we introduce you to someone innovating with AI.
01:21I'm Sam Ransbotham, professor of analytics at Boston College.
01:25I'm also the AI and business strategy guest editor at MIT Sloan Management Review.
01:31And I'm Shervin Khodabandeh, senior partner with BCG and one of the leaders of our AI business.
01:37Together, MIT SMR and BCG have been researching and publishing on AI since 2017, interviewing
01:44hundreds of practitioners and surveying thousands of companies on what it takes to build,
01:50deploy, and scale AI capabilities and really transform the way organizations operate.
01:57Hi everyone.
01:58Today, Sam and I are happy to be joined by Joelle Pineau, vice president of AI research
02:04at Meta.
02:05Joelle, thanks for speaking with us today.
02:08Hello.
02:09Okay, let's jump in.
02:11A good place to start might be for you to tell us about Meta.
02:14A lot of people know what Meta is, but maybe you could describe it in your own words and
02:19also your role in the company.
02:22Well, as many people know, Meta is in the business of offering various ways for people
02:28to connect and build community, build connections, whether it's through Facebook, WhatsApp, Instagram,
02:34Messenger.
02:35We have billions of people around the world using our products.
02:38I've been at Meta for about seven years now, leading AI research teams.
02:43I'm based in Montreal, Canada, and now I lead FAIR, the Fundamental AI Research team,
02:49across our labs in the US and in Europe.
02:51The role of our group is actually to build next-generation AI systems and models, discover
02:57the new technology that will eventually make the products better, more engaging, safer
03:03as well.
03:04That's a great overview.
03:05Can you give us a sense of what some of those projects are that you're excited about or
03:09you're working on?
03:10You don't have to give us any secrets, of course, but what are some fun things you're
03:13excited about?
03:14Well, I hope we have a chance to talk about it, but there's not a ton of secrets because,
03:19in fact, most of the work that we do is all out in the open.
03:22We adhere strongly to open science principles.
03:25We publish our work.
03:26We share models, code libraries, and so on and so forth.
03:29Our teams cover the full spectrum of open problems in AI.
03:34So I have some teams who are working on understanding images and videos, building foundation models,
03:40so core models that represent visual information.
03:44I have some teams that are working on language models, so understanding text, written, spoken
03:50language as well.
03:51I have some teams doing robotics, so understanding how AI systems move in the physical world,
03:57how they understand objects, people, interactions, and a big team of people who are working on
04:04core principles of intelligence.
04:07So how do we form memories?
04:09How do we actually build relationships between different concepts and ontologies of knowledge,
04:15and so on and so forth?
04:17It seems like there's almost nothing within artificial intelligence you're not working
04:20on there.
04:21Tell us a bit about why you think open is important.
04:25So FAIR has been committed to open research for 10 years now, since day one.
04:30We've really pushed on this because whenever you start a project from the point of view
04:36of making it open, it really puts a very high bar in terms of the quality of the work
04:41as well as in terms of the responsibility of the work.
04:44And so when we decide what algorithms to build, what data sets to use, how to evaluate our
04:51data, how to evaluate the performance of our model through benchmarks, when we know that
04:56all of that work is going to be open for the world to scrutinize, it really pushes us to
05:01put a very, very high bar on the quality of that work, on the rigor of that work,
05:05and also on the aspects of responsibility, safety, privacy, and so on.
05:11The other reason I think that open is really helpful is a lot of researchers come from
05:15a tradition of science where you're always building on the work of others.
05:21Science does not happen in a silo.
05:24And so if you're building on the work of others, there's also a desire to contribute back to
05:27that community.
05:28And so researchers are incredibly interested in having that kind of a culture.
05:32So it helps us recruit the best researchers, keep them.
05:35It is quite different from how other industry labs operate.
05:40And so from that point of view, I think it's definitely a big advantage.
05:44What's interesting is, from the point of view of Meta, there's no concern about not keeping
05:49some of that research internal, in the sense that publishing doesn't in any way stop us from using
05:54this research in our products.
05:56It's not because we've published the results that we can't use them in our products.
06:00Really the power of the product comes from all the people using it.
06:03It doesn't come from having a secret sauce about AI.
06:06And we know how fast AI is moving.
06:08A lot of that knowledge is disseminated across the community very, very quickly.
06:13And we are happy to contribute to that.
06:15That makes a lot of sense.
06:16A lot of my background is in computer security.
06:19And so I think openness is a great segue there from security because of both of those points.
06:25First, in security, anybody can design something that they can't break.
06:29But the question is, can someone else break it?
06:31And I think that's always a more interesting and more difficult problem.
06:35But then there's also the idea of building on others' work.
06:39I think that's huge.
06:40If you think about what's happened in research across history, research historically
06:46happened in academia.
06:48And then eventually basic research became more applied within industry.
06:54But it seems like with artificial intelligence, a lot of this has shifted to industry first.
07:00In fact, what you described to me sounds very much like an academic lab.
07:05So is that a problem that we're moving basic science from academia?
07:09Or are we?
07:10Maybe I'm begging the question.
07:11Is this a move that's happening?
07:12Is this a problem?
07:13What do you think?
07:14Well, I think both academia and industry have advantages when it comes to AI research.
07:20And I'll maybe not speak broadly across all areas of research.
07:24But for AI, in today's context, I do think both have significant advantages.
07:31On the one hand, on the industry side, we do have access to vast resources, in particular
07:36with respect to compute.
07:38And so when it comes to scaling some of the large language models, you need access to
07:43thousands of GPUs, which is very expensive.
07:46It takes a strong engineering team to keep these running.
07:49And so it's a lot more feasible to do this with a large industry lab.
07:54On the academic side, it's harder to have the resources and the personnel to be successful
07:59in terms of scaling large models.
08:01Now, on the academic side, there are advantages.
08:04And I do have a position in academia, I have many colleagues, I have some grad students.
08:09So I think you have to be very smart about what research questions you ask, depending
08:13on your setting.
08:14On the academic side, we have the privilege of often working in highly multidisciplinary
08:19teams.
08:20I work with people who come from philosophy, cognitive science, linguistics, and so on
08:25and so forth.
08:26We ask much broader questions.
08:28And as a result, we come up with different answers.
08:33One of the places where I sort of track the different contributions is looking at some
08:37of the very top conferences in the field and seeing like, where do the outstanding paper
08:41awards go?
08:42Do they go to academia?
08:43Do they go to industry?
08:46And in many cases, we see a mix of both.
08:48There's some really seminal work coming out of both industry and academia that is completely
08:55changing the field and bringing forth some breakthroughs in AI.
08:58So I'm quite optimistic about the ability for researchers across different types of
09:05organizations to contribute.
09:07And beyond that, we haven't talked about startups, but there's a number of small startups that
09:11are doing some really phenomenal work in this space as well.
09:15And so overall, having a thriving ecosystem is in everyone's advantage.
09:20In a lot of our work, I'm more interested in looking at ways that we can work together.
09:26Because in general, I strongly believe that having more diverse teams helps you ask different
09:32questions.
09:33So a lot of the intent behind our work on open sourcing is actually to make it easier
09:37for a more diverse set of people to contribute.
09:40You made the analogy with the security community really relying on open protocols.
09:44I think there's a lot of that in how we tackle this work from the sense of like, I have amazing
09:50researchers who are putting their best every day into building models.
09:53But I do believe by exposing these models to a broader community, we will learn a lot.
09:59So when I make the models available, you know, researchers in academia and startups take
10:04these models, in some cases, find flaws with them, give some quick feedback.
10:09In many cases, we see derivatives of the model that have incredible value.
10:13One of the big launches we had in the last year is our Llama models: Llama 1, Llama 2, Llama 3.
10:20Thousands of people have built derivative models from these, many of them in academic
10:24labs, fine-tuning models, for example, to new languages, to open up the technology to different
10:30groups.
10:31And to me, that's where a lot of the value of having different players really comes from.
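(For readers who want to see what such a derivative looks like in practice, here is a minimal, hypothetical sketch of fine-tuning an open Llama-style checkpoint on a new-language corpus with the Hugging Face transformers and datasets libraries. The model name and corpus file are placeholders, not the pipeline any particular group used.)

```python
# Hypothetical sketch: adapting an open Llama-style checkpoint to a new
# language. Model name and corpus file are placeholders (assumptions).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; gated, requires access
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Monolingual text in the target language (placeholder file).
corpus = load_dataset("text", data_files={"train": "target_language.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama-newlang",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=train_set,
    # mlm=False gives standard next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```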
10:37I think we certainly see the value in, let's say, collaborating and iterating and keeping
10:42things open, but that's not always guaranteed to happen.
10:46What kind of incentives are there for us all to work together like this?
10:51It's always hard to predict the future, and in particular with AI and how fast things
10:55are moving.
10:56And so I completely agree with you. What I will say is, as you mentioned, there's
11:00a strong culture of open protocols at Meta that predates the AI team; the basic
11:06software stack is also based on many open protocols.
11:11And so that culture is there to this day; it continues, and it goes all the way
11:15to the top of the leadership. The commitment to open sourcing the models is strongly supported
11:21by Mark Zuckerberg and his leadership team.
11:23So I don't see this stopping anytime soon.
11:27What is going to be important is that we continue to release models in a way that is safe.
11:32And that's a broader conversation than just one company.
11:35Governments have several points of view on how we should think about mitigating risks
11:42for these models.
11:43There's also a lot of discussion about how to deal in particular with frontier models,
11:48the largest, most capable models.
11:51And so we're going to have to have these conversations as a society, beyond just the
11:55labs themselves.
11:57You raise the specter of risks, you know, the worry out there is that, oh, my gosh,
12:01these models are going to take over everything, and our world is going to collapse, and this
12:06is an existential threat.
12:08I'm kind of setting you up with that straw man, but do you buy that?
12:12I don't really spend a lot of time planning for the existential threat in the sense that
12:18many of these scenarios are very abstract.
12:21They're excellent, you know, stories in terms of science fiction.
12:26But in terms of actually taking a scientific and rigorous approach to that, it's not necessarily
12:32the existential risks that take most of my attention.
12:36I will say with the current generation of models, there are several potential harms
12:41to different populations.
12:42You know, algorithms have been known to have biases towards underrepresented groups, for
12:47example, in facial detection systems, as well as being, on the language side, very Anglo-centric.
12:55And so I do look quite carefully at the current set of risks and try to measure them as much
13:02as possible in a rigorous way.
13:03We build mitigations whenever we can.
13:06We've invented new techniques for doing watermarking to make sure that false information can't circulate.
13:12We've done a lot of work on bias assessment so that we can actually measure the fairness
13:17performance of our algorithms.
13:19So I look a lot more at current risks rather than these really long-term ones, just because
13:25I feel we can have a handle on it that is based on a rigorous approach, based on metrics,
13:30based on really analyzing what the harms are and how to mitigate them.
13:34For the very far-fetched scenarios, it's all really hypothetical.
13:37It's hard to build good systems.
13:40It's hard to do good science.
13:42It's also hard to do good policy.
13:44Yeah, I think your point's well taken about bias and metrics that you mentioned, for example,
13:50these models that have biases built in, but I mean, my gosh, they're built off training
13:54data that has massive bias built in.
13:56I find it hard to attribute that to the model itself and more to the training data.
14:01And your point there is that you can build in bias mitigation there.
14:04What kinds of things have you done towards that?
14:06Yeah, in fact, on the question of bias, it's a little bit of both.
14:10There's no doubt that many of our data sets are biased.
14:14The data sets are a reflection of our society, and unfortunately, a large amount of unfairness
14:20remains: discrimination, as well as the underrepresentation of some groups in our society.
14:25So there's no question that the data sets themselves don't start off on a very
14:29good foot.
14:30However, the models themselves also tend to amplify these biases, in that most of the machine
14:36learning techniques we have today are very good at interpolating the data.
14:41So you sort of take data distributed in a certain way, and the models will really push
14:46towards the norm of that data.
14:49The models tend to be very poor at extrapolating.
14:51So when making predictions outside of the data distribution, they tend to have a larger error.
14:56So if anything, when we train the models, and we try to sort of minimize the error,
14:59you do well by predicting more towards the norm versus towards the sides of that distribution.
15:06And so the data is responsible, the models are also responsible for doing that.
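(A toy illustration of that interpolation effect, ours with synthetic data: a classifier fit to minimize average error on a 95/5 split calls an ambiguous point for the majority, because that is what minimizes error on the data it saw.)

```python
# Toy illustration: minimizing average error on imbalanced data pulls
# predictions toward the norm of the training distribution. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
majority = rng.normal(loc=0.0, scale=1.0, size=(950, 1))  # 95% of samples
minority = rng.normal(loc=4.0, scale=1.0, size=(50, 1))   # 5% of samples
X = np.vstack([majority, minority])
y = np.array([0] * 950 + [1] * 50)

clf = LogisticRegression().fit(X, y)
# A point exactly halfway between the two modes is still labeled "majority":
print(clf.predict([[2.0]]))        # -> [0]
print(clf.predict_proba([[2.0]]))  # probability mass leans toward class 0
```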
15:11And then there's the way in which we deploy the models.
15:14We tend to often look at aggregate statistics.
15:17So we'll look at the overall performance of the model.
15:20And based on the overall performance, we'll say, great, we've got 95% performance on this
15:24model, it's ready to be deployed.
15:27But we don't take the time to look at a more stratified analysis of results.
15:32What is the performance with respect to different groups?
15:36And how are these groups differentially impacted with respect to how the system is deployed
15:41in a bigger system?
15:43I think there's different points where we can be much more rigorous and thoughtful to
15:48make sure that we don't enhance biases.
15:52And ideally, that we actually use AI towards a more fair and equitable society.
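(The stratified analysis she describes can be as simple as the sketch below, with made-up labels and predictions: an aggregate score that looks deployable hides a group on which the model fails completely.)

```python
# Sketch with made-up data: aggregate accuracy hides per-group failure.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1])
group  = np.array(["a"] * 8 + ["b"] * 2)

print("aggregate:", (y_true == y_pred).mean())  # 0.8 -- looks deployable
for g in np.unique(group):
    mask = group == g
    print(f"group {g}:", (y_true[mask] == y_pred[mask]).mean())
# group a: 1.0, group b: 0.0 -- the aggregate number never showed this
```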
15:57Yeah, I think that point of averaging is huge.
16:04Models feel right when they give us the answer we're expecting.
16:08The image generation feels right when it gives us the image that fits our stereotypes.
16:14And fighting that seems like it's a quite difficult problem.
16:18But on the other hand, I feel like these models can help solve it, in a way: we're not
16:22going to convince everyone in the world to suddenly stop being biased tomorrow or suddenly
16:26drop a stereotype tomorrow.
16:28But we could convince an algorithm not to have a stereotype tomorrow by tweaking some
16:33weights and changing things.
16:34And so that gives me a little more hope that we can manage the risks.
16:37Perhaps we're not at the existential threat yet, but managing bias seems more plausible
16:42to me that way.
16:43I think one of the challenges is determining what we want out of these models, right?
16:48We've seen some pretty egregious examples recently from groups with, I assume, well-meaning
16:54intent to rebalance datasets, especially around the representation of, for example, different
17:00racial groups in images.
17:01You know, of course, if someone asks for like an image of an engineer, you don't want only
17:05men to show up.
17:06You would hope to have a few women show up.
17:09And there's ways to rebalance the data.
17:11There are ways to sort of compensate at the algorithmic level.
17:16But sometimes you end up with very unusual results.
17:21And so it's also a question of what are the distribution of results that we expect and
17:27that we tolerate as a society.
17:29And in some cases, that's not very well defined, especially when the representation is biased
17:35within the real world as well.
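(One standard rebalancing lever she alludes to is reweighting training examples by inverse group frequency, so that each group contributes equally to the loss; a minimal sketch with synthetic data follows. As she notes, pushing such corrections too hard can itself produce unusual results.)

```python
# Sketch: inverse-frequency sample weights as one rebalancing lever.
# Synthetic data; real rebalancing involves many more judgment calls.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
group = np.array([0] * 900 + [1] * 100)  # 90/10 group imbalance
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

counts = np.bincount(group)
# Each group's total weight becomes equal, regardless of its size.
weights = len(group) / (len(counts) * counts[group])

plain    = LogisticRegression().fit(X, y)
weighted = LogisticRegression().fit(X, y, sample_weight=weights)
# Comparing per-group error rates of the two fits shows the trade-off.
```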
17:38That seems incredibly hard, because the problem switches from being an engineering problem,
17:42and engineering problems you can typically solve with enough pizza and caffeine.
17:48When you get to these more difficult problems, they tend to involve trade-offs and they tend
17:52to involve choices.
17:53And these choices are very difficult.
17:56They're not about improving an algorithm, which is the kind of thing that we can get into.
18:00But knowing what it should do seems like a much harder problem.
18:03And again, that seems much worse, too, as these technologies become so pervasive.
18:08If, for example, Meta does make these algorithms available to people as part of the open source
18:13process, by definition, more people have access to them and then more people have to make
18:18these difficult decisions.
18:20That seems much harder to scale than algorithms.
18:24I agree.
18:25I think in many ways, deciding as a society what we want these models to optimize for
18:32and how we want to use them is a very complicated question.
18:36That's also the reason why at Meta we often open source the research models.
18:40We don't necessarily open source the models that are running in production.
18:44That would open us up, I think, to undue attacks.
18:47And it's something we have to be careful about.
18:49But we often open our research models.
18:51And so that means very early on, if there are major opportunities to improve them, we
18:56learn much faster.
18:58And so that gives us a way to essentially make sure that by the time a model makes
19:03it into product, it's actually much better than the very first version.
19:08And we will release multiple versions as the research evolves, as we've seen, for example,
19:12with the Lama language models I mentioned earlier.
19:14We released Lama 1, Lama 2, Lama 3, and so on.
19:18And every generation gets significantly better.
19:21Some of that is, of course, the work of our own fabulous research teams.
19:25But some of that is also the contributions from the broader community.
19:29And these contributions come in different forms.
19:31You know, there's people who have better ways of mitigating, for example, safety risks.
19:36There are people who bring new data sets that allow us to evaluate new capabilities.
19:42And there's actually some very nice optimization tricks that allow us to train the models faster.
19:48And so all of that sort of converges to help make the models better over time.
19:53I think the analogy that sticks with me is how image processing improved after 2012
20:00and the ImageNet competition, which, again, originally came out of academia,
20:05Toronto, but then exploded as everyone could see what everyone else was doing.
20:11Everyone brought something better, a faster implementation, a smaller implementation,
20:14a bigger one.
20:16And the accuracy just over the very short time got really truly phenomenal.
20:21Yeah.
20:22Let's shift gears a little bit.
20:24Joelle, you're an AI researcher and also a professor.
20:28How did you find yourself in this line of work?
20:30I'm very driven by curiosity, I have to say.
20:34I first got into robotics, that was sort of my gateway into AI.
20:40I was doing an undergrad degree in engineering at the University of Waterloo.
20:44And near the end of that, I had the chance to work on a robotics project, building a
20:47six-legged walking robot, and in particular, the sensor system for that robot.
20:53So we had some sonars and had to process the information and, from that, decide sort of where
20:57the obstacles in the environment were.
20:59And so that led me to doing graduate studies, master's, PhD at Carnegie Mellon University
21:04in Pittsburgh, which is a phenomenal place to study robotics.
21:08And from there, I really got into machine learning.
21:11I found that for the robot to have relevant, timely information and to be able to make
21:17decisions, you needed to have a strong model.
21:20So my thesis work was in planning under uncertainty: the ability to make decisions when there's
21:25some uncertainty about the information, and developing algorithms for doing that.
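(Planning under uncertainty is classically formalized as a partially observable Markov decision process, or POMDP, where the agent maintains a belief over hidden states. A textbook sketch of the core belief update, not her thesis code:)

```python
# Textbook POMDP belief update: b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) b(s)
import numpy as np

def belief_update(b, T, O, a, o):
    """b: belief over states; T[a][s, s']: transition probabilities;
    O[a][s', o]: observation probabilities. Returns the posterior belief."""
    predicted = b @ T[a]                   # predict the next-state distribution
    unnormalized = O[a][:, o] * predicted  # weight by observation likelihood
    return unnormalized / unnormalized.sum()

# Two hidden states, one action, two observations:
T = {0: np.array([[0.9, 0.1], [0.2, 0.8]])}
O = {0: np.array([[0.8, 0.2], [0.3, 0.7]])}
print(belief_update(np.array([0.5, 0.5]), T, O, a=0, o=1))
```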
21:30And from then on, I took on an academic career at McGill University in Montreal, where I'm
21:34still based, and pursued work across areas of machine learning.
21:39A lot of applications of machine learning in healthcare.
21:42We have a fabulous faculty of medicine here at McGill, and so I had many very exciting
21:48partnerships there.
21:50And also a lot of work on building dialogue systems, which today, you know, we recognize
21:55as language models and chatbots, but I was building some of the very preliminary versions
22:01of this work in the early 2000s.
22:03And so, because I do use curiosity as my main motor, it has allowed me to work across
22:09several subfields of AI: robotics, language, perception, and applications.
22:15And so that gave me a pretty good set of knowledge and experience to then come into a place like
22:21Meta, where the teams that I work with do fundamental research, but we work closely with product
22:28teams and try to push the frontier in terms of the science, but also push the frontier
22:33in terms of new products and new experiences.
22:37So clearly there's lots that Meta is doing around the core Meta products, but there's also
22:42the general scientific discovery that Meta research is working on.
22:46What are some examples of projects that are in progress there?
22:50This is such an interesting area.
22:52I think there's enormous potential to use AI to accelerate the scientific discovery
22:58process.
22:59When we think about how it works, often, you know, let's say you're trying to discover
23:02a new molecule or discover a new material.
23:05There's a very large space of solutions, often combinatorially large, and the traditional
23:11methods have us looking through the space of molecules one by one, and we take them
23:16into the wet lab and we test them out for the properties that we want, whether it's
23:19to develop a new medication or develop a new material.
23:24And so we've had a few projects over the years that look at this problem.
23:28More recently, we have a project that's looking specifically at direct air carbon capture:
23:34really the desire to build new materials that could capture carbon, of course,
23:40to address our environmental crisis.
23:42Now when you do this work, there are many steps.
23:44One of them is even just building up the data set for doing that.
23:48So we've built up a data set, synthesizing many different possibilities for this problem.
23:55And out of that, we often partner with external teams to try to validate which of these solutions
24:01may bring the most value.
24:03We've done previous work also in the area of protein synthesis that had a similar flavor,
24:08though the specifications of the proteins were a little bit different.
24:12But in a fundamental way, the problem looks very similar.
24:16So I'm really excited to see what comes of this.
24:19I've had some cases where partner teams came to me and said, in the space of about a year
24:24of working with AI, they were able to cut down the scientific process in terms of experiments
24:31that would have taken them like 25 years if they were going through the search space
24:37with more traditional methods.
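(The speedup she describes typically comes from replacing one-by-one wet-lab screening with a cheap learned surrogate that ranks candidates, so only the most promising ones go to the lab. A hedged sketch with synthetic stand-ins, not Meta's actual pipeline:)

```python
# Sketch: screen a combinatorially large candidate space with a learned
# surrogate and send only top candidates to (expensive) lab validation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
candidates = rng.normal(size=(100_000, 16))  # featurized candidate materials

# A small labeled set from past, expensive experiments (synthetic here).
X_lab = candidates[:500]
y_lab = X_lab[:, 0] - X_lab[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

surrogate = RandomForestRegressor(n_estimators=100).fit(X_lab, y_lab)
scores = surrogate.predict(candidates)       # cheap to score everything

top_candidates = np.argsort(scores)[-100:]   # the only ones lab-tested
```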
24:39And I think that's something that we're seeing from other people we've talked to.
24:41We talked to, for example, Moderna about their vaccine development and how AI
24:46helped explore that space.
24:47And we talked with Pirelli about how they use it for tire components.
24:52So I think this idea of exploring a combinatorially large space is really pretty fascinating.
24:58It's not something that I would have expected Meta to be involved with at first blush.
25:04I can see, for example, the carbon dioxide from the air problem.
25:07That's probably just something you're facing in data centers, but I wouldn't have expected that.
25:11Yeah, you bring up the case of data centers; I would say that's a prime application for this.
25:16We are building several data centers and it's in everyone's interest for those to be very
25:21energy efficient. We also have some strong commitments in terms of using renewable energy.
25:26And so there is a strong motivation in that space.
25:29And not to be forgotten, we also have all of the work that's happening towards the
25:34metaverse, the Reality Labs side of Meta, which is really the longer-term vision of building
25:40AR and VR devices.
25:42And when it comes to that type of hardware design, there's a lot of really hard problems,
25:47whether it's in the space of optics or other components, where AI-guided design can actually be
25:53very useful to accelerate that work.
25:56Yeah, that's pretty interesting.
25:57We actually just talked with Tye Sheridan, who is the star of the Ready Player One movie.
26:01And so that's a perfect segue from the metaverse to there.
26:05We have a segment where we ask you some rapid-fire questions.
26:08So, just the first thing that comes to your mind: what's the biggest opportunity for artificial intelligence right now?
26:15I do think that the ability to open up, to connect people across languages is huge.
26:22We've had systems where we're building up machine translation to go up to 200 different languages.
26:28But there are many more languages that are spoken only.
26:31And so we're really working toward the ability to build technology for anyone to understand anyone else across the planet.
26:38I think that's going to be really crucial for us to figure out how to all live together on this earth.
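(The 200-language effort she mentions is Meta's publicly released No Language Left Behind work; assuming the open NLLB-200 checkpoint and its FLORES-style language codes, usage can be as short as this sketch:)

```python
# Sketch: many-to-many translation with Meta's open NLLB-200 checkpoint.
# Checkpoint name and language codes follow NLLB conventions (assumption).
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",   # source: English, Latin script
    tgt_lang="fra_Latn",   # target: French, Latin script
)
print(translator("Science does not happen in a silo.")[0]["translation_text"])
```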
26:44So what's the biggest misconception that people have about AI?
26:49I don't know if it's the biggest, but one that really gets to me is thinking of AI as a black box.
26:54People think, you know, information goes in, something happens and then something comes out.
26:58I think in many ways, from where we stand today, the human brain is a lot more of a black box than AI.
27:04When I have an AI system, I can trace down with a lot of precision how information circulates,
27:09how it's calculated and how we got to the output.
27:12I cannot do that with a human brain in the same way.
27:15So, yeah, whenever someone says AI is a black box, I sort of frown a little bit and feel like, no, it's a complicated box.
27:22But we have a lot of understanding of what goes on inside there.
27:26Yeah, other people's brains make no sense to me.
27:28Mine makes perfect sense, but everyone else's doesn't.
27:30What was the first career that you wanted?
27:33Oh, early on, I wanted to be a librarian.
27:37I loved reading books.
27:38I still do.
27:39I still read quite a bit.
27:40And I thought, you know, having a job where you can just sit in a space filled with books and read all day sounded delightful.
27:47When do we have too much artificial intelligence?
27:50When are we trying to put that square peg in a round hole?
27:54I don't think of it as like one day we have enough and one day we have too much.
27:58I think it's really about being smart about where you bring in AI into a system.
28:05So already there are places where AI shouldn't go,
28:08or at least not the versions of the models we have today,
28:12and there are places where we could bring in AI much more aggressively, I think.
28:15So I think what's really important is figuring out how to bring it in in a way that it brings real value,
28:22economic value, of course, but real social value as well, and being thoughtful about that.
28:27Yeah, that ties to your previous answer about the difficult parts being how we use the technology, not the technology itself.
28:34So what's one thing that you wish that artificial intelligence could do now that it can't do currently?
28:40I wish that AI systems could understand each other.
28:44Right now we're building a lot of AI systems that are individual.
28:48They're all fine-tuned for an individual performance.
28:51But once we start deploying many of these AI agents together in the same place,
28:56our methods for understanding the dynamics between several agents are very primitive.
29:03And I think there's a ton of work to do.
29:05You know, if we look to humans as the society of agents that is most evolved today,
29:10we derive a lot of our longevity, our robustness, our success through our social ties.
29:17And AI systems today have no idea how to build social ties.
29:21That's interesting, because I think we spend so much time thinking about the human-computer interface
29:25and the computer-human interface and not as much about the computer-computer interface.
29:31This has been a fascinating discussion.
29:32It really kind of opened my eyes to all the things that Meta is doing beyond just that sort of surface research
29:38that's more obvious in the newspapers and media reports.
29:42Thanks for taking the time to talk with us today.
29:44Yes, very inspiring conversation. Thank you.
29:47My pleasure. Happy to be with you.
29:50Thanks for listening to Season 9 of Me, Myself & AI.
29:54Our show will be back in September with more episodes.
29:58In the meantime, if you missed any of the earlier episodes on Responsible AI,
30:03please go back and have a listen.
30:05We talk with Amnesty International, Airbnb and Salesforce.
30:10Thank you for listening and also for reviewing our show.
30:14Talk to you soon.
