Brainstorm AI Singapore 2024: A Survival Guide to our Superpowered Future

Jeremy Kahn, AI Editor, FORTUNE
Moderator: Maiwen ZHANG, Executive Director, Executive Editor, FORTUNE China, Co-chair, Fortune Brainstorm AI Singapore
Transcript
00:00So, Jeremy, your book offers very dramatic predictions
00:04of AI's impact over the next decade.
00:06Suppose we can really travel through time to the year 2034.
00:12As a writer and thinker,
00:13what do you think will be the most underestimated change brought
00:16about by AI?
00:18Yeah, so I think there are several things,
00:20but I think we're vastly underestimating the extent
00:24to which this is going to change the way we interact
00:27with the internet, much as
00:29Google changed the way we all search for information.
00:32I think AI chatbots,
00:35and actually the more agentic AI that's coming along, are really going
00:39to change the way we interact with digital information in a way
00:43that's very profound.
00:44It really is like the next platform shift,
00:46and I think in the future we're not really going to do a lot
00:49of this searching ourselves.
00:50We're going to have these AI agents that kind of go out into the world
00:53and do our research for us, do our shopping for us, plan our trips,
00:58and I think that's going to sort
00:59of profoundly change the way companies potentially interact
01:02with customers and the way we all interact with each other
01:05in ways we haven't really grappled with, and I think a lot
01:07of companies aren't really grappling with.
01:09So I think that's one of the biggest changes
01:11if you project forward a decade that we'll see.
01:13I also think, for all of us who are knowledge workers,
01:16there's going to be a kind of co-pilot
01:18that works alongside us in our jobs, and that's going to change
01:21how we function in our daily jobs, and I think a lot
01:25of people aren't thinking too hard
01:27about that at the moment.
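
To make the "agent" idea concrete, here is a minimal, purely illustrative sketch of the plan-act-observe loop such systems run. Everything in it is a hypothetical stand-in: call_model represents an LLM API call, and the toy search tool represents real research or shopping integrations.

```python
# Illustrative sketch of an agentic loop: the model plans, acts through
# tools, observes the result, and repeats until it can answer.

def call_model(history: str) -> dict:
    # Stub LLM: a real system would call a model API here and get back
    # either a tool request or a final answer, based on the history.
    if "search(" in history:
        return {"answer": "Here is a trip plan based on the search results."}
    return {"tool": "search", "arg": "flights and hotels for the trip"}

def run_agent(goal: str, tools: dict, max_steps: int = 10) -> str:
    history = f"Goal: {goal}"
    for _ in range(max_steps):
        decision = call_model(history)                     # plan
        if "answer" in decision:                           # done
            return decision["answer"]
        result = tools[decision["tool"]](decision["arg"])  # act
        history += f"\n{decision['tool']}({decision['arg']}) -> {result}"  # observe
    return "Stopped after max_steps without an answer."

print(run_agent("Plan my trip", {"search": lambda q: "3 options found"}))
```
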
01:29Your answer reminds me that one of the seminars I listened
01:34to mentioned a topic called general computer control,
01:36which sounds harmless, but think about it.
01:39This could be a shortcut to AGI:
01:42an AI that can understand what's on the screen, take actions,
01:46and interact with cyberspace through a PC.
01:49So if not properly regulated, this could be followed by things
01:54like feeding the public with misinformation,
01:56fake news, et cetera.
01:58So is this like opening a new Pandora's box?
02:02Yeah, so that's the kind of agentic AI
02:05that I was talking about, which I do think is coming,
02:08but with some of these risks, you have to look
02:11at how much additional risk you get from the technology.
02:15So if you talk about disinformation, for instance,
02:18we already have quite a lot
02:19of human-generated disinformation out there.
02:21The question is, if you have an agentic system,
02:24is it that much easier to produce something at scale,
02:26and then also put it out into the world and kind
02:29of create a whole strategy for a whole disinformation campaign?
02:33With these digital agents, there are some
02:35interesting issues about control.
02:37What I worry about most is actually
02:39scams and crime and this sort
02:43of thing, because I think right now there are already systems
02:45where you can tell ChatGPT,
02:47like what's a way I can make $10,000 in an hour?
02:50And it will tell you some ideas for doing that,
02:52but you have to then go ahead and implement that idea yourself.
02:55And so far, when people have done these experiments,
02:57mostly ChatGPT recommends you run some sort of marketing thing
03:01where you're then paid to market somebody else's product
03:04on the internet, T-shirts or something like that.
03:07But it could very well in the future be that, oh, you know,
03:10go make me $10,000, you know, in the next few hours,
03:13and don't tell me, you know,
03:15I don't want to know how you're going to do it.
03:16I think you could very well get a system that says, oh,
03:19well a great way to do that would be
03:21to run a phishing campaign, for instance.
03:22And we'll just send a bunch of scam letters to people
03:24and tell them to transmit money to your bank account.
03:27And the question is, if your system goes off and does that,
03:29like first of all, who's liable for that?
03:31But also that's a risk that we really want to try to avoid.
03:33So I think the guardrails we have
03:34around agentic systems are going to be really important.
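
As one illustration of what such a guardrail could look like, here is a minimal sketch in which every action an agent proposes is screened before it is executed. The keyword rules and function names are toy assumptions for the sketch, not any real product's safety system.

```python
# Illustrative guardrail: screen each proposed agent action against a
# policy before executing it, refusing and logging anything that matches.

BLOCKED_KEYWORDS = {
    "phishing": "fraud",
    "scam": "fraud",
    "transfer money": "unauthorized_payment",
}

def classify_risks(action: str) -> set:
    # Toy policy check; a real system might use a separate classifier model.
    return {label for kw, label in BLOCKED_KEYWORDS.items() if kw in action.lower()}

def guarded_execute(action: str, execute) -> str:
    risks = classify_risks(action)
    if risks:
        # Refuse instead of acting, and surface the reason for human review.
        return f"BLOCKED ({', '.join(sorted(risks))}): {action}"
    return execute(action)

print(guarded_execute("Run a phishing campaign for $10,000", lambda a: "done"))
print(guarded_execute("Draft a T-shirt marketing post", lambda a: "done"))
```
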
03:38So, as you wrote, if we can try to govern AI the way we did
03:44with nuclear power, for example, what do you think our governments
03:50and the international community can do now to prevent,
03:54you know, all this from happening?
03:56Yeah. Well, I do talk in the book a little bit
03:58about different analogies for governance,
04:00including nuclear power.
04:02And I do think we need different levels of regulation.
04:06I think we should do some stuff at the international level
04:09that puts some basic limits on what this technology can do.
04:11As you'll see if you read the book, I'm not super worried
04:17about the existential risk scenario, but I'm worried enough
04:20that I think we should take some action to head that off.
04:24I think it makes sense to do that.
04:25I don't think that that should sort of crowd out what we're doing
04:29to deal with more near-term risks, such as disinformation,
04:33such as bias in AI systems.
04:35I think we need other regulation to deal with that.
04:37And that could actually be done more at the national level.
04:39I do think sort of every country is going to need a regulator
04:42that actually understands the technology, has the power to kind
04:44of look at what technology companies are actually building,
04:48and has some meaningful control over that.
04:51And we need some standards around what these controls should be.
04:55I do think there are risks here,
04:57and we need to have a regulatory regime.
04:59I don't think you can just have a completely laissez-faire system.
05:03I think your book mentioned a lot of things that are actually working.
05:06One of them is called constitutional AI, which is adopted
05:11by Anthropic's large language model,
05:14and which gives the AI model written principles,
05:18a constitution to follow.
05:19So what should be in this constitution, or maybe what is
05:22already written in it?
05:23Yeah, well, this is an interesting idea.
05:25So constitutional AI, for those who don't know,
05:26is something that was invented by the folks at Anthropic,
05:29which is one of the sort of leading LLM companies,
05:33founded by a group that broke off from OpenAI.
05:36And their idea was to give an AI system a written set of principles
05:40that it should check every answer it gives against,
05:44to see if it conforms with these principles.
05:46This does seem to produce, you know, more robust guardrails
05:49than some other ways of trying to create guardrails.
05:52However, it's not foolproof.
05:54There are still ways around this,
05:55so I think we still need more research.
05:56But I think it's not a bad idea.
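
As a rough sketch of the mechanism Kahn describes, here is a critique-and-revise loop in miniature. call_model is a hypothetical stub standing in for an LLM call, and the two principles are invented examples, not Anthropic's actual constitution.

```python
# Illustrative constitutional-AI loop: draft an answer, check it against
# each written principle, and have the model revise it when it fails.

PRINCIPLES = [
    "Do not assist with illegal or harmful activity.",
    "Avoid deceptive or manipulative content.",
]

def call_model(prompt: str) -> str:
    # Stub LLM: a real system would return a draft, a verdict, or a revision.
    if prompt.startswith("Critique:"):
        return "NO"  # pretend the draft passes every principle
    return "Draft answer."

def constitutional_answer(question: str) -> str:
    draft = call_model(question)
    for principle in PRINCIPLES:
        verdict = call_model(
            f"Critique: does this answer violate the principle "
            f"'{principle}'? Reply YES or NO.\nAnswer: {draft}"
        )
        if verdict.strip().upper().startswith("YES"):
            # Ask the model to rewrite its own answer to satisfy the principle.
            draft = call_model(f"Rewrite to follow '{principle}': {draft}")
    return draft

print(constitutional_answer("What's a way I can make $10,000 in an hour?"))
```
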
05:59Then the question becomes, well, whose principles are
06:02in this constitution?
06:03And that's a very fraught issue.
06:05And I think Anthropic's done some interesting work on trying
06:08to survey the public on what do they think should be
06:10in this constitution.
06:11And they've found, you know, two-thirds of the people kind
06:13of agree on what should be in the constitution.
06:15The problem is that the other third vehemently disagree
06:18with the two-thirds.
06:19So we're still going to have to come
06:21up with a way of resolving this.
06:23And that's one of the reasons I think there's going
06:25to be geographic diversity among
06:29chatbots,
06:30because we're going to need different guardrails
06:32for different places.
06:33And different societies will think certain things are more appropriate
06:35than others.
06:36But there will be common areas.
06:38Yeah, I think there will be commonalities.
06:39And that's one of the things we could try to do
06:40at the international level: come up with what those commonalities are.
06:43But, you know, counterintuitively,
06:46is it correct to say that maybe we need more powerful AI models
06:50in order to make sure that they will be safer?
06:54Because we simply need them to be
06:56that sophisticated to make it happen.
06:59Well, yeah, this is, again, an idea from the folks at Anthropic.
07:02Dario Amodei, who's the CEO of Anthropic, has this idea
07:04that you have to build 90% of the unsafe thing in order
07:07to do the research on how to make the system safe.
07:10I think that's a little bit of a dangerous way of thinking.
07:12But I do think there's a little something to it: we need
07:16to advance the systems to a certain degree
07:18to test, you know, how we could control them.
07:20But it's also one of the reasons I think you need a regulator there,
07:23sort of looking over the shoulder of the company to say, okay, well,
07:25at what point are we reaching a limit
07:27at which we shouldn't go any further?
07:29I also think there's some interesting research
07:30on if you've built something that's relatively powerful,
07:32how do you distill the learnings from a large model
07:35down to a smaller model?
07:37And I think there's some interesting research that's come
07:39out on that.
07:40And we may end up, through building these larger,
07:44more powerful models, creating smaller models
07:47that are actually more controllable.
07:49But it will be because we've done the work on the large models.
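
For readers curious what that distillation step looks like, here is a minimal sketch of the standard teacher-student recipe (written in PyTorch as an assumption; no framework is named in the conversation): the small student model is trained to match the large teacher's softened output distribution.

```python
# Illustrative knowledge distillation: the student mimics the teacher's
# output distribution, softened by a temperature T (Hinton et al., 2015).

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T: float = 2.0):
    student_logp = F.log_softmax(student_logits / T, dim=-1)
    teacher_p = F.softmax(teacher_logits / T, dim=-1)
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_logp, teacher_p, reduction="batchmean") * (T * T)

# One training step, sketched: the teacher is frozen, the student learns.
# with torch.no_grad():
#     teacher_logits = teacher(batch)
# loss = distillation_loss(student(batch), teacher_logits)
# loss.backward(); optimizer.step()
```
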
07:51And the last chapter, the very ending
07:54of the book, actually took my thinking a step further.
07:58Last week, my kids and I visited the Mogao Grottoes located
08:03in northwest China.
08:05So, standing inches from the ancient murals, mostly colorful paintings
08:11and religious writings on the cave walls,
08:14I was actually profoundly shaken by the fact that all this was created
08:18by human beings just like us more than 1,000 years ago.
08:22But think about this:
08:23when our descendants see our writings, paintings, even music sheets,
08:27will they still be amazed knowing
08:30that these could actually be the creations of machines?
08:33So, will this fact discourage today's artists and writers?
08:37I hope not.
08:39I think it is a potential risk
08:41that people will stop appreciating, you know,
08:43human-created objects and art.
08:46But I actually don't think it's a huge risk.
08:48I think there's plenty of room here still for human creativity.
08:50And if you look at the most profound contributions,
08:54particularly to advancing the edge of any art,
08:57they're still going to be made by humans.
08:58It may be done by humans in collaboration with AI
09:01and using AI as a tool.
09:03But ultimately, particularly if you look
09:06at how a lot of these AI systems are trained,
09:09they're trained on past human works.
09:11And if you're trying to advance where art is going,
09:14it's very hard for these systems to actually make an advance
09:17that goes beyond their training data.
09:19In fact, they really can't in many cases do that.
09:22But a human can.
09:23A human can potentially prompt this, you know,
09:25use what AI produces but then put a human touch on top of that
09:28that does really advance the art form.
09:30In the book, I talked to a number
09:33of artists who are trying to do this.
09:34There's a photographer I talked
09:36to who makes these big, monumental
09:38canvases of panoramic photography.
09:41And then he applies an AI layer
09:44that provides some interesting visual effects.
09:46But then he does traditional photo editing on top of that.
09:49I also talked to a writer who kind
09:50of takes the most bizarre outputs of AI systems and incorporates
09:54them into her writing and uses them as kind
09:57of a foil for things she writes.
09:58So she has the AI in a kind of conversation with her,
10:01which I think is interesting also, as a way to think
10:03about how artists can still create human-written
10:06and human-advanced art.
10:07So just as you wrote, we need copilots, not autopilots.
10:11Yeah, that's right.
10:13I have to say we need copilots and not autopilots.
10:14Yeah, so again, you wrote, I quote, "AI is actually bendable."
10:18So let's hope we can skew the odds in favor
10:23of those more positive outcomes, not the other way around.
10:26Yeah, absolutely.
10:27Thank you, Maiwen.
10:28Thank you so much.
