
WIRED

New Beginnings: A Conversation with Mira Murati

The former CTO of OpenAI made waves when she announced her departure from one of the tech industry’s most pivotal roles this summer. The technologist sits down with WIRED’s Steven Levy to share her perspective on the future of AI and her place in it.

Released on 12/04/2024

Transcript

[audience clapping]

[energetic music]

Are y'all having a good time?

[Audience Member] Oh, yeah. [audience cheering]

Yeah. Okay, great,

well, I'm excited about our next guest

in the big interview, Mira Murati.

She most recently served

as Chief Technology Officer at OpenAI,

where among other things,

she oversaw product teams building ChatGPT, DALL·E, Sora,

and contributed to advancements in AI safety, ethics,

and machine learning research.

For a few days, she was even the interim CEO, I hear.

[audience laughing]

She also handled the external relationships.

And I could tell you, I did a story on Microsoft recently

and unprompted,

any number of people told me how important she was

to the partnership

and how they enjoyed working with her

and how she made things go better for

that little company in the Northwest.

Prior to joining OpenAI, she managed the product

and engineering teams at Leap Motion

and led the design, development

and launch of vehicle products at Tesla,

including the Model X.

Is that all correct?

Yeah.

Well, welcome to Mira.

Thank you for having me.

[audience clapping]

Okay, Mira, in September,

you left OpenAI with a very generous and diplomatic note.

You say you were going to do your own exploration.

Now I've been reading unconfirmed reports

that you've been fundraising

and you might be working

with some other people from OpenAI.

Now, here's a spoiler alert.

I know from our prep you're not gonna be talking a lot about

what you're doing in terms of that,

but what can you tell us?

Can you tell us anything about what you're up to next?

I'm not going to share much about what I'm doing next

because I am figuring out what that looks like.

I'm in the midst of it,

but I can tell you a bit about what I'm excited about

[Steven] Okay.

in the future.

[Steven] Sure.

And yeah, generally I would totally ignore the noise

and the speculation externally.

I think actually there is just too much noise

and obsession about who is leaving the labs and so on

and let's focus on the actual substance of things.

But what I'm excited about is, you know, quite similar

to the set of things that I was working on earlier,

but perhaps from a slightly different angle.

I think I'm very optimistic about the future.

I think we are about to see immense potential

with abundance of energy and intelligence

and even meaning.

And I really think that we're sort of at this beginning

of an infinite age of curiosity and wonder

and deepening our knowledge about the world.

And the hardest part is gonna be figuring out

how our civilization co-evolves with the development

of science and technology as our knowledge deepens.

And I hope that I can continue

to contribute positively in that direction.

So your optimism,

I think is something I saw not only with you,

but other folks at OpenAI.

It was a company where people had a shared vision,

I think you might even have referred to it

as something spiritual at one point.

And it was centered around a belief that, you know,

humanity really was on this quest to take, you know,

what was suddenly possible to do

in developing digital technology

to achieve human-like and beyond performance

and things, you know, something called AGI,

as some people called it.

And that was within our grasp.

You did share that view

and I guess still do share

that we're approaching the point

where we can accomplish that, is that correct?

Yeah, but, you know,

just to kind of state the assumptions

of what it actually means,

I'll define AGI as sort of,

you know, a system that is capable

of learning how to perform at human level

across all cognitive tasks.

And you know, if you look at sort

of the past few years, this year,

we have systems that are capable

of PhD level performance.

[Steven] Yeah.

In many different domains

and before that we had college level performance.

And before that, just a couple of years

before that we had high school level performance.

So if you just look at this trend

and extrapolate it out, you know,

it's not for certain,

but it shows that progress will likely continue.

And it's not unreasonable

that in a very short time we could get to a system

that has a capability to learn how to perform at human level

across basically all cognitive tasks.

And right now this feels, I would say,

quite achievable, even if it doesn't take, you know,

even if it's not something that happens within a couple

of years, it'll take perhaps a decade,

maybe two, I don't know,

but it feels achievable.

Whereas I'd say, you know, even six years ago,

to me it felt more sci-fi.

And even though I was inspired

and we believed in this, what you call spiritual mission

and common vision,

it felt quite sci-fi at the time.

Whereas now we've made enough progress that we can kind

of see how the technology evolves.

When did you first begin to understand

that this was possible?

How did you, you know,

become involved in working with AI?

Was it at Tesla earlier in your education?

Yeah, so sort of by background,

I was very drawn to math and sciences early on,

and I went on to study mechanical engineering,

worked in aerospace as an engineer,

and kind of got a sense of how to build

and develop complex systems

in the world given real constraints and, you know,

systems that are safety critical.

But it was at Tesla where I got an intuitive feel

for how AI would really advance transportation.

And there I started to think more about

how it would affect other domains

and particularly how it would change our relationship

to knowledge and information.

And this is when I got interested

in exploring virtual reality, augmented reality,

and from the lens

of exploring the human-machine interface.

And while I was doing that, I was actually reading

quite a bit of Vernor Vinge,

and it was his essay on Singularity where he talks about,

you know, sort of likens our era

to a time where the change is so transformational

that it's quite similar to the rise

of human life on Earth.

[Steven] Right.

And it felt quite sci-fi,

but at the same time, it was enough,

it was grounded enough on real possibility.

And to me it felt like even if there was, you know,

2% chance of this being possible,

it would be the most important thing that I would do.

And OpenAI's mission really resonated with me, to,

you know, ensure that AGI would benefit humanity.

You know, you mentioned, you know, the VR company you worked

for, it seems, you know, like an interesting item

on your resume, almost like a side trip there.

I mean, if this interview were taking place

or if this conference were taking place like six years ago,

all people would be talking about was the metaverse, right,

and I don't think any

of the sessions here are about the metaverse, you know,

we're talking about AI.

Is this something you still believe in?

You know, that, you know, AR and VR

will be super important technologies?

Yeah, my approach to VR

and AR wasn't so much from a perspective

that I thought it would happen in that particular time.

I was more curious

to understand this next human-machine interface

and what that could look like.

I think virtual reality

and augmented reality have definitely advanced a lot

since then.

And yeah, I think we will definitely see great technologies,

but we will also see other interfaces.

You know, in talking about hype, some people feel

that AI has been crazily hyped

and they're somehow saying that at this moment, you know,

it shows that, oh, it's slowing down,

it's plateauing, is what they're saying.

And they're saying

that the next generation models

are not going to be the kind of leap we saw

from like, the generation of GPT-3 to GPT-4,

which was, you know, kind of like an astounding leap.

Do you push back against that?

Do you think AI is plateauing?

So I think one interesting observation is that people,

they adapt very quickly to these changes.

Like, you know, ChatGPT and Claude

and all the systems that we have today

that maybe they think they're not good enough,

they're not improving fast enough.

So I'll make that observation

and maybe that is a good signal for what's about

to come in our society's ability to adapt to more change.

But in terms of whether there is a plateau

or not, let's consider where the progress came from.

And a lot of the progress today has come from, you know,

increasing the size of the neural networks,

increasing the amount of data, increasing the amount

of compute that goes into the systems.

And we've observed this scaling law,

which is not literally a law,

but an observation rather,

that increasing all of these things predictably leads

to increased capability.

And in 2020 we saw this with text,

but since then we've seen it with a lot

of different data: code and images and video and so on.

So we've seen a lot of advancement coming from that.

And then another vector of progress

has been multimodality.

And there is also, you know,

we're just starting to see the rise of more agentic systems.

So I expect there is going to be a lot of progress there.

But then the question is, will this progress,

will these scaling laws lead us to systems

that are capable of performing at human level

across all cognitive tasks?

I would say that, you know,

current evidence shows that the progress

will likely continue.

And I don't think there is a lot of evidence

to the contrary,

but whether we need, you know, new ideas to get

to AGI-level systems or not, that's uncertain.

And also it's very possible

that we hit limitations in architectures

or, you know, other methods.

But then when that happens,

it turns out that--

[Steven] Yeah,

People will find a way around it

and there'll be new techniques,

and new optimizations.

So I would say it's uncertain,

but I'm quite optimistic that the progress will continue.

There are also, I'd say, yeah, the counterarguments

that I've heard are, you know, the data wall

and sort of the compute investment.

And on the data wall,

people are exploring things like synthetic data

where models generate their own data.

And on compute, if we look at the level

of investment on compute, you know,

this year companies are spending a billion dollars

and next year that goes up by a factor of 10 to 10 billion

and the year after that to a hundred billion.

So from capability perspective,

it seems like progress will continue.

But I think getting

to AGI level systems is not just about capability

and it's also about figuring out

how we make the systems aligned and safe.

It's about figuring out the entire social infrastructure

in which these systems are going to be operated.

[Steven] Right,

So that we can have a positive future

because this technology is not intrinsically good or bad,

it comes with both sides.

So you mentioned safety.

You know, I have to say when I was diving

into OpenAI, you know,

which was founded to build AGI safely,

the people I talked to in general,

I'm not talking about you necessarily,

they got more excited about the building AGI part

than the safety part.

Not that they dismissed it,

but that's what really got their motor running.

You know, they'd light up

when they talked about that.

Do you feel that we're paying enough attention

to the safety and, you know,

because there's kind of an arms race going on,

like kind of, there's literally, you know,

a race to be the best, you know,

this company besting another,

your model's better than my model, you know,

I've gotta fix that and race ahead.

Do you think we're overrunning safety,

you think we're paying enough attention?

I think that on practical safety,

we've actually made a lot of progress.

The work that OpenAI has done

on practical alignment has been incredible

and it has really led the industry.

And that's been very interesting to see

because it is also,

I think kind of like the market dynamics have pushed

everyone in the industry to really innovate in that vector.

But there is a lot of work

on more theoretical alignment

that I think we're lacking,

and not only that, also things like governance.

What does it mean to live in a world

with these AGI-level systems?

And also I think regulation is lagging behind.

Basically the entire infrastructure

that I would say civilization needs to coexist harmoniously

with this technology

is really lagging behind.

What worries you more,

the sort of problems we might have

and we're actually seeing with misinformation

or, you know, bias,

you know, and things like that.

Or the, you know,

longer term existential kind of threats

that people paint AI as having.

Some people have said that the existential threats

are brought up as sort of

like a distraction

because people aren't really building the safety stuff now,

you know, which worries you more?

I'd say both, but more perhaps on the longer

term safety questions,

because I think that there is

market alignment on the short term safety questions

around misinformation and bias

because, you know, it's not good for business

to have AI systems integrated in your business

that are making things up.

And so I think a lot of effort actually is already

and will continue to go into this set of problems.

But one area that's lacking that's more near term

is sort of the transparency and the AI literacy.

A lot of people, a majority

of the world doesn't have a good understanding

of what's going on.

These systems are black boxes.

And I think investing more

in the understanding of what these systems are capable of,

how they work, how we control them,

investing more in that direction,

giving people an intuition for, you know,

where they have control and where they don't

and also what we expect in the future.

I think those things are very important.

Okay, can you explain to me,

you've been working on these products,

why we haven't been able to get rid of the hallucinations?

So one way to think about it is,

yeah, actually von Neumann wrote an essay in 1955

where he talks about sort of the harmful

and positive aspects of a technology

and how they're always tied together.

[Steven] Yeah.

And he says, you know, I will just paraphrase

something like, These things are always tied together

and you almost cannot distinguish,

it's impossible to distinguish the lion from the lamb.

And I think hallucinations are like that where it gives you,

you know, this ability for the model

to provide very imaginative outputs,

but at the same time in a different context

that can be quite damaging

and harmful if you're operating in a context

where you need very accurate information in, you know,

legal context or medicine or so on.

And, you know, since the development

and deployment of LLMs in the real world,

we've developed new techniques like using tools,

using search, getting citations and so on.

But it's still something that we need to figure out.

You know, that brings up

sort of an interesting question.

I know, you know, OpenAI is struggling, indeed,

it's being sued over, you know, the IP

that's allegedly in the training sets.

Some people have suggested, you know, you talked earlier

of synthetic data

that that could be one way to get around that.

But it seems to me that the more we go down this path,

the more valuable, the trustworthy information is,

you know, like made by humans.

There was a study I read about recently

that talked about how if models are trained on, you know,

data which is produced by AI, it sort of, you know,

gets awful results

and you know, in each iteration it kind of goes

to meaninglessness, right,

which seems to put a premium on, like, human-created

content to put in the training sets.

So inevitably what happens?

You know, it winds up being some sort of licensing thing

for the best, most trustworthy models,

which then sort of, I guess, limits their world models.

How are we going to eventually deal with this IP issue

and have reliable, world-knowledgeable models?

Yeah, I think the answer to that is

probably very nuanced.

There is the aspect of, you know, how the laws evolve

with the dawn of this technology

and there's a question of that.

There is another question of how do you make sure

that the people that have contributed data

along the way are part of, you know,

somehow are--

Yeah.

Taking part in the benefits

and figuring out and innovating perhaps in business models

and understanding, doing more research

and understanding how a specific data contribution

leads to the model providing, you know,

a certain amount of revenue.

And another layer is definitely the research on the data

and figuring out what kind of data you can use

and pushing areas like post-training more,

which is, you know, you're using techniques

like reinforcement learning from human feedback

or you're doing reinforcement learning from AI feedback,

like the constitutional AI stuff,

or you're using other techniques.

But I think this is one area actually

that is getting more and more sophisticated

in the modern AI systems

and requires a lot of human feedback or synthetic data.

You know, finally,

I guess what happens if we do get AGI?

Your former boss says it's gonna be an age

of unbelievable abundance.

You know, the poorest person in the future will be,

well, better off than the richest people now.

You must have given this a lot of thought.

Where are we gonna be if we get this AGI,

which can match and exceed some human capabilities

and then learn to go beyond us,

what's that world look like?

I think that depends on us.

We have a lot of agency

for how things are about to evolve

and how civilization co-evolves with this technology.

I think it's entirely up to us, the institutions,

the structures we put into place, the level of investment,

the work that we do,

and really how we move forward the entire ecosystem.

I think right now there is a lot of focus

on specific individuals,

but the real question there is

how do all of us contribute

to this ecosystem to move it forward in a way

that's collectively positive

and that is really what will shape the actions

and constrain the actions of any specific individuals.

And I hope that more people focus on that,

this is not going to be up to a single company

or individual to bring AGI to the entire civilization.

Well, thank you very much, this is great.

Thank you.

[audience clapping]
