Eric Horvitz on The Robot Brains Season 2 Episode 15

 

Pieter Abbeel

With us today is Eric Horvitz, Microsoft's first-ever Chief Scientific Officer. Eric joined Microsoft through the acquisition of a startup, Knowledge Industries, and has been with the company since 1993. His research spans theoretical and practical challenges in developing systems that perceive, learn and reason. He's the company's top inventor, with over 300 patents filed. He has been elected fellow of the Association for the Advancement of Artificial Intelligence, member of the National Academy of Engineering, fellow of the American Academy of Arts and Sciences, and fellow of the American Association for the Advancement of Science. He was a member of the National Security Commission on AI. He co-founded important groups like the Partnership on AI, a nonprofit organization bringing together Apple, Amazon, Facebook, Google, DeepMind, IBM and Microsoft to document the quality and impact of AI systems on things like criminal justice, the economy and media integrity. Welcome to the show, Eric. So great to have you here with us. 

 

Eric Horvitz

It's great to be here, Pieter, and I'm looking forward to our conversation. 

 

Pieter Abbeel

Same here. So excited to chat with you. Maybe let's dive right into the topic that really piqued my interest this past week as I was catching up again on everything you've been up to, which is the National Security Commission on AI. What is that commission, and how did you become part of it? 

 

Eric Horvitz

So Congress established the National Security Commission on AI as an independent commission. The legislation actually said that it was being stood up to consider the methods and means necessary to advance the development of AI, machine learning and associated technologies to comprehensively address the national security and defense needs of the US. That was the definition; we had quite a bit of range in thinking through how to address that challenge. There were 15 commissioners, as we were called. And interestingly, in the legislation there was actually an algorithm as to who got to pick each commissioner. It surprised me that I was nominated to be a commissioner on the NSCAI study by Adam Smith, the congressman who chaired the House Armed Services Committee. He nominated Eric Horvitz, and I said, oh my gosh, this is an interesting responsibility, let's dig in here. It was great. Other commissioners I worked with, your listeners will know well: Andrew Moore, now at Google and former dean of CS at CMU; Steve Chien from JPL; Ken Ford; Bill Mark. And interestingly, some other commissioners were corporate leaders. Andy Jassy, whom I got to know quite well, is now the CEO of Amazon; Safra Catz, CEO of Oracle; Bob Work, the former Deputy Secretary of Defense. It was quite an interesting, interdisciplinary group of academics and folks from industry and government. And when it did its work, it wasn't just this group thinking through things on its own. We brought in quite a few folks to provide expertise, folks from the AAAI organization, and multiple colleagues came to answer questions and provide input. 

 

Pieter Abbeel

Interesting. Now, you were charged with effectively studying the security implications of AI, right? How do you go about that, and what are some of the main findings and recommendations that came out of this study? 

 

Eric Horvitz

Well, let me just say that we took the phrase national security to mean something quite a bit broader than what you'd call security considerations: the vitality of our research endeavors, our industry, our education, our ability to attract top talent from throughout the world, making appropriate investments in R&D. So it turns out that the security and defense aspects are important, but a subset of the larger scope of what we deliberated about. We broke the study into several lines of effort. The two that I was most active in were the future of R&D, where we need to make investments in AI, and trustworthy AI. The first was pretty fascinating to deliberate about; you're asked by the United States Congress, give us recommendations on where the science is going. The second meant thinking deeply about principles of robustness, notions of ethics, appropriate engineering practices and so on. It's a very large document, at NSCAI.org. But if you go to the chapters on R&D and the future of where the science is going, as well as the chapters on trustworthy AI and civil liberties, you'll see some of the work that my particular subgroups were really focusing on, including the topics of where AI is headed. This document has had quite a big impact, more than I would have expected, with scores of bills now in Congress, and I think folks may have heard about some of them. One direction coming out of the R&D work we did, recommending significant bolstering of funds for basic science research as well as applications and technology development in AI, has led to a call in a bill right now to significantly increase funding to the National Science Foundation for computer science, particularly focused on AI and related technologies. 
There's the standing up of a task force for the National AI Research Resource, which is in full operation right now, trying to spec out what we need to do to provide the kind of resources that only a few companies have now for university-based research in the United States. Another recommendation, that NIST, the National Institute of Standards and Technology, should be leading efforts on standards, has led NIST to develop a risk management framework; that came out of a call from the National Security Commission on AI. One interesting recommendation in the document suggested that we need to add a new directorate to the National Science Foundation, a technology directorate. You may have heard a little bit about this; it's now being discussed, and the potential funding for that significant new directorate is in a bill before Congress. 

 

Pieter Abbeel

And how do you distinguish technology from science? Where are you drawing the line when you're making that new directorate? 

 

Eric Horvitz

Interesting question. We had lots of discussion and debate about that because today most of the directorates, well, let's just focus on computer science. CISE, the Computer and Information Science and Engineering Directorate, has engineering in the title, and they fund research projects that span basic science and theory as well as various kinds of prototypes and artifacts that are created and tested. And there's a sense that with fielding AI technologies in particular, quite a bit of the effort in translation is, surprisingly, not in the core science. Of course, you need that enabling science to understand how to do things, but there's so much that goes into it: the challenges of integration and translation. Just think about healthcare. We've had great diagnostic reasoning tools since the late 80s, and before, that you would call AI, even if it wasn't today's AI. We had systems in my startup that we were fielding and selling to healthcare organizations that were running at the level of top experts in pathology across all four areas, for example. Yet it was hard to translate them into actual practice, to work them into the workflow, for example. So you might say there are different aspects of engineering, but one piece of this is how you take a great AI innovation and think about the engineering you need to do to integrate it with fluidity into the status quo world, to make it a real positive presence that changes things for the better. And I view that as a hard technical challenge approaching the science itself, right? 

 

Pieter Abbeel

And one thing that also comes to my mind, and I like the example you gave, and I'm curious if it would be part of how you think about it, is things like building a great open source repository that captures the latest advances in a certain direction. Because for a lot of code in AI, somebody writes a paper, there's a one-off code release related to that paper, but there's no continuity. For many areas of AI it's really a sprawl of different things, versus a clear repo you can go to and work with. 

 

Eric Horvitz

Yeah, so if you look at the chapter we did on trustworthy and robust AI, we talk about engineering issues around the machine learning lifecycle: documenting the data appropriately, understanding metrics for doing testing, how you document which algorithm you used, how you maintain a system over time, how you understand issues around distributional drift, how you continue to test. You might say, well, these are part of our science, but to do this well takes infrastructure, best practices, and execution in the real world, the open world, which is messy in a way where engineering plays an important role. And it's an area where we can do a lot better, as the world, as scientists, as engineers and as companies, figuring out and making deeper investments in what I would call engineering. 
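The distributional-drift monitoring Eric mentions is the kind of lifecycle check engineering teams can make concrete with a metric. As a rough sketch, here is a population stability index comparison between training-time data and live inputs; the metric choice and the 0.2 alert threshold are illustrative conventions for the example, not anything prescribed by the NSCAI report:

```python
import numpy as np

def population_stability_index(reference, live, bins=10):
    """Compare a live feature's distribution against the training-time
    reference; a large PSI suggests distributional drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    live_frac = np.histogram(live, bins=edges)[0] / len(live)
    # Floor each bucket's fraction to avoid log(0) on empty bins.
    ref_frac = np.clip(ref_frac, 1e-6, None)
    live_frac = np.clip(live_frac, 1e-6, None)
    return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)    # distribution seen at training time
drifted_feature = rng.normal(0.8, 1.2, 10_000)  # the open world has moved
psi = population_stability_index(train_feature, drifted_feature)
# A common rule of thumb (again, just a convention): PSI > 0.2 warrants a look.
print(f"PSI = {psi:.3f}, drift suspected: {psi > 0.2}")
```

A monitor like this would run on each input feature over time, which is exactly the "continue to test in the open world" practice the report's trustworthy-AI chapter argues for.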

 

Pieter Abbeel

I couldn't agree more, Eric. And I hope this goes through and your recommendations get turned into reality. 

 

Eric Horvitz

Right. And I just got off a call yesterday with the NSF, and there's excitement there about what this will mean, should it happen, in building out a new potential directorate. 

 

Pieter Abbeel

Really cool. Now, maybe a bit of context, because I think a lot of our listeners don't necessarily apply for NSF grants or know what's involved. I just looked it up again, because I had a sense for it but wanted to make sure I had the right number. In IIS, which is the AI part of NSF, the funding rate of grant proposals for 2021 was 16 percent. So for every 100 proposals submitted, only 16 are funded. And if you've ever served on any of those panels, usually at least half of the submissions are great, sometimes even more than half. So 50 or more out of 100 are great, but there's only funding for 16 out of 100. 

 

Eric Horvitz

Let me just say that those numbers were explicit in our conversations about what this nation might be doing in, kind of, a new Sputnik era when it comes to AI advances around the world, and where we stand with our leadership in this space, for the vitality of our technology, the education of our top talent, and the systems we build and field. You know, Henry Kautz oversees IIS right now; he's a good friend, and I was on a call with him. And the idea of doubling the number of top-prioritized proposals you could fund would be a game changer. 

 

Pieter Abbeel

Absolutely. And actually, our audience will be familiar with Sergey Levine from the first episode of this season. When we asked Sergey, you know, if you could get a robot to help you with anything in your life, what would you want that robot to do for you? And guess what he said? He said, write my grant proposals. 

 

Eric Horvitz

That’s amazing. 

 

Pieter Abbeel

Yeah, it’s a time-consuming thing to support your lab. And I think the other thing that's really interesting in what you're saying is the support for education. The science happens, but almost 100 percent of the money from these grants goes to supporting the education of PhD students and postdocs. That's where it goes. And so it has a big effect beyond just the science itself. 

 

Eric Horvitz

And in the National Security Commission on AI report, there's a whole chapter on talent and education in this nation. You know, one of the comments was about trying to counter this fortress America approach. You hear from some folks who believe that if we put a fence around the United States, we could protect our technology and so on. But the idea of openness, being a world player, and being a top talent magnet for folks around the world is so important to our country's vitality. And I say our country's because the focus of this document, commissioned by Congress, is all about the United States and where we stood. 

 

Pieter Abbeel

Now, you worked on this commission, which was a US commission. You also have an independent effort that you co-founded, the 100 Year Study on AI, whose second report just came out recently. So what made you decide to do that? Because at core, I've always known you as somebody who loves to do research, loves to dive into the technical specifics, loves to work on the next project. But there you are, serving on the security commission and founding the 100 Year Study. What's driving you to effectively take time away from your research and start the 100 Year Study on AI? 

 

Eric Horvitz

It's interesting. We just had a dinner recently; we got tested with a very advanced method to make sure we were omicron-free, but we went into this dinner, a live, 3D meal together, talking about the role of AI researchers when it comes to socio-technical issues and responsibilities. And one topic that came up was, you know, we could be researchers and just focus on the hard, interesting technical challenges and inform somebody else, somebody who might be interested in being an activist, to take action in the world. And we went back and forth about the importance of the AI scientists themselves diving in, taking this seriously, and using their knowledge and their experience with the nuances, the frontier topics, and the borders of understanding to help guide those discussions. And I tend to fall in the latter camp. I think it's really important, given the importance of our field in the world now, and more generally across computer science, for folks in the field, experts in different areas, to dive in themselves and think through what they're doing and the implications, because there's probably no one better to do that kind of work. And just going back a bit to where this 100 Year Study came from: I was AAAI president in 2008 and 2009. AAAI, it's kind of a long process; you do two years as president-elect, two years as president, and then two years as past president, and that job is even harder in some ways, you know, fellowship committees and so on. During my presidency, I decided to make the theme of my term in office AI in the open world. It was a time, 2007, 2008, 2009, when AI was becoming a real technology and starting to influence actual decision making in high-stakes areas. And so in my presidential lecture, I called out the technology: first of all, what does it mean to go open world with our technology? 
There's a famous old problem in AI, the frame problem, which captures this challenge: do our systems know enough about state to take action in the world? Do they know the implications of their actions, given their bounded representations and resources for computing and models? In my talk, I largely talked about the technical issues, because I had been very excited about what it all means, and it brought me back to early readings of Herb Simon, for example. What does it mean to be a very limited, by definition, processor and reasoner in a really complicated universe? How do you do the best you can? How do you go bounded-optimal with your reasoning? So in my lecture I tried to energize the audience that there's a technology here, a set of technical principles we want to think about, on robustness, on systems that know what they don't know. I quoted the famous Lao Tzu line about real knowledge being knowing what you don't know. Can we build systems that have this ability to be humble when they need to be humble, that know how to gather information actively where they need more information, and do the best they can? So there's a technical side to this. Then there was AAAI itself, which was kind of a closed group. You'd have to log in and be a member to even get access to papers. And I made it the mission of my presidency to change that and turn that around, which we did against all this pressure, like, oh, we're going to give up all of our membership because people don't have to be members to get access to the papers anymore. Well, we did lose some membership over that, but I think it was worth it. And in the third year I said, we're going to do a study. We called it the first Asilomar study, and I chose Asilomar for symbolic reasons, for the biomedical work that had gone on there, recombinant DNA in the 70s. 
We basically created a presidential panel on long-term AI futures, as we called it, with three working groups. You can go to the page on AAAI and see who was there at the time. It's a fun group, people who I thought were the top leaders on the technical side who would care. And we had three different subgroups: one on short-term disruptive influences of AI, and one on the long-term future. At the time, the singularity was being talked about, Ray Kurzweil was writing his books, and there was all this dystopian and utopian debate going on about where AI was headed. And finally, we had a special breakout on ethics and legal issues. It was such a useful meeting. We met for a few months and then went to Asilomar for three days. It was such a great meeting that about five years later, I was talking with some folks from that meeting and we realized we had done it in 2009, and here we were in 2013, I think, four or five years later, and we said, we should do that again. And we thought about how to pull that off, like having that same kind of group and mix. And it hit me that, you know, this is, of course, induction: why not endow a study that will go on forever? Track this technology, stay ahead of the crashing wave of where it's going, and help to guide thinking, research, mitigations where necessary, and interdisciplinary engagement. And so in talking with my wife, Mary, we thought this might be something we should invest in and help to found, and we were looking for a home for it, you know, the idea of an endowment that would go on for a long time and define a process at least as high quality as that first AAAI meeting. And we talked to John Hennessy at the time, and, you know, most people thought we were kind of crazy doing an endless study. But John Hennessy's reaction, as president of Stanford at the time, 
He said, oh my god, this is great, we'll just do it. And we had development offices getting kind of concerned: how do we pull this off? We can't guarantee this will happen forever, and we're calling it a 100 Year Study. But that's just a name. The commitment from Stanford is that this will happen every five years for as long as Stanford exists. And as John Hennessy said, we hope that's a pretty long time. So we just hit the second study. It's interesting to see how it's gone; I think it's gone well. We talk about the second study being even more important than the first because, as I was saying, two points define a kind of trajectory into the future as to how this process is going to go and how valuable it's going to be. We can talk more about the actual study and how it works. But to kick it off, what I did was write down a little document. My wife and I thought about things we cared about and should be caring about, in part informed by the first Asilomar study. And in the document, we called out 18 topics that we think will be evergreen, important issues to be looked at carefully. And several of them turned out to be more important, and came to the surface a little faster than we thought: challenges like AI and democracy, a topic I'm looking at right now, democracy and freedom, AI and warfare, criminal uses of AI, collaborations with machines and so on. You can read this list of topics. I think they each will endure, helping to frame studies for generations to come. I've often thought about what's going to happen, you know, knowing that Stanford has made this commitment and we have a process with standing committees and study groups. There will almost definitely be a report in, like, you know, 2085. What's it going to say? 
And to just even imagine that report, with the goal of helping to guide AI, summarize where things are going, and help with understanding its implications and influences in government, academia and industry. I'd love to read that report now. You can’t imagine what it’s going to say. 

 

Pieter Abbeel

A time machine, yeah. Now, I'm staring at the report. Actually, it's right in front of me, and it's really intriguing, all the questions, and the report is really comprehensive. For anybody who wants to learn more about the state of AI today, I mean, it's not a short report, but it's not crazy long either. It's 82 pages that I have in front of me, and it has clear sections, right? There's a section on what are the most important advances in AI, what are the biggest grand challenge problems, and the list goes on. And one very interesting one I'm curious about is, of course, how should we inform and educate the public? Because that's part of why I'm excited to do things like this podcast: to get information out there. So I'm curious, what were some of your conclusions in that context? 

 

Eric Horvitz

Well, I can give my own opinion on this. I was actually an ex officio member of the standing committee for the first report, and I stepped back for the second report, more as a consumer watching how it goes. The report asks questions including, how are we doing with building out more general intelligences? I'm curious to see the answer to that question over the next 100 years. Then there's the AI Index, which is a separate project that was spawned by the 100 Year Study. It's an annual report looking at issues like metrics; you can see the curves every year and how we're doing on various benchmarks, or new benchmarks. You can get to the AI Index at the same place, ai100.stanford.edu. They also publish a list of nine key takeaways, as they call them, which are all fascinating to look at, too. So those two projects sit side by side: one on a five-year process, and one with annual metrics and key takeaways. 

 

Pieter Abbeel

Yeah. And actually, I recently got a survey from the AI Index, and I'm curious how it will come out, because it's very comprehensive. It's not just some people sitting in a room. They sent me a survey with some very specific questions, in this case about each of the past five years: if you bought a robot for your lab at Berkeley, how much did you pay for that robot, and which robot was it? So I mean, this is not just people ideating in a room. This is actually collecting information and trying to really measure the things that are happening, and from there, of course, identify the story around it, of what's happening. 

 

Eric Horvitz

Yeah, I mean, I was just looking at the top nine takeaways that came out of the report recently. And to see, you know, the details of what's being called out: AI investment in drug design and discovery increased significantly, with details on the dollar investments; the industry shift continues in where graduates in North America with PhDs are going; China overtakes the US in AI journal citations, and so on. So it'll be interesting to watch every year from the Index, which also informs the AI100 study, the main 100 Year Study reports. By the way, Michael Littman did a fabulous job as study panel chair this year, as Peter Stone did five years ago for the first report. 

 

Pieter Abbeel

Yeah, and a big part of it is finding people who are willing to commit the time, which obviously takes time away from other things. They're typically already very busy, and then they have to cut something out of their schedule to make time for this. 

 

Eric Horvitz

Yeah, to be honest, Pieter, that was the uncertainty about both the original Asilomar study that AAAI sponsored when I was president, and about whether we could pull off the AI100, the 100 Year Study on AI: would people believe this was so important as to really take out a chunk of their time and invest? And it was pretty clear to me that it had to be viewed from the get-go as worth it, as high quality enough, as impactful enough. If you're invited to be on something, you want to know who else is in it; this is like the reverse of the Woody Allen joke. You want to join the club because of who else has joined, rather than refusing any club that would have you, which is the healthier psychological perspective, I think. The idea was making it such that the people we invited, given their technical depth across disciplines as well as the impact they've been having in the world, would want to join and be part of it, and say in the future, oh, I did that too, I was on the 2085 study panel. 

 

Pieter Abbeel

Yeah, I'll be excited if I'm sufficiently productive still in 2085 to be invited to that panel.

 

Eric Horvitz

You are already there, Pieter. 

 

Pieter Abbeel

Well, but the exciting thing is if I can be productive for that many more years. 

 

Eric Horvitz

Oh, I see you could be alive and healthy enough to be on the panel. I see.

 

Pieter Abbeel

I would probably sign up for that if it could be achieved. Now, switching gears for a moment here, Eric. You're the Chief Scientific Officer at Microsoft, and that means you directly advise the CEO, Satya Nadella, on everything science, and I imagine a lot of it is AI. So I'm really curious about your conversations and how you see Microsoft and Satya thinking about the future. How much does AI come up? You know, are you asked the question once a year, or is this actually a really active thing? 

 

Eric Horvitz

Let me just say that I work with Satya, with Kevin Scott, who is CTO at Microsoft, and with Kevin's leadership team, and advice and guidance on AI is not just coming from me. AI has become quite central; other areas, like security, privacy, storage and compute efficiencies, are competing for attention, but AI has been rising in centrality at Microsoft. In some ways it's a foundation of many things happening at the company. It's amazing to see, for example, large-scale neural models, platform models or foundation models, and the fine-tuning of these models now being leveraged by almost all of our divisions. It's amazing to see how they're being harnessed and fine-tuned in offerings in Office, Dynamics, the cloud. And I should say that it's not just taking models and harnessing them; it's really pushing on the frontiers of building out some of the largest neural language models, vision models and multimodal models that have ever been created, models that are winning at the top benchmarks, for example in competitions, a number of which have reached, by some definitions, human-level and beyond performance. And that goes along with the data, the infrastructure and, back to engineering again, the engineering issues, like DeepSpeed, which splits jobs across cores, for example. So it's become quite central at Microsoft. Now let me say that beyond advising, it's more like collaborating. You mentioned Satya Nadella, and Satya and Kevin Scott are quite expert and follow things closely. Satya I've found to be a brilliant engineer and computer scientist. He reads deeply. He focuses tightly on AI advances. He leads intensive cross-group meetings, asks deep questions, throws out incredible challenges and directions in real time, and he actually seeks follow-up on where the ideas land. And so, while the subject header of a meeting that Satya happens to attend may sound like a business meeting, these really are like graduate seminars at a leading university. 
But with, kind of, a deep focus and energy that comes with shipping products to millions of customers. I wish there were a way to show some examples of how these meetings go. You know, we have this one meeting every year called the Disruptive Technology Review. At that meeting, Microsoft Research folks think through what the key disruptive technologies are that could disrupt us, or that we can disrupt the world with, in terms of big changes, for example. And when we have it, we invite an outside speaker, and we've had some fabulous outside speakers in the past. Mike Jordan came one year, Josh Tenenbaum another, so that's an experience I think they've seen up close as outsiders. But I'd like to share more of that kind of thing. 

 

Pieter Abbeel

Yeah, I'd be very curious. I mean, I don't know how feasible it would be to record one of these, or some of these, and put them online. There might be some intricate company internals that, I guess, would need to be removed. But it would be interesting. 

 

Eric Horvitz

Yeah, over the last decade, I'd say there's been more of an electric spark about AI at the company. And it's warranted, because of the importance and relevance of these technologies, how we leverage inference and machine learning as it's grown and matured, in so many aspects of products and services you can just imagine. I mean, I remember just a few years ago it seemed sparse. We built the first spam filter, which was doing machine learning, and we were so excited because machine learning was being used on the platform worldwide, you know, and we celebrated that. Then a few years later we did email prioritization and, you know, smart email kinds of applications. And I remember, a few years later, being so excited that our team got to work with the Operating Systems Group to do machine learning in the OS. You know, Windows was a bounded-resource operating system out in the world, and we figured out how to do memory management in a personalized way for users. That shipped in Windows 7, and it's been iterated on over time. So, you know, if you run a Windows machine, it's prelaunching and prefetching in the background all the time, based on predictions of what you're going to do next. Staying ahead of the crashing wave of need. 
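The prelaunching idea Eric describes, predicting what a user will do next and fetching ahead of need, can be sketched in toy form. This is just an illustrative first-order frequency model, not the actual Windows prefetch logic; the class name and the confidence threshold are made up for the example:

```python
from collections import Counter, defaultdict

class NextAppPredictor:
    """Toy first-order model: predict the next app from the current one,
    based on observed launch sequences."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, launch_sequence):
        # Count each (current app -> next app) transition we see.
        for current, nxt in zip(launch_sequence, launch_sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current, min_confidence=0.5):
        counts = self.transitions[current]
        if not counts:
            return None
        app, n = counts.most_common(1)[0]
        confidence = n / sum(counts.values())
        # Only prefetch when reasonably sure; a wasted fetch costs memory and I/O.
        return app if confidence >= min_confidence else None

predictor = NextAppPredictor()
predictor.observe(["mail", "browser", "mail", "browser", "editor", "mail", "browser"])
print(predictor.predict("mail"))  # prints: browser
```

A real prefetcher would weigh far richer signals (time of day, recency, memory pressure), but the shape is the same: learn a model of the user, then spend idle resources on the likeliest next need.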

 

Pieter Abbeel

Now that's really interesting, predicting what you're going to want to retrieve from the slower memory on your computer, so that it's already there. 

 

Eric Horvitz

And it goes beyond memory management. You can imagine, when you bring up a web page, if you can control a CPU at microsecond granularity, you know at that moment that a human being with a very slow cortex is going to be reading that page, so you can back off the processors and then come up again right in time for that click. So there's lots you can do with energy efficiency, as well as with latency in systems. 
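That burst-then-back-off pattern can be sketched as a tiny simulation. Everything here is hypothetical: the function names are mocked-up stand-ins for real frequency-scaling calls, just to show the shape of the policy:

```python
import time

events = []  # record of what the "governor" did, for illustration

def render_page():
    events.append("render")

def set_cpu_level(level):
    # In a real system this would adjust processor frequency or core parking;
    # here we just record the decision.
    events.append(level)

def render_then_throttle(expected_read_seconds, ramp_up_margin=0.05):
    """Hypothetical governor: burst to render the page, back off while the
    (slow) human cortex reads it, then ramp up right before the expected click."""
    render_page()                  # full speed to get pixels on screen
    set_cpu_level("low")           # human is reading; processors can back off
    time.sleep(max(0.0, expected_read_seconds - ramp_up_margin))
    set_cpu_level("high")          # come up again right in time for the click

render_then_throttle(expected_read_seconds=0.1)
print(events)  # prints: ['render', 'low', 'high']
```

The interesting design choice is the ramp-up margin: come back to full speed just before the predicted click, trading a little energy for zero perceived latency.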

 

Pieter Abbeel

Now, talking about energy efficiency and compute: of course, you mentioned the big language models. I think to many people the biggest revolution in AI in the last few years is the fact that if you train a language model on enough data, and your model is large enough, it will really surprise you. Maybe less so now that we've seen it many times, but it would really surprise you how good it is at predicting how to complete an article, things like that. But that requires tremendous compute, right? And of course, Microsoft has a large cloud. But I'm curious how you see that playing out. Is the cloud Microsoft has even large enough? How fast does it need to grow to stay at the frontier? 

 

Eric Horvitz

We realized several years ago that this was going to be the case. We saw, you know, the curves and the need for compute and began a major set of initiatives. And this includes our work with OpenAI to think through, prepare, and be ready to lead at the frontier with large-scale neural models. I think we have to continue to make investments. Right now, some of the largest supercomputers, we'll call them AI supercomputers, in the world are at places like Microsoft, Google and maybe a couple of places internationally. And that's also a concern, because we know these resources, which are required in some ways to do the frontier research, are not available to university-based scientists and students. And I know that in our meetings at Microsoft, we feel pretty strongly about the need to democratize and not hoard these systems. Thinking through this is why we've been very much a supporter of significant government efforts to figure out how to support university research, as well as leaning in ourselves with programs like the Microsoft Turing Academic Program, where we share our models, and compute time, with universities through a request-for-proposals process. We're not sharing them freely yet because of concerns about potentially offensive uses, such as disinformation, and poorly understood capabilities, so we want to keep them under wraps right now but work under license or under proposal with teams of researchers. But it's going to be a challenge for research moving forward, and one that we've reached out to the National Science Foundation about, you know, having a dialog on what it would take to have a models-as-platform program for the nation.

 

Pieter Abbeel

Yeah. Do you have any thoughts on it? 

 

Eric Horvitz

Well, I think it's going to take some significant funding, and the way I've described this to National Science Foundation program managers and leadership is that we are at a disruptive time for computer science and the science of AI, for people building large-scale neural models. And there are plenty of other things to be doing in AI. But for people interested in issues around emergence, generalization, contrastive learning, and other kinds of methods, like the equivariant approaches that are coming to the fore in research, we really need a Large Hadron Collider, national-lab approach to this: building huge artifacts and thinking through the problems we want to solve jointly, because it's going to be very expensive. And this is despite the expectation that there will be breakthroughs in more efficient computing and better representations that make things more efficient. But I think it's likely going to take a reckoning that we're in a new world now. And if we want to stay out in front and have our doctoral-level research and our university scientists cruising along at the frontier, there will have to be some sort of public-private partnership and major public investment.

 

Pieter Abbeel

I'd love to see that happen, Eric. I know exactly what you're talking about. It's a complete change in many ways. It used to be that to run a lab in academia, you had to find support for your students, for your PhD students. That's hard enough; the funding rate at NSF is low enough as it is. Now you effectively have to add another 50 or 100 percent in cost on top of that to support the compute, but it's not clear where to go get it or whom to ask. I think the NSF is not set up with the right budgets at this point to also have that available.

 

Eric Horvitz

It's almost like, imagine you were a physicist in the 1920s. What was it like when at some point we realized we needed colliders and cyclotrons and, you know, accelerators, and these were to be major investments, like CERN, Brookhaven, and so on? I think for certain aspects of AI, we're getting to that point, where, driven by commercial opportunities and needs, the CERNs and the Large Hadron Colliders are being built inside just a couple of companies.

 

Pieter Abbeel

Now, talking about companies, switching the conversation a little bit, Eric. You've been at Microsoft for a long time. In fact, it's very uncommon for people to be at the same company for so long. It's also beautiful, because you've seen it all, from your startup being acquired right out of your PhD at Stanford to still being there now. When we look at the evolution of Microsoft: obviously, Microsoft was the leader in personal computing, won the browser wars in the late 90s with Internet Explorer, and was doing really, really well. But then, at least from the outside, it seemed Microsoft was not doing as well for a little while in the 2000s. It wasn't coming up as often as, you know, an Apple or a Google, and it's not even in the acronym FANG, which is supposed to refer to the top tech companies. Maybe Microsoft is happy not to be part of it, because there are a lot of people who don't like the top tech companies. But Microsoft is not in the acronym FANG, even though in the last five to ten years it's really changed. Microsoft was the second company to become a two-trillion-dollar company, and it is the second largest after Apple, right? So how did you see that process from within Microsoft, from being such a leader, to being, at least from the outside, pushed a bit into the background, and then now coming back out at the top?

 

Eric Horvitz

I'll first say that when our startup was acquired and we had this incredible offer to help build MSR, Microsoft Research, my two co-founders were so excited. My reaction was, I'll go up with you, but I'm staying max six months, then I'm out. I wasn't going to have a life at this weird company that had, like, Microsoft Windows 3.1 and Word. I couldn't even imagine why they wanted a research team when I first went up there. But I was very impressed, and I have a pretty high bar for staying anywhere. So if I've been here almost 28 years, it means something. And while it's one company, it's been constantly changing and evolving in a very interesting way. So let me first say that I've had the honor of working for three CEOs during my tenure: first Bill Gates, then Steve Ballmer, and now Satya Nadella. I've enjoyed each of their intellects and distinct personalities and energy. All quite different, but all sharing an incredible passion and optimism for how computing can change the world. And I would say that Microsoft has been doing okay to great at all times, even when it was viewed as slower growth. It was always a place packed with energy and innovation inside, doing quite a bit in the world. I think it's true that some of the things Satya and the leadership team have been doing over the last number of years have unleashed even more energy and sparks, and things are really flying. You know, when Satya came into his role, the level of reflection about purpose that he brought to the company was very energizing. He actually asked the question: what are we here to do? What is Microsoft here to do? Why do we exist? Thinking deeply about mission and possibilities, looking at our role at this point, a very serious one with a big responsibility to deliver value to people. You know, I have to say that there's a psychological aspect.
The way Satya and the leadership team have communicated, you know, on the possibilities, but with some courage about the softer, more powerful aspects of motivation and culture, sharing experiences. I mean, I've left executive retreats at times, meetings of the top leadership, with my fists clenched and, you know, tears in my eyes about what we might do to make people's lives better. And I didn't feel that way in the past. I mean, I've always been really passionate, but to really think about, okay, this is a computing revolution we're all experiencing, let's really go for it, and this is really, really serious. And some of the outside experts that work with Microsoft, like the economist Colin Mayer, with quotes like, let's discover profitable solutions to the hard challenges of people in society, as, like, the raison d'être. That level of refresh has really had an effect.

 

Pieter Abbeel

You co-founded the Partnership on AI, a nonprofit organization that brings together Apple, Amazon, Facebook, Google, DeepMind, IBM and, of course, Microsoft, to document the quality and impact of AI. What spurred starting this partnership? And how do you even bring what are really competitors together in this single effort?

 

Eric Horvitz

So the idea was to stand up an organization and not have it necessarily be led by industry, even though we brought all the big IT companies together via their research teams and research leads, but to build a board of directors that was always going to be balanced, starting out as half the corporate founders and half nonprofit, philanthropic and civil society organizations. So it's been a really interesting, diverse discussion and a diverse approach to thinking through the short-term and longer-term issues coming to the fore, with best practices, experiences, case studies and analyses, and some really high-quality research projects, particularly ones where the results and summaries are aimed at people in multiple disciplines. So lawyers can read about facial recognition, for example, or criminal justice decision making. In 2016, right around the same time I was working with Yann LeCun and Demis on pulling together what came to be named the Partnership on AI, PAI, I was also working internally at Microsoft with senior leadership on questions about our responsibility and set of processes around AI technologies when it came to safety, fairness, security, and privacy issues with data-centric systems, but AI in particular, as these systems edged into the realm of human intellect, places where people made decisions in the past. A company fielding products and services might have quite a bit of responsibility of its own. And we stood up what we called the Aether Committee, which stands for AI, Ethics and Effects in Engineering and Research. And we set up a rich set of working groups and also worked as a company, and with Satya in particular, on what Microsoft's AI principles should be to start with. We came up with six that have stood the test of time: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
And along those lines, we set up working groups, typically led by a Microsoft researcher, co-chaired with some help from the product teams, with a vibrant group on each topic. Additional groups included one on human-AI collaboration called HAIC, Human-AI Interaction and Collaboration, and the last group was a kind of process group, an interesting group called Sensitive Uses. And the idea there was to define: what is a sensitive use of artificial intelligence technologies? How do you define that? And then how do you put a process in place at a company like Microsoft where sensitive uses are identified, either ones that exist now that we have to analyze, or a development process that is heading towards a sensitive use, and it's reviewed, and there's a committee that thinks through and makes recommendations as to if, when, how and why that application is being built, and can modify where it goes. There's a longer story here. It's now been nearly five years of effort, but about two years ago, the Aether work, which was a committee as well as working groups, processes, projects and the sensitive uses process, came to be joined by a formal organization called the Office of Responsible AI, and co-exists with it as a sister group, so Aether and ORA. Aether I look at as the intellectual leadership on responsible AI at Microsoft; ORA is compliance, and we moved sensitive uses into that now. It's a full process at the company. And together we wrote what's called Microsoft's Responsible AI Standard, now in version two. I think that's public, or will be public at some point. But it's a very detailed document that the company's divisions must follow when conceiving, building and fielding AI products.

 

Pieter Abbeel

It's so interesting. I mean, anybody at Microsoft who brings AI into a product effectively has a process now to make sure sensitive aspects don't get overlooked.

 

Eric Horvitz

So it's been a really interesting process, and one that we're trying to share. We'd like to share it with other companies and learn from them, as well as from other organizations, for feedback over time.

 

Pieter Abbeel

Well, I think it's really interesting you brought this all up, Eric, because when you just follow the press, you get the stories where a process like this wasn't done right, where things were released that shouldn't have been released. And you don't necessarily get informed about the good processes that are in place, which don't catch the headlines because it's not nearly as striking when things go well. But there's a lot of work behind the scenes to make sure things do go well.

 

Eric Horvitz

I should say that, look, we're all learning. It's a challenging space. You know, we haven't talked a lot publicly about what we do just yet. I've given some talks, at Chatham House and other places, about how we're working and trying to learn, about how a large company grapples with this technology and tries to do the right thing with it. But there are cases, just to throw one example out without going into detail right now: all of a sudden there's neural TTS, neural text-to-speech, where anybody can speak a few paragraphs into a microphone, and within seconds a neural model can drive a voice so close to your own that it could call your significant other and have them write a check. You know, that's powerful technology, and we're thinking about how we would not deploy this generally, but thinking through all the steps we'd go through to make it available, for example, to the BBC for a moderator or an anchor or newscaster to use, but not make it available for malicious uses, and how you can control that in a way that makes sense, as well as how it would protect people, so that even people who have long passed away don't have their voices used without appropriate agreements in place. So you can take any one of these sensitive use cases, and it's just so rich and interesting as to what we should do as a company, as an organization, as society, as the United States, as law-abiding nations with these technologies.

 

Pieter Abbeel

I think that's really inspiring. And when you look ahead, Eric, and you think specifically about AI, where do you see some of the big opportunities to really bring value to people, thanks to AI getting taken to the next level, and the next, in the coming years?

 

Eric Horvitz

You know, I started out as a neuroscience PhD, and I moved over to AI because I just thought we wouldn't make progress otherwise. But the mysteries of how these graphs, whether cellular or computational, can have the power they do, with existence proofs from humans and other vertebrates, have always been startling and motivating for me. So first of all, I mean to say that I think there'll be a continuing march in representational power, the power of various kinds of generalization, leveraging invariances and equivariances. I'm really excited about where multimodal models that bring language and vision together will be going. I think that's a really interesting opportunity for AI: for our models and methods to capture new kinds of abilities in the representation of utility considerations, and abstract understandings along the lines of what we care about in the world. But I also think we'll be seeing our aspirations expand to incorporate human abilities and challenges, this human-AI collaboration, which is a very important topic. We've already made some progress on changing the objective function and methods in machine learning so they incorporate models of what humans are good at, and also understand when to ask a human for assistance or input, or when to come forward. But that's part of the actual learning itself and the representation of models. So machine-learned systems that represent human intellect and understanding within them, applied in ways that augment human decision-making in a fluid manner. Now, there's a whole other area that I'm very excited about. Some people may have seen my panel with Judea Pearl, Yoshua Bengio and Susan Athey recently on what's happening with systems that can reason about causality. It's another scientific direction right now. And how does that fit into deep learning?
And that's a really interesting area getting attention, and an opportunity, and I think there'll be some interesting work there. Getting into your realm, Pieter, I see such incredible possibilities in AI getting more physical, learning about the physical world. You and I talked about precision hand manipulation. I think when that happens, and I expect it to happen, it will lead to an explosion of innovation in terms of what we see around us and what AI is doing in the world. Speaking of physicality, I'm really, really bullish and excited about, and we've seen glimmers of this, machine learning being applied to biology, physics and chemistry. I really believe that AI is going to be the path to supercharging advances in biomedicine. So it's all very exciting to me. Our field has gotten more exciting over time, even though I've always been superheated about the possibilities.
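One toy way to operationalize the "when to ask a human" idea Eric mentions is an expected-utility comparison between acting autonomously and deferring to a person. The function name, accuracy figures, and interruption cost below are illustrative assumptions, not Microsoft's actual method:

```python
def act_or_defer(model_confidence, human_accuracy=0.95, cost_of_asking=0.1):
    """Hypothetical decision rule for human-AI collaboration.
    Expected utility of acting alone is the model's probability of being
    correct; expected utility of deferring is the human's accuracy minus
    the cost of interrupting them. Choose whichever is higher."""
    expected_alone = model_confidence
    expected_defer = human_accuracy - cost_of_asking
    return "act" if expected_alone >= expected_defer else "defer"

print(act_or_defer(0.90))  # → 'act'   (0.90 >= 0.95 - 0.10)
print(act_or_defer(0.60))  # → 'defer' (0.60 <  0.85)
```

The research Eric alludes to goes further by folding this trade-off into the learning objective itself, so the model is trained end to end knowing a human backstop exists.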

 

Pieter Abbeel

I couldn't agree more, Eric. Every year goes by and yet more exciting things happen, and it's even more exciting to be working in AI than the year before. Which is so fascinating, because with a lot of things, once you've done them so long, you're like, okay, let me switch to something else. But somehow in AI, the possibilities seem to continue to grow very rapidly, making it even more interesting every year.

 

Eric Horvitz 

Even the shared knowledge in society about AI, you know, understanding it, understanding its importance, has grown. I still remember it to this day: I was in grad school at Stanford, and I was talking with somebody at a restaurant about AI, and I was going on, and she was very interested. And it turns out, at the end of the discussion, she thought it meant artificial insemination the whole time. That's a real story. I was shocked and said, no, no, let me explain to you what AI is. So now at least we know AI is part of our society, and I think we're making progress even with a general understanding of where we're going with this.

 

Pieter Abbeel

It definitely is. I think not too many people would mistake AI for anything other than artificial intelligence these days. Well, Eric, we've covered so much ground. Thank you so much for joining us. It's been an absolutely wonderful conversation.

 

Eric Horvitz 

It's been great talking to you, Pieter. It's great once in a while to step back and reflect together about where things are and where they're going.

 

Pieter Abbeel

Yeah. And I think it's really fun to do it with your perspective. Everything you've been up to, it's amazing. 

 

Eric Horvitz 

Thanks. Thanks for having me.