Shakir Mohamed on The Robot Brains Season 2 Episode 12
Pieter Abbeel: If any company is at the top of most people's minds when it comes to AI, it's DeepMind. They've been at the forefront of many major breakthroughs, including AlphaGo, the first AI to beat a human Go world champion, and AlphaFold, which revolutionized protein structure prediction. Our guest on this week's episode, Shakir Mohamed, joined DeepMind in the early days and has been an instrumental part of their success ever since. Shakir is a Senior Staff Scientist at DeepMind, an Associate Fellow at the Leverhulme Centre for the Future of Intelligence and an Honorary Professor at University College London. Shakir is also a founder and trustee of the Deep Learning Indaba, a grassroots organization aiming to build pan-African capacity and leadership in AI. Welcome to the show, Shakir. So great to have you here with us.
Shakir Mohamed: I'm really excited to be here. Hi, Pieter. Hello, everyone who's listening. We're going to have some fun. Thank you so much for having me today.
Pieter Abbeel: With you on, it's going to be easy to have fun, Shakir. Not worried about that. So now, of course, right now you are one of the leading researchers in AI at DeepMind, but it's not where your life started. You grew up in South Africa. From there you made it to Canada, then to London. Can you tell me a little bit about that journey?
Shakir Mohamed: Yes, I am from a suburb in the south of Johannesburg. I spent most of my life there and I love going back. All my family is still there. The COVID pandemic has made it much more difficult, like for so many people, for us to travel back to see our families wherever they are living. But I grew up in a very particular kind of period in the history of South Africa and, of course, the history of the world. Apartheid had ended. Democracy comes to a country, and you are this very young person growing up in that kind of country that's changing and transitioning. And so you are yourself navigating all these different kinds of questions of your own identity and place in this changing country, as well as the very positive, optimistic view in South Africa. I think even today it is still a place that has so much optimism and positivity about the role of democracy, of changing futures, of building new kinds of societies. But eventually, I did an undergraduate degree in engineering at the University of the Witwatersrand and ultimately a master's degree at the same university. And then I was lucky enough to get a Commonwealth Scholarship and also to be accepted to do a PhD at the University of Cambridge. So that was fun. I really wanted to be able to level up, or to go and see what it's like to do a different kind of research. I was doing neural network research for condition monitoring in my master's degree, and then when I went to Cambridge I did a much more theoretical kind of work, much more foundational work in Bayesian statistics and reasoning through probability. And then the same thing at the end: I wanted to see what it was like to live in North America. So I was lucky to have applied for and got what was then called the CIFAR Global Scholars fellowship. I spent two years in the amazing city of Vancouver, and loved it there. 
And then at the end of that two years, I think so many people who are doing postdocs come to this point at the end of their fellowship. Well, the money is running out. What am I going to do next? Am I going to apply for faculty positions? Am I going to go into some big industry? And of course, then there was a small little startup. I think it was just doing its very first round of hiring research scientists. This was at the end of 2012. And so I wrote an email to our founder, Shane, to ask, “Well, I'm interested. I'm finishing my fellowship. I heard about DeepMind, it seems like a really great place,” and then I suppose the rest is history. And now it's nine years later. And you know, I still get to do amazing research every day, keep growing myself in many different areas of research, and I have the platform to speak to amazing people, like you, now.
Pieter Abbeel: What a journey, Shakir. Now I'm really curious what originally got you interested in AI and machine learning?
Shakir Mohamed: I often try to reflect on this question. I won't say it was anything directed or strategic, or that I had a clear sense. I initially did engineering because it was one of those fields where you had a sense that if you did a degree in engineering, there were many different opportunities available to you afterwards. So it was a good catch-all in that case. And then, you know, like so many people, this is where there is the powerful role of really good mentors. At that point, at the end of my undergrad degree, there was a professor in our university and he said, “Oh, you should come and do a master's degree.” And the university was also funding people to go. And so that's sort of where I began. And he was just starting a lab and starting his kind of research program in machine learning, in neural networks, with applications in industry and biology and many other kinds of areas. So I then did that master's degree and, I suppose, from there, that's where it went. And then at some point, he said to me, “Oh, I think you should really apply to go and do a Ph.D. at Cambridge. I think you're good enough to go and do that.” I think that's very special, to have someone tell you that; of course, you never really believe them.
Pieter Abbeel: Now Shakir, as you mentioned, in late 2012 you applied to DeepMind. But I think for many people, DeepMind only came on their radar with AlphaGo, which was many years later. Right? 2012 was still the very early days of DeepMind. In fact, I don't think they had published a single paper yet. I think the first paper came out in 2013, late 2013. So how did you know about DeepMind, and why did you think it was going to be, you know, such a big thing?
Shakir Mohamed: Yeah, that's such a great question to go down memory lane on. I suppose one of the things in those days, and I think it's still true today, is the power of your own network and the people that you know. So there were two of us in this CIFAR fellowship in Canada, one in Toronto, and I was in Vancouver. In that late summer, we had met and we were both ending our fellowships, and we were both thinking of what to do next. And this other person knew Shane, and of course, through that, was himself having this kind of conversation. So when we spoke, he said, “Oh, I really think you should go and speak to them to see what this is about.” I was also, at the same time, wanting to come back to the UK for another thing that so many people go through, which is that you end up in this two-body problem. Many people in academia or research end up chasing each other around the world for a period of four or five years. And so that was me and my partner. We were chasing each other, and we were looking at London as the place where we would actually settle once we had done our postdocs and these kinds of things. So that was sort of the background. And then afterwards, speaking to Shane, you sort of had this sense that there was something new, something exciting to actually try. And as a young scientist in research, I also didn't have the sense that I would myself be successful in the academic setting of writing lots of grants and becoming a mentor for many kinds of students. It seemed to be a very highly competitive space, with very few positions for many, many people. And I didn't want to live in the US, which is, in some sense, the biggest kind of academic market. So there was this confluence of things all working together. But yeah, I had a great chat with Shane. 
And actually, in the beginning, it wasn't clear that I was going to be a good fit for DeepMind. And so I actually left that thinking, oh, this wasn't going to work out. And then several months later, we decided to have another conversation. There was actual funding. NeurIPS was happening, and there's the power of a conference like NeurIPS, where we all got to actually meet and speak to each other. And then I was like, OK, this sounds great. This is the right kind of journey, and this is the right kind of place to actually come on board. And well, you know, what's the worst that could happen, right? You will still do research. And you know, London is also such an amazingly broad place that it can absorb you into many, many different career paths.
Pieter Abbeel: Well, I don't think the worst happened. I think very good things happened. Now, I'm also curious: back then, there must have been a very small number of people at DeepMind, maybe 20 people, something like that. And now, it's over a thousand, maybe over 2,000 people. How has that arc been, and what has changed in terms of how you work at DeepMind back in the early days versus today?
Shakir Mohamed: Yeah, actually, I've loved this experience of seeing a changing organization and yourself also changing, and that kind of interplay that happens as people change and organizations change. And of course, they are changing each other. So it was really amazing, in the old days, to be a small startup, to be at a place where you would all just have lunch together around the same table, to go through acquisition, to become part of Google, and then just to see the amazingness of what a real corporate infrastructure looks like, what production systems and running production code look like, and the different way of professionalizing how you manage people, run teams, grow research portfolios and think about long-term research. It has just been an amazing journey to see that. Of course, you are yourself changing, and the pace at which you change is sometimes different from the organization's. And yeah, it's been amazing just to think of the way that we were changing. So in the beginning, you are doing many kinds of work to show yourself and the kinds of things you can do, and then you have much more focused and smaller teams, so you yourself can also only work on certain things. But in the startup environment, you get to work on many, many different things; it is one of the great things of being in a startup environment. And then as you grow and change, you yourself want to focus a little bit more, and you can consider the kind of influence that you have and how you're developing people. And now, I think it's really great that over time, we've just been able to do so much more growing in so many different areas. I, myself, have been able to do very theoretical work and write papers in mathematical statistics, or do very applied work in health care or the environment, or work that really questions the foundations of our own field. 
And I think that's only possible because we were able to grow as an organization together and to create that kind of space, to do the kind of research and thinking that needs to be done in the field.
Pieter Abbeel: Now let's talk about the expansion of the kinds of things DeepMind is working on, and that you are working on recently. Nature published your work on nowcasting. What is nowcasting? Can you explain it to us, and, you know, what's the impact of being able to do it?
Shakir Mohamed: Nowcasting is the problem of making very high resolution predictions, predictions of rain for every five minutes. This is obviously important to us as people because we always want to make decisions about how we are going to operate in the real world. When we go outside, should I take an umbrella? How should I dress? But of course, it affects so many industries every day that we wouldn't think about. Airports rely on these kinds of predictions because that is how they adjust their takeoff and landing schedules. Every outdoor sporting event that you can think of, the Olympics, for example, or Wimbledon tennis, relies on these things. Am I going to open the centre court or not? And that is, of course, a one-hour to two-hour time-horizon prediction. So many, many different kinds of industries rely on it. Take transport: if you want to be futuristic, think about self-driving cars, for example. How am I going to route this vehicle with people in it right now? And even if you don't have a self-driving car, that's useful information to have as well. So this nowcasting problem is really important and impactful from that point of view. But it's also really interesting to us as people who are doing research in machine learning and AI because it asks so many fundamental questions. How do you make such high resolution predictions at the scale of an entire country? How can you account for the fact that the weather is so uncertain and variable? How can you calibrate in the way climate scientists and atmospheric scientists require things to be calibrated? So I think it's really a great problem for us as machine learning people to work on, particularly for those of us who are also interested in another area of machine learning research, which is called generative models.
Pieter Abbeel: When I naively think of weather prediction, I would maybe not think of machine learning, because I might think, OK, if I look at my engineering books from the past, it seems like, you know, I should solve partial differential equations of how the air is moving around, and maybe how much is evaporating and might reach a certain temperature and drop again. So how come machine learning is, all of a sudden, the way to get better results, rather than solving PDEs?
Shakir Mohamed: Yeah, this is such a great question. The traditional way of doing weather forecasting is the way you describe, and actually, we have two hundred years of amazing insight into exactly this. We describe the equations which govern the way the atmosphere will evolve, and then we simulate them on very large computers for several days in advance. Now, one of the key challenges of running these simulations is that they need a bit of a warm-up time until the simulation actually gives you reasonable kinds of predictions that reflect the actual environment, the atmosphere that we are in. That time, conveniently, is around 90 minutes to two hours. And that's why nowcasting is important, because nowcasting is the window where the current approaches, which are called numerical weather prediction systems, don't work so well. Coupled with that, over the last several years, many countries, especially in the developed world, have made investments into the sensing of rain in particular. So here in the UK and in the US, for example, you have radar networks covering the entire country. They make extremely high resolution measurements of the rain, at not even one kilometer; they can do 200-meter, 100-meter resolution every two to five minutes. So you have this extremely high resolution, good quality data source. And of course, when there is a need, where existing methods don't work, and you have a source of very high quality data available, this is almost the perfect place for machine learning to enter and provide its approach of extracting signal, predictability and forecasts from that kind of data. And this is exactly the situation that we have in nowcasting.
Pieter Abbeel: That's so interesting. So when I imagine this, when you want to nowcast, let's say, what's going to happen in the next two hours, do you access all those instantaneous radar measurements of the last hour or something, to use them to predict what's going to happen next? Or do you not need that access? What's needed?
Shakir Mohamed: So actually, I think you have exactly the system correct. You can actually get a live feed of all of this data from your local National Meteorological Agency. And because you get this live feed in the particular system that we were working with, it used the last 20 minutes of radar data and then to make predictions of the next 90 minutes of radar data. So that's exactly what you need. And you can see that because we are operating at the scale of the U.K., you need a certain amount of background of what was happening in the atmosphere, and around 20 minutes is the right amount to inform where rain is coming, what direction it's going to move in, and then you can actually take that forward to make predictions for the next 90 minutes. Exactly that system is with what you have in mind.
Pieter Abbeel: Very cool. Now, at its core, you mentioned that there is generative modeling, and it's not something we've talked about much in the podcast yet. Can we maybe zoom out and explain what are generative models? What's their role in machine learning? And what's their future potential?
Shakir Mohamed: Generative models are an attempt by systems, by machine learning methods, to create simulators of data. So, for example, exactly in the case of the radar data we are talking about: if you have sets of radar data, can you use that data to create another system that can simulate that kind of data going forward, into the future? That process of simulation, of generation, is the key kind of object, and you can see it's useful for these kinds of prediction problems. The other thing that's so useful about generative models is that because you can generate, you can do many different generations. Of course, we observe one way of the atmosphere unfolding, in the case of rain or the weather, but there are many other alternative, plausible ways that the atmosphere could have unfolded, and generative models give us a way of exploring those alternative ways of generating data; in this particular case, ways the weather can be. Generative models are connected to another idea in machine learning, which is called unsupervised learning. If you contrast it with supervised learning, which says you have an input and an output and you try to create a prediction that takes you from input to output, the generative model says there are no outputs, there are only inputs. Unsupervised learning asks: what can you do just by looking at this data, learning the kinds of patterns and structure of the way rain falls, the intensity it has? What kind of structure is there, and can you learn it? And you know, again, thinking of the way rain falls, what's so amazing is that you can learn, almost trivially in this case, where mountains are, the kind of orography, because rain falls in a particular way there; even just averaging all the data images together gives you that. 
And you immediately learn this with a generative model, so you don't need to actually tell it where there are mountains, because it implicitly knows where there are mountains and how rain will fall in those kinds of places. So you have this very powerful mechanism of generating data, and you don't just need to generate images. Today, you can generate many, many different kinds of data. You can generate audio and voice: other projects have done things like WaveNet, for example, which many people have in their home assistants. That voice is often generated through a generative model of audio. Or you can generate text. And of course, there's so much interest in language models and chatbots and the kind of role they have, and concerns about them at the same time. But those are themselves generative models of text. And you can imagine that wherever you have data that you need to be able to simulate, to generate other forms of, there is this tool of generative modeling available to actually use and deploy. And I'll just say, I've done research in generative models for many years, since I was a Ph.D. student, until today, and for many, many years I've been searching: what is the application of a generative model that really lets you know this stuff is useful? That it's not just me deriving another integral and an equation. And I was so glad to stumble across this problem of nowcasting, because it makes it so clear why you need generative models. Because if you can generate rainfall predictions, then of course, you can create so many different services. You can report how rain is uncertain. You can make that available as a service to many other people who don't have specialized data. You can make it much faster. You can actually capture uncertainties that really are in the real world. So there are many, many interesting things. 
And of course, the flip side for us as AI researchers is that if our work actually works on problems in the physical world, then that, I think, tells us something very interesting about the fundamental algorithms we are using as a basis for developing AI itself.
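The generative framing Shakir describes, learning a distribution over the data itself and then drawing many alternative samples from it, can be sketched with a deliberately tiny model. This is an illustrative sketch only: the log-normal fit and the stand-in rainfall data are assumptions for demonstration; real nowcasting models are deep networks, but the interface (learn p(data), then sample ensembles) is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": daily rainfall totals (mm). In unsupervised /
# generative modeling there is no separate output label; the data
# itself is what we model.
rainfall = rng.gamma(shape=2.0, scale=3.0, size=1000)

# A minimal generative model: fit a log-normal to the observed data.
log_mu = np.log(rainfall).mean()
log_sigma = np.log(rainfall).std()

def sample_futures(n_samples: int) -> np.ndarray:
    """Draw plausible alternative rainfall values from the fitted model."""
    return rng.lognormal(mean=log_mu, sigma=log_sigma, size=n_samples)

# Because the model is generative, we can draw an ensemble of
# alternative "ways the weather could unfold", not a single answer,
# which is what supports uncertainty reporting downstream.
ensemble = sample_futures(100)
```

A supervised regressor would return one point prediction; the generative model's ensemble is what lets you report uncertainty to decision makers.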
Pieter Abbeel: So I like your notion of referencing the physical world because I mean, in my own work, as you know, a lot of it is in robotics, and there's nothing like going into the physical world to realize how challenging real world problems are, compared to things confined to a digital simulated world that's just a game or something, which can already be hard. But it's never as hard as the real world.
Shakir Mohamed: It’s never, never as hard as the real world. Because the real world has bizarre things. Birds is the really interesting thing. Birds come up on radar all the time. And so you have to account for the fact that actually there's no rain there. It's the season and birds are migrating. Or like all measurement systems, they have failures. So you have to know their different measurement systems. And in fact, one of the challenging things for radar data itself is the radar data is owned by national meteorological services. So every radar is different across countries. So maybe we could do some kind of transfer learning where we use different, completely different sources of data to try and learn about the same underlying phenomenon. But it's never that straightforward when you are actually working with the real data and part of the way the real world forces you, and as we did in this case, asks you to confront what is a really meaningful application of your work. In what way does it improve the current state of the art? In what way are the predictions you are making actually useful? In what way does it help actual decision makers? And you can't know that unless you go and speak to the people who are doing that, and we were very lucky to be able to go and speak to expert meteorologists and they will actually tell you, they don't care about a very low rain, the way you and I would care about, because low rain doesn't cause any damage. The worst thing is that you get a little bit wet. But heavy rain, the real rain that causes damage to property, to life, where emergency services need to be put in a position to respond. If you can create a system that gives them that 90 minutes decision window, then you can actually do significant changes to safeguard property, life and protection of our people and planet, in some sense. So I really think real world data is this amazing thing. 
And as I said, it always comes back to your fundamental research in AI systems and what it means for us to explore intelligence, and that interplay between AI in science and AI for science, and AI as the science itself.
Pieter Abbeel: Talking about real applications: you've also worked in health care, right? Can you say a little bit about that?
Shakir Mohamed: Yeah. So I've always had two areas of the real world that I'm interested in, which are health care and the environment, and I've actually had this long-term dream of bringing health care and the environment together. I think the coronavirus pandemic made that a little bit more urgent. We all know about non-pharmaceutical interventions, the role of public health, and the importance of air quality to these kinds of things. So I've been taking the long, systematic route: OK, let's do some work in health care, then do some work in the environment, learn a little bit, and see if you can do something together. Health care is just an amazing space to work in. And if you wanted to find a more impactful area, you would probably really struggle; maybe climate is the other one that could compete with it. So we were interested, in that case, again, to go to the real decision maker and ask, well, what is the real key question here? And for them, for so many people who work in hospitals, physicians or clinicians, there is one disease that stood out, which is called the silent killer. Maybe one in four people actually die of this disease in US hospitals, and also in hospitals here in Europe. It's related to kidney failure: a particular condition called Acute Kidney Injury. And so what we wanted to do was to see what are the ways we could use machine learning to create early warning systems for organ damage, particularly of the kidneys, to detect Acute Kidney Injury, and then see what is possible in that kind of space. So we were lucky to be able to work with the Veterans Affairs Department of the United States. And one of the things that was so amazing about the Veterans Affairs Department is that they are really a center of expertise and excellence in the care and treatment of AKI. 
So they are, in some sense, the best kind of partner you could find, particularly because AKI does affect their patients and really comes up in the veterans that they are serving. So again, it's very much, in some sense, like the nowcasting problem: what is considered a meaningful problem? What is the right window of prediction to effect meaningful change? How would you know that you are making a good prediction? We were lucky to be able to use electronic health records to make predictions, up to 48 hours in advance or even more in some of our tests, of the decline of kidney function, within three different categories. And based on that, clinicians have an established workflow for how they would treat people. And actually, the kidney is an amazing organ: if you know it's going through some kind of adverse effect, it's actually easy enough to treat. You can provide more fluids or stop a medication and the kidney will recover, but if you wait too long, then it's almost too late. And so there's this amazing sweet spot: if you can give clinicians the right window for action, then you can do things. Of course, we published that work. But the real test of time for that kind of work is to do several other kinds of clinical studies, which we didn't get to at that point. One of them is called a simulated prospective study, where you go into the clinic and run the system alongside the actual care path to see what would have happened otherwise. And then you can do the real kind of clinical study, and those kinds of studies take many, many years, a decade almost, to really complete and get done. But I'm very hopeful and excited. I really love that space. I think it's an amazing area to work in.
Pieter Abbeel: It sounds like this study still has to happen from what you're saying, but I mean, it's good that there is such a thorough process, but isn't it also then frustrating that you know, you have this prediction system that is good and that could already help people? I mean, are you allowed to still have the system there and have the doctors be aware of the prediction it's making and maybe help them think through things or do you have to wait several years before that's possible?
Shakir Mohamed: Yeah, there's no escape to waiting, I think in this particular case for several reasons. Actually, one of the good things of waiting is that you do more thorough testing and then you actually detect other kinds of biases, that are in the data, that are in the kind of clinical practice, that maybe for equity points of view, you may want to go back and actually readjust from the beginning. So unless you actually had some waiting, you never give yourself the opportunity to do that. And I think the real pathway for us doing responsible innovation with machine learning and AI is to create spaces within our research pathway and program that add a little bit of friction that slows us down just for a brief moment and just for us to look again. Are we sure of this thing? And actually AKI is one of those cases where you will find that very, very clearly, there is a racialized dimension to how AKI in the data is traditionally recorded, there is quite a big distinction between black patients and white patients, for example, within the US medical system. And that then gets infused into the prediction model that you'd want to correct and also may have factors against age as well. But then also, when you move it into the clinic, any system that you put into the clinic, starts affecting the behavior. And you can never really say that they act without this. You have to be very careful about the control. So this is actually, I think, one of the big changes in the way we think about machine learning. We really think about them as these kind of socio-technical systems that you cannot divorce the social element of what people are doing when they are encountered with it from the system itself. And so that itself requires sets of thought, of thinking, of ways of evolving the kind of test. And so I think ultimately, yes, in this space, although we are impatient to find ways of producing things, a little bit of friction is good. 
And I think with innovation, if we do it the right way, we can probably speed it up a bit. But I think there's something to that, to that kind of patience and friction that we add now.
Pieter Abbeel: Shakir, you mentioned that the data might be differently recorded for black patients versus non-black patients in the US. Can you expand upon that? What is different?
Shakir Mohamed: So AKI is a system of thresholds. You basically accumulate two thresholds, one of them, which is an average over the last year and one of them of the average over the last month. And actually, so many cases of disparity in healthcare comes because the thresholds are chosen differently for different racialized populations. And this is the source, in this kind of case. So now they may be valid by medical and clinical reasons for different kinds of thresholds. But I think we can't assume that they are correct when we get that absorbed into our data that everything about social disparity comes embedded in that data. This is how race enters into our data. And so I think that's the point where we can say, oh, actually, let's think of something else or we can find, particularly if we want to do rigorous evaluation, we can find other kind of objective measures that we can compare alongside to say, well, okay, we tested with a completely different way. We get the same kind of outcomes, decisions, correlations, you know, explanatory factors, bias signals, etc. So that's in this particular case. But you see it in so many, so many, many cases. The most recent one, I suppose, was around COVID in hospitals that affected black patients again. So significantly using pulse oximeters, which were these little devices you put over the finger. And again, they were not working as well for black patients as they were doing for white patients. And again, this is how you see. And then that just affects the clinical outcome. And if we were doing, imagine we were doing COVID predictions of COVID safety triaging in a hospital, then we would just absorb what has fundamentally not the right kind of decision signals to you. So I think interrogating the sources of race in particular in our data is sort of one of those, again, big changes that has changed in our field and everyone is so attuned to them. So I think that's only a good thing for us.
Pieter Abbeel: Now tied to that. You founded the Deep Learning Indaba, an organization that is strengthening Africa's role in shaping the future of AI. Can you say a bit more about the Deep Learning Indaba? How it started? And where is it headed?
Shakir Mohamed: Yes, an Indaba is a Zulu word, which means a gathering or community. And in South Africa, every meeting is called an indaba. So it's a very common word to use, and we really loved it. And in 2016, I think it was, we were probably there together, the NeurIPS conference was in Barcelona that year. It was amazingly warm in December, everyone loved being at the beach, looking at the beautiful architecture of the church. But one thing stood out, and I was not the only one who noticed it; many of us were noticing: where are the other South Africans at this conference? Like, what is so special about you that brought you here? And of course, none of us is so special, so something else is going on. And so I would go to my one other South African colleague and ask, where are the other South Africans? And after you ask that question, you naturally ask, well, where are the people from Botswana? Where are the Zimbabweans? Where are any other Africans at this conference, which is the leading technical conference in this kind of field? They were just absent. So what is the mechanism? Of course, many other people were noticing the very same thing at the same time: where are the black researchers in the field of machine learning? That same question had come up several years before with the pioneering work of Women in Machine Learning: where are the women of this field? And at that realization, what we thought we could do, within the space of power that we had, was just go back to South Africa to give a short series of lectures, myself and my co-founder, Ulrich Paquet. We were just going to do, like, three days. Let's just go back to Johannesburg and give three lectures on the work that we know how to do. Then we said, well, if we are going, why don't we invite a friend to add to the lectures? Why don't we invite someone else?
Well, if we're going to do this, let's have more people. And then eventually we were like, okay, let's do a little mini summer school of some sort, or a small mini conference. And yeah, that's how this idea snowballed from something that was going to be 50 people. We then did the first Deep Learning Indaba that September at my alma mater, the University of the Witwatersrand. It was just the most amazing thing to bring those 300 people together for the first time, to know that within the space you have created, you don't need to ask this question. Where are the African machine learning intellectuals, researchers, young voices, the leaders of the future? They are there. We just need to create the space to bring them together. So over the last five years, we have been trying to grow the Deep Learning Indaba as an organization with the mission to strengthen machine learning in Africa. And that strengthening is something that I, as an African, can use to extend that power to other Africans, so that we can collectively own this organization. And when we collectively own it, we collectively teach each other, we collectively skill each other. Then we can actually take a very active role in shaping the way that machine learning will influence people across our continent. Like the examples of racialized factors that are entering into our predictive systems, that same kind of racialized system will enter into all of the technologies that will affect us as African citizens within our countries across the world. And so how is it that you can counter that? There need to be many, many different kinds of strategies. But one way you can do this is by having technical experts themselves, who know the material, who can create a new kind of technology that actually serves them in a different way. So yeah, we did the second year in South Africa, in Stellenbosch. We did the third year in Nairobi, in Kenya. Then, of course, COVID came.
But if everything goes well this year, we will host the next Deep Learning Indaba in August in Tunis. And then if everything goes well, we will go next year to Nigeria. And then we would have completed a full circle of South, East, North and West Africa. And over time, we just continue to grow the kind of things we think of doing. So it's no longer the summer school it used to be. It's actually a forum for research. It's also a mechanism for recognizing excellence in research through thesis awards and masters awards, and for bringing new kinds of African startups together in conversation with young students and voices, building that kind of international conversation between researchers from outside and researchers from local countries and regions themselves. So, yeah, I'm incredibly proud of the work of the Deep Learning Indaba, and of having worked with so many other amazing organizations across Africa: Data Science Africa, Data Science Nigeria, and more communities that have evolved as a consequence, like the Masakhane NLP community. And actually, you can see the power of a small idea in play: after we created the Deep Learning Indaba, many communities globally recognized that this is a model that can work. So today we have the Eastern European Machine Learning community. We have the Latin American community, which is called KHIPU. We have the South East Asia Machine Learning Summer School. Last year was the first edition of the India Machine Learning community. And you just see this kind of global spread of communities who are at the edge of machine learning, taking ownership of their own training, their own networks of togetherness, their own communities of practice, their own ways of building technology to serve their regions, and the kinds of things that need to be supported for their languages and cultural factors. And yeah, it's just been really amazing to see that kind of global transformation.
So I have this line that I often use, a question I have asked people for several years: do you think global AI is actually something that is global? Or is it restricted to the hands of a few Western countries, here in Britain, in the United States, in Canada, in France, in Germany? I would say the answer to that was no five years ago. But today I do think global machine learning, global AI, is truly global, because of the amazing work of all these groups, not just the Deep Learning Indaba, but everywhere across the world, who see that they actually have an amazing power. And, you know, as a scientist, I think this is the transformative power of grassroots organizing, which is always a good thing.
Pieter Abbeel: This is absolutely amazing, what you started there, Shakir. And I have, of course, seen the events, seen the websites of the events and everything that's happening. And even the Indaba itself, to a great extent, feels global, even though it's centered around Africa, because it touches on so many topics and brings so many researchers from all over the world to participate. Right? It's not just African researchers. When I look at the website, there is a mix of people coming in, working together on AI, exchanging ideas. And in fact, COVID interfered with this, but ICLR, one of the other leading conferences in machine learning, was meant to be in Africa right when COVID hit, back in early 2020, and would have been the first major machine learning event in Africa. Right? And hopefully that'll happen soon, when conferences go back to actually being in-person.
Shakir Mohamed: Yeah, I really, really hope so. 2020 was the year I was the Senior Program Chair for ICLR, and that was the year I was like, okay, it's going to be amazing. We're going to bring this conference to Ethiopia, to the great city of Addis Ababa. Of course, COVID came along. I am a member of the board for ICLR, though, and we still have the intention of taking the conference to a country in Africa, probably Rwanda, if we can succeed at doing that. But, you know, COVID continues to be with us, so there is a lot of uncertainty still. But yeah, it will happen.
Pieter Abbeel: It's very interesting you bring up Rwanda as a likely destination, because actually one of our first guests on the podcast, Keenan Wyrobek from Zipline, started Zipline essentially in Rwanda. They use drones for blood deliveries, to be able to deliver blood that otherwise would be very hard to get to patients. And for him, essentially, Rwanda was the best place to start this company because of the environment to do the work. It's so interesting that now you're going to the same country.
Shakir Mohamed: Yeah, I love it. If anyone has the opportunity to go to Rwanda, it is an amazing experience. The people are just amazing. Kigali is an amazing city, and you actually see how challenging it must have been for Zipline. They really call it the Land of a Thousand Hills, so they had to work hard to get those drones to deliver blood supplies in all those ways. And now they've extended, at least the last I checked, to so many more countries. So Rwanda continues to be an example, a model of different kinds of innovation. And actually, this is the thing: you don't need to see Africa as just this amorphous blob. Once you look inside, you find so many different innovations. And in some sense, the role of the Indaba is also just to reveal, in some way, that interesting mix: well, here's an event in Nairobi, and how amazing is Nairobi to go to; or go to Tunis, and see the amazing, thriving culture of technology, of startups, of government service and innovation there.
Pieter Abbeel: Yeah, talk about diversity. London honors February as LGBT+ History Month, and I read a 2020 piece that you wrote for this event in London called Queer Exceptionalism. Can you say a little bit about what you wrote there?
Shakir Mohamed: Yeah. So this is actually one of the things that's maybe interesting for all of us: every year we commemorate LGBT History Month in February in the UK, the Netherlands and Ireland, and then in October we celebrate Black History Month in the same three countries. It's the reverse in the US: right now in the US it is Black History Month in February, and then in October it's LGBT History Month. So we always do the two of them together. I suppose one of the things I wanted to explore, in thinking about what it means to do queer exceptionalism in science, is the role that queer people can play in the scientific environment, particularly because there is still a lack of visibility, and because of the many different kinds of questions which come up. Even the most mundane thing turns into a very fundamental question of identity for queer researchers and queer scientists. And so I was using the word exceptional in that way: that they live as exceptions to the way things are perceived or thought to be. And I think this is an idea that comes across in so much of queer life, in queer living, where we think about this idea of queering things. The idea of queering something is to say, just for a brief moment, that maybe the thing is the opposite of what you think it is. So I think you can bring this into the realm of machine learning. And actually, I do purposefully think quite actively about what it means to do this kind of queering of machine learning: to take something that we treat as so standard in machine learning, to question its purpose, its value, its function, its applicability, its accuracy, and then to say, well, what if it is the opposite of whatever we assume? So I think that's the thing I was trying to think about.
The other kind of thing that comes up in all of this is that we talk about diversity, about equity, about inclusion, about the fundamental role of experience, of lives beyond the ones that we have. And if we take that idea seriously, then we need to think, well, what does it mean to bring those life experiences, those ways of living, into our research, and make them actually a meaningful part of our research? I think that's the very fundamental question that adds so much value to us in machine learning. And that's where you can see that the idea of queering, and the experience and the lessons of queer life, is something that is not only for queer people, but for everyone. So let me make this concrete. We were thinking through questions of fairness in machine learning, like those examples of racial bias and gender bias that we spoke about earlier on, and we wanted to ask the question: what does queer life and queer experience tell us about, or ask us to think about, these questions of fairness? And one of the first things that comes up is that a key idea of queer life is that we don't out each other. You don't ask someone what their status is, because that is an outing thing; that is somehow sacrosanct, the kind of disclosure that needs to be earned. And in the area of fairness, there is an assumption that if you are going to assess the fairness of an algorithm, you know in your data what someone's racial ethnicity is, what their gender or sexual identity is. Now, of course, as machine learners, we also often fall into the trap that if we don't know something, let's just collect more data. But the act of collecting more data is itself a kind of outing process. So what if you can't do that? What if you can't observe these kinds of identities? In fact, most of these attributes will then be unobserved, and these are exactly the characteristics with respect to which we want to assess fairness.
So how do you do fairness with unobserved characteristics? That, I think, is the key lesson of queer life, and we then wrote a paper to expand on this topic using the lens of queer experience. I don't think we have an answer to that particular question, but it raised this question in so many areas: censorship, privacy, language technologies, health care, mental health. And then you can unpack that in many different ways. So yeah, I just hope that we continue to show other people that there are examples of queer scientists who are working in the world, who are proud to be out and open, and that being queer actually adds significant value, a value that can be used by anyone. So that's maybe the summary of that.
Pieter Abbeel: Well, I love that verb, which was new for me, queering, where you're effectively looking at assumptions that are being made and, for a moment, if I'm summarizing this correctly, thinking: what if this assumption that everybody makes and doesn't even think twice about were not true? What would it mean? And what insights would we gain from thinking that way? I think it's super interesting, and I think it can be applied to many things. When you see somebody give a talk, you can often immediately notice that there are certain assumptions they make, and start asking that question: what if that assumption is incorrect? What would that mean for the field? And maybe one of those things that everybody is assuming is actually incorrect, and you can be onto a very new direction in research that nobody else is pushing.
Shakir Mohamed: Completely agree. And actually, you know, for so many of us, when we go through our science, because we are so committed to the work that we do, it's very easy for us not to even enumerate the assumptions. So I think the act of queering is something that invites you to detect, to search, to ask what the assumptions are, even when we ourselves, as scientists, as researchers, as engineers, don't make those things explicit, when they remain implicit and hidden.
So you've understood that clearly, and I'll take it as a gift that there's a new word for the day, queering. Maybe that will be everyone's Wordle, or maybe it's too long.
Pieter Abbeel: Yeah, soon enough, or once we release the episode. So now, zooming out for a moment, there's a lot going on in AI, at DeepMind and beyond DeepMind, of course. And you've been part of it for many years, inside DeepMind, before that at universities, and through the conferences you organize, getting new people involved in machine learning who were maybe a bit outside of it before. From all of that, as you look ahead, what are some exciting discoveries or inventions that might happen in the next five to ten years in AI that would be really impactful?
Shakir Mohamed: Yeah, I, like all people, will always be biased by the kind of work that we do and the kind of interest and impact that we are trying to have. So in one area, I will come back to the space of weather and climate. I think there's so much opportunity to work in this space, and interesting ways of reshaping and adding to this very established field of weather and climate. Climate models themselves have so much space for innovation. There are several uncertainties in climate models that have not been reduced even since the first assessment report of the IPCC. And this is maybe a very interesting place for us to integrate more and actually contribute to the kind of key decision making that we have. But beyond that, I think machine learning is going to continue to give us new ways of thinking about the physical world and using that for the kinds of adaptations we need to the changing climate. So we spoke about nowcasting of rain, but you don't only need to do nowcasting of rain: you can do nowcasting of the amount of sunshine that's going to fall on your solar panel in the next hour, and that could be very useful for the way that balancing of energy on grids needs to happen. Or, because we're all so interested in the quality of the air we're breathing due to the COVID pandemic, you could do nowcasting of air quality over the next few hours. So I think over the next few years, we'll continue to develop those kinds of ideas. We will integrate new sources of data that are already available. And, you know, part of the big challenge is to see these data sources, bring them together, consistently do the key work of engineering that we need to do, and then take those things beyond our key papers. It's not enough, and I'm of course guilty of this, to write a paper in Nature and leave it there. We need to do more. What is the next step of validating that?
And I think over these next five years and beyond, we'll do much more of that validation, and then help reshape the way that weather and climate science is done using the power of data, using the kinds of algorithms of AI and machine learning. So that's one area I think of. The other area that came up throughout our conversation is thinking of AI as a socio-technical system. We are trained, almost as engineers, as computer scientists, to always think of the technical object: the algorithm itself and what it does, how it performs, how it should react in a system of safety. And I think now we have a new kind of conception that's changing things, where we say, well, let's actually start with the person first. How does the person react to that technology? What does it mean to keep a person safe in this world? What does it mean to help that person flourish and be protected? This kind of thinking is so much bigger. It's going to require us, of course, to do new thinking in algorithmic auditing, and new thinking on fairness and bias. And then we have to go all the way and think again about our publication norms as a scientific field, the way that we release code, and what it means to take accountability and responsibility for releasing code. That's the socio-technical idea. And the most important change in that socio-technical idea is to recognize that people must be an important part of the way we are going to develop technology, that there is a role for them to participate in the way we design. And what participation means is going to become probably one of the most contested terms of the next five years; it's already highly contested and will only become more so. But the role of participation of people in the way that we are developing AI, especially if we have missions that use AI to advance science and benefit humanity, is just going to become more and more important.
And then I hope, you know, we will just continue to develop each other as people, to think of each other as a field, to continue the important work of equity and inclusion that we have been doing, and to help bring more people in, in all the small ways that all of us can as individuals: to bring one more person in, give them a new skill, and help them put something new into this world. And that's, I think, the best outcome. So, yeah.
Pieter Abbeel: Well, thank you, Shakir. It's just a ton of fun having you on. Thank you.
Shakir Mohamed: Thank you. I had so, so much fun. And yeah, I love the podcast. I just can't wait to hear who's coming next. You know, not me, of course, but whoever's coming next. It's just amazing to listen in and see the breadth of different insights people have. Thank you for having me; it's been a real honor and pleasure.
Pieter Abbeel: Well, our honor, thank you so much.