Ayanna Howard on The Robot Brains Season 2 Episode 13

 

Pieter Abbeel: This episode’s guest is a successful roboticist, entrepreneur, educator and is the author of the recent book Sex, Race, and Robots: How to Be Human in the Age of AI. Dr. Ayanna Howard is the current Dean of the Ohio State University College of Engineering. Before joining Ohio State, she was a professor at Georgia Tech, where she was the founder and director of the Human Automation Systems Lab. Her resume also includes a leading role at NASA's Jet Propulsion Lab, where she built next-generation exploratory rovers, and recently she started Zyrobotics, a nonprofit dedicated to helping children with special needs. Welcome to the show, Dean Howard. So great to have you here with us. 

 

Ayanna Howard: Thank you. This should be fun. 

 

Pieter Abbeel: Yeah. Thanks for joining us. So right now, you're in Ohio. You spent time at Georgia Tech. But you actually grew up in Altadena, a suburb of Los Angeles. How did you, growing up over there, get into AI and robotics? 

 

Ayanna Howard: So there's actually two paths to that. One, I was always into science fiction. And so that's not because I was in Altadena, California, but it was because I was just into anything, whether it was Battlestar Galactica, Star Trek, superheroes. And I remember when I was in middle school, there was a show called The Bionic Woman, and that's what I wanted to do. I wanted to build a bionic woman. And so that was, you know, I wanted to be a roboticist. Although, you know, we really didn't have robotics as a career back then, it was an imaginary career. But that's kind of how it started. 

 

Pieter Abbeel: And can you say a bit more about what was the Bionic Woman capable of?

 

Ayanna Howard: Oh, so one, saving the world. But really, what it was is that she was human and she had been in, basically, a car accident, and they built her back better and more able based on bionic parts. Right? So she would go around and basically save the world. She would fight the bad guys and save civilians and things like that. And I just thought it was wonderful because it was this love of science fiction, the bionics. But it also, you know, superhero kind of thing. But it was real. It wasn't like Wonder Woman, which was not real. Bionic Woman was real. 

 

Pieter Abbeel: I like how you call it real for this one. Now, inspired by the Bionic Woman, how did you get started? What was your way to get going on AI and robotics? 

 

Ayanna Howard: Yeah. So because I didn't really know what this was, what I tried to do, and I would attribute this to my dad, he was an engineer, was building gadgets. So I remember my very first robot project was, we had modems back in the day and the modems you would connect with the computer, and I remember what I did. I hacked the modem so that it could be connected to a remote control car. And I then created a little program where I could then basically teleoperate this robot through the modem. And again, I didn't think much of it. I was like, oh, this is how I do it, and like, let's try this and let's try this. It basically just went forward and back. But that was my first official, I would say, robot. And so it was really always about, you know, doing a little bit more, exploring, self-teaching, because they didn't really have courses back then in this regard. 

 

Pieter Abbeel: And now when you say a modem, should I think of an internet modem or is this yet something else? 

 

Ayanna Howard: No, not an internet modem. You know, I'm aging myself, but this was before you had things like CompuServe and bulletin boards, and you actually had to call a number in order to connect. That kind of modem. 

 

Pieter Abbeel: And does that mean you can actually, truly remote control your car that way? I mean, you could call into it from another location?

 

Ayanna Howard: I could. Of course it was, you know, like I said, it was really kludgy. I could only get it to do forward and back. And I think it was probably because of the number of bits that I really needed in terms of giving out the signal. 

 

Pieter Abbeel: And no video calls at the time. So if you were remote, you wouldn't even see what the car was doing as you were controlling it.

 

Ayanna Howard: No.  

 

Pieter Abbeel: That’s interesting. So how did that determine your path from there? 

 

Ayanna Howard: So, you know, when I first went into high school, I actually thought I wanted to be a medical doctor, honestly. Even though I was building stuff, I didn't see it that way. The engineering and computer science things were things I was self-taught in. I didn't see it as a career. I thought that I was going to go to medical school and, you know, somehow create the Bionic Woman. I don't know what I was thinking, but, you know, a 13-, 14-year-old, of course. But I took a course. I took biology. And again, I'm going to age myself: back then, we actually had to kill the frogs. So you actually had to learn how to do it in a humane way. And then you open them up, and I remember I was just so freaked out, and I just was like, there's something fundamental here, I just don't want to do this. Which basically meant going to med school would have been a total, complete failure. And that was when I actually had one of my teachers say, hey, why don't you think about engineering? And I thought, maybe not. Like, why would I do that? That just sounds horrific. But I think at that point, you know, I was good at math. I was good at all the sciences except for biology. And I started saying, okay, maybe I'll do this engineering thing so I can do robotics. But it's also the reason why I chose the undergrad that I did, because I still didn't know what kind of engineer a robotics person would be. 

 

Pieter Abbeel: Well, a robotics person, kind of needs to do everything now. 

 

Ayanna Howard: Yes. Yes, they do. I mean, now we know this. You know, back then, if you think about this time period, so this is basically, now we're talking late 80s, early 90s, really the only kind of robots were manufacturing, industrial, and those were not the robots I wanted. That was not the Bionic Woman. And so, you know, one of the things is, I went to Brown University because the very first two years you basically took everything. So I took programming, of course, with Pascal and Assembly, in my formal courses, and circuits and thermo and EMAG, which is electromagnetics. From that I figured out, one, which engineering disciplines I did not like. But also I could put together things, and I became basically this computer engineer that knew how to program. 

 

Pieter Abbeel: Well, I remember from my own studies, I did my undergrad in the late 90s, and computer science wasn't really that much of a discipline yet at the time, and almost nobody went into it because it had barely been brought into existence. And it wasn't the default path to actually study computer science as your engineering specialization. 

 

Ayanna Howard: No, it's true, which is so funny because I was actually the lab instructor for the computer science course, even though I was in engineering. Because I could go in and students were like, oh, I can't compile this. I was like, oh, this is how you do it, which is so funny. People say I was really a computer scientist back then. But again, computer science was not really that hard core of a discipline, especially with respect to robotics. 

 

Pieter Abbeel: Yeah. And so from there you went on, you decided to do a Ph.D. 

 

Ayanna Howard: I did, but that was not intentional. And it was because I had already started working at NASA after my freshman year in college. And so by the time I was actually 18, I was at NASA, working on programming with satellites and things like that. So I already had a job for life after my undergrad, and I only knew I needed a master's. And that was because everyone in the group I was working with had at least a master's. So I was like, okay, I'll do a master's. I did. I still worked part time while doing my master's courses. I was not going to do the Ph.D. I was like, I don't need it. I have my job for life at NASA JPL, I'm doing robotics. And at that time I started programming neural networks. I was like, who would ever go for a Ph.D.? That seems crazy. 

 

Pieter Abbeel: But so, how come you did? 

 

Ayanna Howard: Because I got hoodwinked by a faculty member at USC, the University of Southern California. In fact, you probably know him, Ken Goldberg.
 

Pieter Abbeel: Oh, yes, of course. 

 

Ayanna Howard: He was a bright young assistant professor at USC. And I started doing research with him. He actually had approached me after one of his manipulator classes, like, hey, what do you think about research? I was like, oh, I love research, I’ve been at NASA. And I started working with him, and he was like, why don't you go for the Ph.D.? I was like, yeah, no. And he said, just apply, you can always say no. So that's what happened. I applied because of him. I got accepted. I was doing research because of him. In fact, my research was on manipulation, deformable object manipulation with learning. And then he left me. He actually left USC, even after he convinced me to do the Ph.D. And so I always tell him when I talk to him, I'm like, yeah, you totally hoodwinked me, this young, impressionable kid. I mean, I thank him all the time. But still, that was the reason. 

 

Pieter Abbeel: Yeah, he deserves it. Give him a hard time for that. Now, you were at NASA, at the time, at JPL, right? And I think most people's exposure, or big exposure, to robotics at the time was actually through JPL, because JPL was building the Mars rover, and that was just kind of the really big thing happening in robotics at the time. Can you say a bit more? How was it being at JPL at that time and working on that project? 

 

Ayanna Howard: So, you know, there's a long history of robotics at JPL, and people think about the Mars rovers, the Sojourner, which was the first one that roved. But you know, one of the nice things about working at JPL is that you had a lot of security, so you could just kind of wander as a student, as long as you were an employee. And there was some early work, for example, on basically telemedicine. And of course, it was, you know, when we go into space and we send people, how are we going to do surgery? And so I remember in one of the labs, there was a researcher that showed me, and I kind of was like, oh, how does this work? Because they were always interested in talking about what they did. It was basically a manipulator that was doing eye surgery. So I mean, think about, like, early da Vinci, and this was back in, what was that, like ninety-four? Ninety-five? So there was a lot in robotics at JPL. And this was like nirvana to me, because I wanted to do robotics like that. So the Sojourner and the Mars rovers, those were really the culmination of all the good stuff that was going on in robotics over generations, pretty much, over many, many, many years, even, you know, twenty-ish, thirty years before the world saw what robots were. 

 

Pieter Abbeel: Were you at JPL when the first landings on Mars happened? 

 

Ayanna Howard: Yes, it was in the summer. In fact, it was very seminal because it was July. It was very close to July 4th, if not July 4th, which actually represents a lot, at least in the United States. So I was there that summer, and, you know, I'm going to tell a little story. Sojourner was considered what's called a science mission versus an exploration mission. With the later Mars rovers, there was a strategy of Mars exploration. Sojourner was one of these missions where, hey, let's go, let's test out the technology. Technology demonstration was really the focus, but no one knew if it was going to work or not, right? Because it was the first time, and it was like a technology demonstration, which basically we can decode to lab work for NASA, right? There's deployment, real-world deployment, and then there's the stuff you do in the labs, like, oh, that's so cute. This was NASA's oh, that's so cute kind of thing, but in NASA terminology. But it was exciting because, one, no one was really paying attention. Honestly, the world was not paying attention. We were, because we're like, oh, robotics. But it was successful. And that was really, I think, the seminal kind of trigger, that it was successful. No one was watching. No one really cared. The budget for it was pitiful, I’m just going to tell you. It was like really pitiful. I mean, the scientists and engineers were scraping it together, and yet it was successful. And that really set the trajectory for the Mars exploration strategy and doing missions every 18 months. And, you know, space landers versus spacecraft versus rovers. That was starting the strategy, because, you know, the engineers and scientists, they actually knew what they were talking about. 

 

Pieter Abbeel: Can you say a bit about what you personally were working on there at the time?

 

Ayanna Howard: Yeah. So I was working on a lot. I guess the most exciting one was a project called SmartNav, and I guess it was the most exciting for me because it was the first time that I led a project. So I was the technical manager for this. I had just finished my Ph.D. and had been working at JPL throughout my Ph.D. and my master's, so I'd been doing a bunch of different projects. But for that one, I was leading a task team, and our responsibility was to think about what navigation looks like in the next 10 or 20 years. Sojourner had already landed, was successful, and we were starting to think about the first set of Mars rovers. But it was like, what can you do to do what we call over-the-horizon navigation, where you can't see where you're going? How do you actually do planning? How do you do navigation if you can't see your end goal? You know there's an A and a B; you can see A, but you can't see B. You can only see where you are. And so that was exciting because, one, I could just be creative. There was no answer. I couldn't go look up a book and say, oh, this is how you're supposed to do it, just put in this algorithm, there is the answer. It was really thinking outside the box of how to do it and how to think creatively. And there was competition. So one of the things is, like with anything at NASA, there are different teams that are set up to think about how to achieve the same objective. And so you also know that there's competition: is yours going to be better, or is someone else's? Which is also kind of exciting, to have a friendly competition. So in that regard, I was looking at long-range traversal. My approach was, how do you design the tools to grab human knowledge, expertise, science knowledge, and encode it as expert knowledge on the rover, so that it could learn from humans in this aspect and do what a human would do, or what our scientists would do, once it went to Mars. 

 

Pieter Abbeel: Oh wow. And now, of course, one of the big challenges with Mars is that, well, the communication latency. You can't teleoperate easily. It depends, I guess, on how far away Mars is from Earth. It varies over time, but it's a very significant latency. It's a minimum of many minutes, I believe, right? And so it really has to be autonomous for extended periods of time to do anything useful there. 

 

Ayanna Howard: It does. You know, at the time, it was about a 20-minute delay between sending and receiving a signal. And at 20 minutes, we definitely can't teleoperate at all. So, one, you have to be autonomous. The other thing is you have to do things where you're very, very risk averse, because, you know, it's not like here, where if your car breaks down you can be like, okay, I'm going to call the mechanic. The software download doesn't work in whatever car you have, right? Hey, the software didn't download correctly, the tow truck comes out. You can't do that. And so even the way you think about intelligence and autonomy is always about, okay, rule number one: do not destroy the rover. Right? And then everything else comes after that. 
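For readers who want to sanity-check the delay figures in this exchange: the one-way signal time is just distance divided by the speed of light. A quick Python sketch (the distances are approximate textbook values, not mission data):

```python
# One-way light-time between Earth and Mars at closest and farthest
# approach. Distances are rough published approximations.
C_KM_S = 299_792.458  # speed of light in km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Signal travel time in minutes for a given Earth-Mars distance."""
    return distance_km / C_KM_S / 60.0

closest_km = 54.6e6   # approximate closest approach
farthest_km = 401e6   # approximate farthest separation

print(f"closest: {one_way_delay_minutes(closest_km):.1f} min")    # ~3 min
print(f"farthest: {one_way_delay_minutes(farthest_km):.1f} min")  # ~22 min
```

So a round trip runs from roughly 6 to over 40 minutes depending on where the planets are in their orbits, which is why the rover has to act autonomously between command cycles.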

 

Pieter Abbeel: Yeah. Now, in recent times, it's SpaceX that's catching all the headlines related to Mars right now, with the vision and aspiration to go to Mars. And I think it's also been a bit quiet since NASA's successes in the late 90s, early 2000s, till now, at least in what I see in the press, in terms of NASA's activity in exploring Mars. And I'm curious, I mean, of course, we'll get to that in a moment, you've started doing other things and we'll talk about those. But now that SpaceX is becoming very serious about going to Mars, do you feel any inkling of, you know, getting back into that, and, you know, a change in what you might be working on again? 

 

Ayanna Howard: Yeah, so I left NASA in 2005. I mean, I was still funded by NASA later, but I left in 2005, and I will tell you, I was part of the first crew that left; a number of us left after that, you know, a couple of us that left that year and so on. I went into academia, a lot of us went into academia. Others at the time were going to the stealth startup companies that were starting to do self-driving cars. Later on, the Teslas of the world, the SpaceXs of the world started grabbing, and, I would say, enticing NASA engineers to come work for them. And one of the things is, in one aspect, it's exciting because, as we know, when you have competition, it makes people think a little bit more creatively about how to do things, and at a better price point. The negative is that NASA does things for the good of all people, all humanity, even if it's just science. Whereas if you're a company or corporate, your objective is not that. Your objective is more about profits. And therefore, I worry that we might miss some fundamental discoveries that we still need to make because they're not profit driven. So competition is good, but I think we still have a gap, because there are still some fundamental discoveries that we have to make for the benefit of all humanity that companies are not going to make, just because it's either too long term or there's no profit in achieving those goals. 

 

Pieter Abbeel: Now, maybe there could be a win-win, though, I don't know, where the next-generation JPL rover makes a trip to Mars on a SpaceX rocket ship. And then once it's there, it can go do NASA's exploration, scientific exploration, while maybe SpaceX robots do something else. 

 

Ayanna Howard: That would definitely be a win-win. And I think, you know, I would say that this whole world is still in disruption. We don't know what space exploration is going to look like in the next 10, 20, 30 years. I'm hoping for that; that's the positive. That would be the goal, and that would satisfy a lot of issues. But I think we just don't know.

 

Pieter Abbeel: Can't control it, I guess, until it's happened. Now, interestingly, when I was reading about your history at NASA, one of the articles says that if Ayanna had not joined NASA, she would have likely become a professor. And this is while you were still working at NASA. And I'm reading this and I'm just thinking, oh, this NASA writer didn't do any favors to Ayanna's superiors at JPL, because, essentially, that writer foreshadowed your departure into academia. 

 

Ayanna Howard: I know. So I do sometimes look at some of the old articles from before I went into academia, and there are some where I talk about equity, I talk about bias, in some regards. And I'm like, that was 10, 15 years before I actually did some of these things. But you know, that's why I love talking to creatives, because they see through things like, oh, wait, hold it, we're going to put this, this, and this together. We already know your destiny, even though you do not know it yourself. 

 

Pieter Abbeel: Yeah. So how did that transition happen for you? How did you decide to become a professor? 

 

Ayanna Howard: So it was just like with my Ph.D., it was not intentional. What happened was, we had the shuttle accident. So this was the second one. NASA basically froze research, because there were a lot of questions, congressional hearings, about what is NASA doing, what is the risk? And like with anything, research, like with any type of organization, is usually the first thing to basically go while you figure out the really hard problems. And so what happened is, back then, unlike now, when you have furloughs, you know, you don't even come into work, they didn't have that. So we came in, but all the missions were halted, like you couldn't work on a mission. And so you would go in and be like, okay, so what does that mean? When scientists and engineers are bored, they like to find other things to do. And so really, I was like, okay, what am I supposed to do? I don't know how long this is going to last. Let me explore what's after. And at the time, you couldn't do research elsewhere. I mean, you know, the AT&T Bell Labs, they didn't exist anymore. The only place you could still do research was at a university. So the reason I went into the university was so that I could still do research. It was not about the education. I tell the students this all the time. It was actually not about you. I learned to love students, but my original intention was because I wanted to continue doing research. 

 

Pieter Abbeel: Yeah, you definitely must have learned to love students, given you're a dean now. But I want to stick with your early professor days for a bit. You went to Georgia Tech and you founded the Human Automation Systems Lab. What was the research vision for the lab? 

 

Ayanna Howard: So the research, what I wanted to focus on, was still science-driven robotics. It was working with scientists and figuring out how do I grab their knowledge base and put it on rovers and into algorithms. At the time, NASA had just been talking about, basically, navigating and exploring the Moon. So my original objective was, oh, we're going to design the research for the Moon. And it made perfect sense. And then I found out that NASA was not necessarily going to give large missions to academics. I will tell you, I sure did ask. I asked the program managers, come on, it’s Georgia Tech. But yeah, that didn't fly. But what NASA was funding was science at analog sites on Earth. And so what that meant was I had to go around talking to scientists to say, you know, do you have any fundamental scientific problems here on Earth that you could think about that require you to have rovers? And I met one scientist, a climatologist, who was looking at global warming and climate change, and his quote-unquote planet was the Arctic. And what he wanted to do was grab science data from these, you know, glacier environments so that he could populate his model and understand how the ice shelf was melting and things like that. I was like, wait, wait, you need science data? You need science data that's temporal, as well as from different locations of a hazardous environment where people don't necessarily go? That sounds awfully like what I do, except the planet is Earth versus Mars. And so that's what I started doing, and I submitted a proposal to NASA, basically the Earth Science Division, and received funding to design what's called the SnoMotes, which were science-driven explorers for glacier environments. 

 

Pieter Abbeel: Oh wow. And so were these autonomous robots? 

 

Ayanna Howard: These were autonomous. And what's nice is, because it was Earth, I could do all the risky things. When you're doing Mars exploration, you're very, very intolerant of risk. Whereas here on Earth, it was like, I could do fancy stuff. And so doing things like market-based auctions, which I could never have done when I was at NASA. You know, basically non-deterministic ways of intelligent collaboration. I was designing multi-agent systems, so I was able to get into multi-agent systems where the human scientists at the home base were part of the agents. It was so fun because I could be creative and explore all these new kinds of algorithms. And you know, there, if it failed, it really didn't matter, because there was going to be a human that could go and fetch the rover. It might take a couple of hours, but they could fetch the rover and bring it back. Not like with Mars. 
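As a concrete illustration of the market-based auctions mentioned here: in the simplest single-round form, each robot bids its cost to perform a task (below, straight-line travel distance) and the cheapest bidder wins. The robot names, site names, and coordinates are invented for this sketch, not from the SnoMotes project:

```python
import math

# Toy single-round auction for multi-robot task allocation.
robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}                   # robot positions
sites = {"ice_core": (2.0, 1.0), "weather_station": (9.0, 3.0)}  # science sites

def bid(robot_pos, site_pos):
    """A robot's bid is its straight-line travel cost; lower wins."""
    return math.dist(robot_pos, site_pos)

assignments = {}
for site, s_pos in sites.items():
    # Each robot bids on the site; award it to the lowest bidder.
    winner = min(robots, key=lambda r: bid(robots[r], s_pos))
    assignments[site] = winner

print(assignments)  # {'ice_core': 'r1', 'weather_station': 'r2'}
```

Real systems iterate: robots re-bid as earlier wins change their marginal costs, which is one source of the non-deterministic behavior she describes.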

 

Pieter Abbeel: Now, I imagine you'd first test things in simulation, so failures would still be pretty far apart in practice. 

 

Ayanna Howard: Yes, I will tell you, we designed pretty realistic simulations. This was before the game engines of today. So we actually designed some pretty nice physics-based simulation, where we modeled the interactions of the ground, the texture, the forces. I mean, it was pretty nice. But I will tell you, simulations are really only good for testing the logic; they didn’t work as well in terms of testing the actual physical interaction between the rover and the environment. And I discovered this during one of our first trials. We went to one of the glaciers and pretty much everything failed. I was like, this is a really expensive trip, and we spent many nights. You know, if you think about the grad student nights where before a competition you're spending twenty-four hours straight, we were doing that, because it's like, we can't go home. We have to get at least one good field trial. 

 

Pieter Abbeel: Yes, you’re saying you went to the glaciers. Where were they? 

 

Ayanna Howard: So there's a couple in Alaska. There's actually one in Colorado. And then our final one was in Antarctica, but my students went to that one. I physically did not go to the one in Antarctica, my students did because those were two month adventures. And as a faculty member, you can't really take that much time away. 

 

Pieter Abbeel: But that's pretty amazing, right? You ultimately, for this project, you got, how many robots did you have in Antarctica? 

 

Ayanna Howard: So we built out seven, but our crew was four. So we had four agents. 

 

Pieter Abbeel: And at that point were these robots already helping with scientific research? Or was it still robotics research? 

 

Ayanna Howard: It was a combination. So we published in as many, I would say, robotics and AI journals as we did in the scientific journals of how you grab data. And of course, those were led by the scientists, because we were collecting basically the data he needed, whether it was barometric pressure, temperature, and models, so that he could basically talk about how you integrate disparate measurements that have temporal differences into his model. And so that was quite interesting, because it's the same field trial, and I'm like, oh, there are two different papers with two different stories. I also learned at that time that as researchers, we can work on the same project but have a different lens through which we actually view it. 

 

Pieter Abbeel: And if you zoom out a bit from the specific measurements to the research the scientists were trying to make progress on, could you say a bit about that? 

 

Ayanna Howard: Yeah. So the one that started it off, he was studying how the ice melts, primarily in Antarctica. And he had these really nice analytic models that were based on the data. He was able to actually, I would say, calibrate some of his assumptions and his hypotheses based on real-time data. What happens is, in these regions, they have these sensors that are already placed. They’re static. And so by comparing what the static measurements were giving with the temporal in-between, he was actually able to do a little bit of a calibration of what it means when you have low-cost sensors, because our sensors were fairly low cost compared to these very highly sensitive instruments that were static, and how do you change your model and adapt your model to deal with that. 

 

Pieter Abbeel: That's really exciting. And in fact, AI research for climate is becoming a very big topic. You were ahead of the game many, many years ago, but right now we see a very big push towards more work in that direction. Where can AI, where can robotics, help? And I'm curious, are there some things today that you're particularly excited about that can be done now? 

 

Ayanna Howard: So, at least in the climate area, I think this element of using AI and data to do a little bit better prediction, I think, is key. But not just in the glacial environments; a lot with respect to oceans as well. I'm most excited about some of the work around oceans and the water, because they're all interconnected. All of it is interconnected, and looking even at the patterns of weather around the large bodies gives you a good indication of what's going on even at the poles. So those are some exciting things, and it also includes rovers that are basically underwater devices and surface-based ocean rover devices that are collecting this data. So it still includes robots, AI, data, and science. 

 

Pieter Abbeel: Now, oceans are really, really large, right? And especially very, very deep, which requires a different kind of robot design to be able to get down there, I imagine. Do you think it's important to get all the way to the bottom and collect data at the bottom to make progress on these projects? 

 

Ayanna Howard: So I think that the questions you answer by going to the deepest parts are different than the ones dealing with climate. But I think there are some things we could answer if we were able to go that deep down to the sea floor, and those are much more around the geological phenomena that we have. And understanding even earthquakes in California, right? Like, if you could actually do some mapping of seismic activity in the deep ocean near Tokyo or Japan, I think we could actually have a lot more predictive power for natural disasters, honestly. 

 

Pieter Abbeel: Well, that would be great to have. Do you see any prospects of us getting there and getting those measurements? 

 

Ayanna Howard: Well, so this is the problem: you know, the things that are really important to humanity, no one wants to actually pay for except the government. And then if you think about what the priorities are, unfortunately, science, depending on what kind of science it is, is a lot of times underfunded. And therefore some of these questions that require investment in the science are not necessarily pursued, because no one wants to pay for it. 

 

Pieter Abbeel: Yeah. Well, I share the struggle. Fundraising is always part of the activities as a faculty member. It's never, never off the radar. 

 

Ayanna Howard: I know it doesn't matter what you're doing. 

 

Pieter Abbeel: So from Georgia Tech, you actually recently transitioned to Ohio State and you are the dean of the College of Engineering there. A lot of academics shy away from becoming deans, as I'm sure you know, but you must have had a clear motivation to take this on, this much bigger role and responsibility. What motivated you and what is really driving you in this role now? 

 

Ayanna Howard: So like the Bionic Woman, I want to really change the world and save the world, honestly. And there is something magical about being an academic at such a large land-grant institution as Ohio State. So one of the things that I worry about nowadays is that students don't necessarily see themselves as engineers or even really computer scientists. They want jobs, but they really don't see themselves fundamentally as an engineer or computer scientist. But we need more. We have many gaps; you know, you can look at any job, any company, and there's always a deficit, they can't fill the jobs that are already available today. And so I think there are some fundamental things that we can change around how we make education much more relevant in some cases, much more accessible to student needs, and more creative. And so that's why I'm making this my research problem: how do we really change education so that we can really address the needs of the world and society through technology, through innovation.

 

Pieter Abbeel: Now you do all the work at the university, but in addition, you have a venture, a nonprofit called Zyrobotics. What does Zyrobotics do?

 

Ayanna Howard: So Zyrobotics is a nonprofit. It's finished with the design work, and now it really deploys and provides services for children with special needs around STEM education and therapy. It has a host of different apps and devices that enable children. The focus was on children with mobility impairments at first, but now it's children who are trying to achieve certain developmental milestones: access to infrastructure through software apps as well as the devices. 

 

Pieter Abbeel: And now, if somebody wants to access what Zyrobotics is providing, where do they go and what can they get access to? 

 

Ayanna Howard: Yeah. So right now, at Zyrobotics.com, there's a link to all of the apps that are available on iTunes and Google Play. And because we converted to a nonprofit about a year and a half ago, everything is free for consumers. So for children and for parents, it's all downloaded for free, which is nice. If you want specific devices, we're basically doing it as a donation: an agency, whether it's a local charity organization, basically purchases the device to give to whichever child is in need. And so that's the path, the way we're funding device purchases. 

 

Pieter Abbeel: And when we talk about devices. What kind of devices are you talking about? 

 

Ayanna Howard: Yeah. So we have two primary devices. The one that I like the most is called Zumo. It's basically a stuffed animal, but its buttons correlate to different gestures on the tablet, based on whatever the game is. And so if a child has limited mobility in terms of being able to control a gesture from A to B, like a swipe or a pinch on a tablet, the buttons are coded based on whatever the ability of the child is, to have that as a function. So think of it as shortcut keys, but in a playful device that kids want to interact with and play with. 

 

Pieter Abbeel: And this is a device that stands on its own? Or is it something you would wrap around, let's say an iPad and then you're able to use the iPad? 

 

Ayanna Howard: Yeah, so it needs to connect with a tablet, whether it's an iOS, basically an Apple, device or an Android device. That's where the software, the apps, are resident. 

 

Pieter Abbeel: So it's a physical accessibility device that's really fun for kids to use. 

 

Ayanna Howard: Correct. Correct. 

 

Pieter Abbeel: And you said that's one of your favorite ones. What's the other one? 

 

Ayanna Howard: Yeah. So the other one we have is much more static. It allows you to connect any of your own devices. So if you have a wheelchair, for example, which has switches in, say, the headrest, or you have a joystick, there is a standard interface: you can basically unplug your device and connect it into this. It's called the tablet access device, and it does the same things in terms of interacting with the apps on the tablet devices. 

 

Pieter Abbeel: Very cool. And so I imagine that aside from going to the website to access the free software and stories, and possibly getting a donated device, maybe people who want to contribute can also go to the website to donate and help out that way. 

 

Ayanna Howard: They could, you know, that's a good point. We need to put a Donate Now button because most of our donations are from organizations that have a need. You know, they're like, Oh, we need to provide this for a child. But that is a good point. 

 

Pieter Abbeel: It'll be there, maybe before the release of this episode. Right? Now, switching topics again, because you're doing so many things: in 2019, you authored an audiobook, Sex, Race, and Robots: How to Be Human in the Age of AI. What inspired you? And can you tell us a bit more about the book? The title definitely attracts attention. 

 

Ayanna Howard: I know, it's racy, isn't it? So, you know, I was teaching a number of courses on ethics, ethical AI, responsibility and things like that. And there were a couple of books coming out that we were using, Weapons of Math Destruction and things like that. But I felt that there was a gap in, well, what do you do to fix this? We know all the stories, and we can Bing or Google, you know, there's always a story of the week about how AI is biased, but there's very rarely something that says, well, how do we fix it? Either as a developer or as the person who is, you know, the quote-unquote victim of this. Is there something that you can do? And so really, the goal was to talk about these things that are out there, but also how we can be empowered to do stuff as well. We do not have to be, you know, basically passive sheep being attacked by the AI wolf. We actually can do something ourselves. So that was the original motivation. But then, working with the editor, they also wanted me to weave in my own story, all the things that I've done and all the things that I've seen, you know, from very early on, to really provide some grounding of why these things are important. And so there are stories throughout it that, unfortunately, I will say, link to some of the things in AI, because being one of the oldest Black females in this field, I have a lot of stories. And they felt that sharing those stories of why this is so important was also really important. So yeah, there's a chapter on sex, on sexual identity as well as the aspect of companionship: what's there, what's the problem, and what's the bias. Of course, race and ethnicity in terms of language models, in terms of facial recognition, that's there. But yeah, it's all there. 

 

Pieter Abbeel: It's all there. So, I mean, we can't cover it all in this podcast. On that one topic, though, I am very curious to ask at least one follow-up question in this conversation. As you think about, you know, what we can do to quickly reduce, and hopefully bring to zero, the kind of incidents you're alluding to, and make the AI world and robotics world more inclusive, what are some suggestions you have for us? 

 

Ayanna Howard: Yeah. So, one, I will say there is no way we can make things zero. I think that's a fallacy, and anyone who says that clearly does not work in this space. And it's because we, as people, are biased. It is actually the way we're designed. If we weren't biased, we'd all be dead, right? There are things that we are conditioned to do as our survival mechanism. And therefore, when we look at things, when we look at data, when we code, we are coding from our lived experience. And that's a fact. So we're never going to get to zero, because we are human. What we can do, though, is mitigate it. Some of this, I believe, is, when thinking about designing new software or new algorithms, making sure your team is diverse or has a diverse perspective. Because what happens is, maybe I have a lens, and I'm talking and coding and I see the data one way, but you're going to see it differently. And when you mention your way and I mention my way, oh, and guess what? The intersection of us means that we're also going to find something we both didn't even think about, because now we're realizing that we're looking at the data or the algorithms in a different way. So that's one: having a requirement that anything we produce has different perspectives on the team developing, designing, and thinking about coding the data itself, the algorithms, and the output. That's an easy solution. 

 

Pieter Abbeel: You're saying you don't even need to fully solve it for every single person, as long as you make every person part of a nice, diverse team. A team producing things that are good is going to be an easier goal to achieve than expecting every individual to be able to pull it off on their own.

 

Ayanna Howard: Right. And so that's one way, and that's on the developer side. The other side is providing the ability for the community to give their own feedback. You know, in research, especially in my home of human-computer interaction, we do participatory design all the time, community-based engagement all the time. I think as a community, we also have to give the community, the non-developers, the ability to provide feedback, provide input, during the development process, but also even after, when things don't work, versus, you know, coming up with a tweet storm. That's not doing anyone any good. That's not solving the problem. How do we provide the ability to have a voice, a voice of correction, versus just a voice of frustration or anguish or anger because it didn't work? 

 

Pieter Abbeel: Is it possible that part of this is hard, also because everything's moving so fast and people are trying to rush things and be first because there is so much going on? How do we balance that part? 

 

Ayanna Howard: Some of this has to do with companies being able to say, yeah, we are going to pause a little bit. We are going to, you know, not release the alpha version. We are going to wait and actually get customer feedback before we release it, even though we're worried that, you know, the company next door is going to release it first. But that is something that companies need to work on. And if they don't, I've seen so many things in terms of regulations, that's coming. If you actually go to the government websites, at least in the US: if the companies don't get together and start doing a little bit of self-reflection, there are going to be regulations imposed, guaranteed. That's going to be a fact. I think we're still in this time where we can possibly do something good at the company level, but I'm not in a company, so.  

 

Pieter Abbeel: But I like this model: as a company, build the reputation of doing it right, rather than doing it first or fastest. And of course, everybody has to value that; otherwise companies won't want to do it, because if people don't buy it, it doesn't do the company any, I guess, dollar-profit good. But if people also buy into it, which they probably would, it seems like a great path forward. And I like your notion: if companies don't do it themselves, there's a big stick. 

 

Ayanna Howard: There's a huge stick. You know, I also give an example. I think about the green initiative, I don't know if you recall. Green is now a thing, but I remember there were companies thinking about sustainability from very early on, thinking about coffee growers and just fundamental things. And those companies survived, right? They had a market. They didn't grow to, you know, billions and billions, but they figured out, this is what we're thinking about, this is what it is to be sustainable, to think about responsibility. They figured out their messaging, figured out their customers, and they did right. And now look at it: there are rules. I mean, the SEC is talking about, wait, we need a sustainability plan or the investors are going to start divesting. Again, the regulations will always come. It's just a matter of when and how big the stick is. 

 

Pieter Abbeel: Yeah. Switching to a very different topic. I noticed on Twitter, you do post about the Buckeyes, every now and then. Are you a big Buckeyes football fan these days? 

 

Ayanna Howard: I am a huge Buckeye football fan. Go Bucks! 

 

Pieter Abbeel: Okay. So are you going to some of the games? 

 

Ayanna Howard: I go to every single home game and we're headed to the Rose Bowl for the January 1st game. So that's exciting. 

 

Pieter Abbeel: Yeah. Good luck with that. I mean, yeah, it's been a good season so far. And you can top it off. 

 

Ayanna Howard: And next season, we will win. 

 

Pieter Abbeel: Sounds good. I don't think at Berkeley we're close to winning, so I don't think that makes anything worse for us. 

 

Ayanna Howard: No, although Cal is, I mean, they have decent, decent sports.  

 

Pieter Abbeel: But I don't think we're close to making the college football playoffs next year, or anytime soon. So, AI has been transforming so many things in our lives already. When you look ahead, Ayanna, what do you see as some of the most important things that will happen in our near future? And where are you personally looking to do your own research to contribute to that? 

 

Ayanna Howard: Yeah. So, you know, when I think about overarching AI, honestly, I'm pretty excited. Right now it really is the connectionist viewpoint of AI that has made the most strides recently, I think. But now conversations about symbolic AI are starting to pop up. And what I'm really excited about is figuring out how we combine the two. I really think that in order to take AI to the next level, we have to figure out how to combine the advantages and skills of both intellectual camps into one. I'm actually excited because I'm seeing conversations on how to do this. So that's my prediction: we will figure that out, and it will push us to the next level. And what this means, why I say push us to the next level, is better, quote-unquote better, AI for people. Things that are useful in our everyday lives and can be deployed fairly easily. And I'm not going to say, you know, generalized AI, but specialized AI that doesn't take months or years to figure out. Like, I need a new application, I can figure it out very quickly, put the pieces together, deploy it, and it works. That's the next stage. We're not there yet. So for my own research, what's exciting is to think about what this means for people when they're trusting AI, in their interactions and their behaviors. And then what does it also mean if we're thinking about what they used to call the digital divide: is AI going to expand the divide between the haves and the have-nots, in terms of socioeconomics, in terms of national camps? And so in my research, we do look at this element of over-trust in AI and how we mitigate that, but also enhance trust when it matters. I'm in healthcare, in healthcare robotics and AI, and if my AI is 100 percent right, I want people to follow the guidance of the system. I want them to be compliant.
I don't want it to be like, yeah, the AI told me to take my medication, but I don't trust that, so I'm not doing it. Right? That defeats the purpose. But I also want people to question it when it is wrong, because as we know, even if it's accurate 99 percent of the time, there's that one percent of the time when it's not correct. I want people to say, wait, hold it, that just seems weird. And what we see right now, at least in my own research, is people don't question. They don't question that one percent. And so we're doing a lot of things around looking at trust, looking at over-trust, looking at deception. You know, these chatbots, they are deceiving. Let's not fake it. I mean, they are deceiving us because they understand our human norms. They understand our values. They understand what triggers us. And so they kind of push us in one way or another. I mean, that is really a form of a white lie. And so really understanding what that means, and when it's allowable and when it's not. Those are the things I'm excited about in terms of the next stage of my own research. 

 

Pieter Abbeel: Well, I would say you make it even harder than just studying AI, because you also have to study humans in the way you line up your research. 

 

Ayanna Howard: I do. But humans are fascinating. The AI part is easy. It's the humans that are hard. 

 

Pieter Abbeel: Humans are very hard to understand, that's for sure. Now, talking about humans: let's say some younger students, maybe still in high school or even elementary school, want to get into AI and robotics. Do you have any suggestions for them? 

 

Ayanna Howard: Yeah. So, you know, at this point, it's getting involved in coding and computer science. At the elementary level, that's things such as Scratch; at the high school level, maybe Code.org or some other organization like Black Girls Code. Really getting involved in coding. And what we're starting to see, at least in the AI world, is there's now curriculum being developed. AI4ALL is an example; it's basically providing some of the modules and curriculum around AI. I see that continuing to accelerate. In Georgia, for example, they're looking at an AI curriculum that can be adopted at the high school level. We're going to start seeing that in a lot of states in the United States as well. 

 

Pieter Abbeel: Well, it's been wonderful having you on. Thank you so much. 


Ayanna Howard: This has been great. Thank you for having me.
