Clement Delangue on The Robot Brains Season 2 Episode 21

 

Pieter Abbeel

Hugging Face was originally created to be an artificial bestie that you could chat with when your real friends weren't available. Six years later, Hugging Face is the leading natural language processing, or NLP, startup, with more than a thousand companies using Hugging Face in production, including Bing, Apple and Monzo. Hugging Face has fostered a huge open source community that is an enticing hub for anyone working in deep learning. And while Hugging Face was initially mainly aimed at NLP, as of recently it's expanding to machine learning more generally, including computer vision and reinforcement learning. Talk about growing and expanding: Hugging Face just recently announced they raised a $100 million Series C from, among others, Sequoia and Coatue. I'm so fortunate to have with me here today Clement Delangue, the CEO of Hugging Face, to talk about the company's mission, its models and his views on the overall AI landscape. Welcome to the show, Clement. It's so wonderful to have you here with us. 

 

Clement Delangue 

So nice. Thanks so much for the invitation. Thanks a lot. 

 

Pieter Abbeel

Yeah, it's so nice to meet you, because actually, many guests I have met many times before, but in our case, it's the first time we've met. And I'm so glad Richard Socher, our mutual friend, was able to connect us. 

 

Clement Delangue 

Yeah, it's exciting. I'm looking forward to the conversation. 

 

Pieter Abbeel

Maybe we'll start at a very high level. I know Hugging Face is a community favorite and a lot of people know so much about Hugging Face already, but just, you know, to get started at the highest level nevertheless: what is Hugging Face? 

 

Clement Delangue 

Yeah. So at Hugging Face, we believe that machine learning is becoming the new way of building technology. Right? It's like software 1.0 versus software 2.0. And we've been working in this transition from software 1.0 to software 2.0 to become kind of like the most popular platform for companies and researchers to host their machine learning artifacts, which are models and datasets, to share them between team members, to collaborate on them, to evaluate them, and then ultimately to use them in production. So a little bit like what GitHub has been for software, right? Being this platform where people who code collaborate on code, we've become a similar platform but for machine learning. 

 

Pieter Abbeel

And now of course, machine learning gets used in many forms. I mean, some people use APIs that just serve models. Other people want to use the actual trained model itself and download the model. Other people maybe just want the data the model was trained on. Where is Hugging Face in that spectrum? 

 

Clement Delangue 

I mean, the interesting thing that we've learned at Hugging Face is that the layer of abstraction you want to take depends not only on the type of company, but also on where you are in your machine learning lifecycle. In the sense that when you start building a new feature or a new workflow or a new product, you want to go for maybe the simplest level of abstraction for you. So maybe you're going to start with an API, or even a demo. For example, on the Hugging Face platform, on all the models, you have a widget to try and test the model right away without writing a single line of code. And that's really good for the start of your project, when you want to test a new model, test a new use case, test a new feature. And then progressively, as you mature in your development of machine learning features, you want to have more control and you want to invest more resources, for example, to optimize for inference, scale or latency on your own infrastructure. So that's when you progressively move to more extensible parts of our stack. And sometimes you can use our open source libraries, which have been amongst the most popular libraries out there. So what we're seeing from customers, from companies, from users, is that they usually move through different layers of abstraction depending on where they are in their machine learning journey, in a way. 
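The progression Clement describes, from a no-code widget to a one-liner to full control, can be sketched with the open source transformers library. This is an illustrative sketch, not something discussed in the conversation; it assumes transformers and PyTorch are installed, and the first call downloads a default checkpoint:

```python
from transformers import pipeline

# Simplest layer of abstraction: a task-level pipeline with a default model.
classifier = pipeline("sentiment-analysis")
result = classifier("Hugging Face makes this easy")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

# More control: pick a specific checkpoint and handle tokenization yourself,
# which is the level you'd work at to optimize inference on your own infra.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
inputs = tokenizer("Hugging Face makes this easy", return_tensors="pt")
logits = model(**inputs).logits  # raw scores you can post-process or deploy
```

The same model backs both levels; the choice is how much of the stack you take over yourself.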

 

Pieter Abbeel

Now, one thing that I was curious about with open source companies like Hugging Face, is that you're hosting all these open source models, which means that somebody can just come in and download a model and start using it, right? Which is absolutely beautiful. And you can actually contribute models and see your models being used by others and so forth. But what does that mean for a company? If you are hosting these models, people come and take them. Where are you getting paid? 

 

Clement Delangue 

Yeah. So what's interesting for us is that if you look at the market, at how startups have been built, the open source model for startups has definitely been validated in the last decade. If you look at very successful companies, you can see Mongo, you can see Elastic, you can see Confluent, which has been described by Andreessen Horowitz as the fastest growing company in terms of revenue ever built on an open source project. The thinking around it is that by creating a lot of value for the ecosystem with open source, you can get a level of adoption that is insane. For example, for Hugging Face, we have over 10,000 companies using us now, who have shared over a hundred thousand models, half of them public for everyone to use. And so you get a really unprecedented kind of adoption. And then out of this huge adoption, there's always a fraction of these companies using it that are willing to pay. Sometimes it's for additional enterprise features, right? For the larger enterprise companies that are using your platform. Sometimes it's because they're using your platform so heavily and they're so dependent on your platform that they have some specific needs. And so what we learned from these successful open source companies is that it's definitely possible for a company to have a very popular open source project and generate enough revenue at the same time to be sustainable and very successful. And it's even more true for a case like ours, because we believe that solving machine learning and democratizing machine learning is something we can't do on our own as a closed source individual company. We want to take an approach where we are very collaborative and open, and everyone can contribute with us to achieve this massive milestone for humanity, which is the democratization of machine learning. 

 

Pieter Abbeel

And it's certainly wonderful for all machine learning practitioners to have such a hub where you can go find all the models and you don't necessarily have to train all the models yourself, which can be quite time consuming and so forth. There are also, as you said, companies using Hugging Face. What kind of things are they using Hugging Face for? And would a consumer, maybe somebody not working in AI themselves, run into applications that they work with that are, under the hood, actually powered by Hugging Face? 

 

Clement Delangue 

Yeah, it would actually be hard today for most people listening to this podcast to go one day without using some sort of feature that is powered by Hugging Face, one way or another. Just because machine learning has really made its way basically everywhere. It's easy to miss it if you're not paying attention, but it's been very impressive for the past few years. So some of the biggest use cases that everyone is using are things like search. If you're using search now, it's heavily machine learning powered. Bing, for example, is a good example of a company that is using Hugging Face to make search better. Or if you're using autocomplete, for example, like on LinkedIn when you have quick replies when you're messaging, or when you go to Gmail and it completes your sentence. These are use cases that people run into all the time. When you're going on social networks, a lot of the news feed ranking, a lot of the moderation and information extraction is powered by transformers. And now, more and more, we're starting to see use cases outside of text. For audio, for example, when you use a video call platform that is going to transcribe your call or extract information from your call. For computer vision, for example, we have a segmentation company using us to do video segmentation and image segmentation for automatic annotation that is then used for other use cases. So it's very, very widespread now across a lot of different use cases. What we're seeing in the industry and with companies is that machine learning is definitely becoming the default way of building technology. Right? For the best in class companies, when they build a new feature, a new product, a new workflow, it's almost like they start with machine learning now. And if it doesn't work, they fall back to the old school way of doing software: writing a million lines of code, very rule based. 
So it's been very exciting to see. 

 

Pieter Abbeel

Now, I'd actually claim that on the podcast we haven't really had a guest who's done nearly as much work in natural language processing as you and Hugging Face have. And so I'd love to take the opportunity to zoom out from Hugging Face all the way to NLP. The last five years in natural language processing have been just crazy in terms of rate of progress, and you've been at the center of this with Hugging Face. Now, I realize this is expanding much beyond Hugging Face at this point, but you were at the center of this when it started. And I'm really curious how you experienced this crazy NLP progression over the last five years, and where do you see this going? 

 

Clement Delangue 

Yeah, you're totally right. It's been really insane in NLP. And it all basically started with the paper "Attention Is All You Need" in 2017, and then BERT, which came out a year later in 2018. What this new generation of architectures, the transformer models, did is that they basically started to beat the state of the art on every single NLP task out there and every single benchmark. And you started to see more and more pre-trained models appear. It started with BERT, but then you got GPT, you got RoBERTa, you got T5. This whole new generation of models not only proved to be more accurate for NLP, but also proved to be very easy for companies to use. And so it happened that just a few weeks after these models were released, companies, thanks to Transformers, our open source library, and thanks to the Hugging Face Hub, started to use them in production for their use cases, like the ones I talked about, right? For search, for information extraction, for text classification. And really quickly, you saw this loop: the models getting better accuracy, companies using them and seeing value from them, and then investing more into machine learning and NLP to get better models. It created this very positive loop that completely changed the NLP landscape and turned it from a niche machine learning topic maybe three or four years ago into arguably the biggest machine learning domain today. To the point that now transformer models, which we've seen changing the NLP landscape, are starting to make their way into other domains: into speech, into vision, into reinforcement learning and much more. So it's been a very, very fun moment to see. And it's been driven mostly, from what we've seen, by open science and open source, right? Scientists from all the best research labs sharing research papers, and sharing their models openly with the world.
And I say that because today there is this trend where some labs are starting to do less and less open science and open source. So I want to remind everyone: if we want to keep progressing as fast in the future, we have to understand that this progress has been brought about by open source and open science. 
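The core operation of the "Attention Is All You Need" paper Clement mentions can be sketched in a few lines of NumPy. This is a simplified, single-head version for illustration, not production code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: each query attends over all keys."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V, weights                      # weighted sum of values

# Three token embeddings of dimension 4, attending over themselves.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(x, x, x)
print(out.shape)        # (3, 4): one output vector per token
print(w.sum(axis=-1))   # each row of attention weights sums to 1
```

Stacking layers of this operation, with learned projections for Q, K and V, is what lets the same architecture serve text, speech and vision.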

 

Pieter Abbeel

Absolutely. It always amazes me how open science is, especially in machine learning; there is so much sharing. Even though you're right that there is now a bit of a trend of people not always open sourcing everything anymore. Which I guess ties into a related trend: training these models sometimes costs millions of dollars, right? To process all the data and train your model, millions of dollars, and then just put it out there. I guess not everybody feels like, after they've paid a few million dollars to train their model, just putting it out there for everybody else to have available for free. 

 

Clement Delangue 

Yeah. I mean, the beauty of these models is that they can be a bit heavy to train, but once they've been trained and you get a pre-trained model, this model can be used in a lot of different use cases without any retraining. And when you want to account for things like new tasks or new domains or even sometimes new languages, what is called fine-tuning, adapting this model to a new task, a new domain, a new language, is much, much cheaper. That's the beauty of the underlying technique, which is called transfer learning. You basically go from one very large training to very small fine-tunings. So what's really interesting is that even though the initial training is costly, if you look at the ecosystem and the usage in general, it's actually pretty efficient for everyone to use these models compared to previous generations, where you had to do a lot of training for each task, each domain, each language, each use case. So that's the first point. And then the second point: I think it's worth remembering that most of these very, very large trainings are done by big tech or very large companies with very big funding or revenue streams. And so even a multi-million dollar training for them is very much a drop in the bucket. We all know how expensive researchers and team members in machine learning are today. So at the end of the day, if you take all of that into account, I don't think the economics are much different than before. Because, at the end of the day, the benefits of sharing, of open sourcing your models, usually end up much, much greater. By sharing your models openly, you get more visibility for them. You get the whole ecosystem, which can work on questions like: how do we make sure we have the right level of control for accountability? How can we build systems to mitigate the biases of these models?
You get better researchers, because researchers want to work at companies where they can contribute to the ecosystem and not just to your single company. These are the logics that apply to science in general; that's why science is based around publishing papers, and machine learning being a science-driven topic, the same logic applies. Right? So there's a gap between the short term financial thinking about models and not open sourcing them, versus the more sustainable long term thinking around that. 
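The economics Clement describes, one expensive pretraining followed by many cheap fine-tunes, can be illustrated with a toy sketch. Here the "pretrained backbone" is just a frozen random projection and only a tiny linear head gets trained; all the numbers and the model are made-up assumptions, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a "pretrained backbone": a projection learned once, at great
# cost, and then frozen. Fine-tuning never touches these weights.
W_backbone = rng.normal(size=(16, 8))

def features(x):
    return np.tanh(x @ W_backbone)

# Tiny labeled dataset for a downstream task.
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fine-tuning: logistic-regression training of a small head
# (9 parameters) on top of the frozen features (128 parameters).
F = features(X)  # computed once; the backbone stays fixed
w, b = np.zeros(8), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(F @ w + b)))  # sigmoid head
    grad = p - y                        # logistic-loss gradient
    w -= 0.1 * F.T @ grad / len(X)
    b -= 0.1 * grad.mean()

acc = (((F @ w + b) > 0) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

The point of the sketch is the parameter count: the expensive part is trained once and reused, while each new task only pays for the small head, which is the shape of the cost curve transfer learning gives the whole ecosystem.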

 

Pieter Abbeel

I like the way you think about this. Well, one thing that's been on my mind for a long time, ever since I heard the story, is that Hugging Face didn't initially start as an open source hub for machine learning. It started as a chatbot, as a lot of people know. So I'm really curious: take us back to 2016, when you decided to start this, at the time, chatbot company. What was going through your mind? And how did that change into what it became today? 

 

Clement Delangue 

Yeah, when Julien Chaumond, Thomas Wolf and I started Hugging Face, we were really excited about machine learning. We were like, okay, this is the future. This is super exciting. That's what we want to work on. And we almost started with the end goal, like, okay, what's the most challenging thing we could be working on right now in machine learning? And we ended up on this idea of building a fun, entertaining, open domain conversational AI. What everyone has seen in sci-fi, you know, like Her. This dream of having a conversational AI you could talk to about the weather, about your friends, about romance, about the latest sports game and things like that. And we were like, oh, that's really challenging. It looks like nobody's really managing to do it well. Siri, Alexa, they're very transactional, very productivity driven. Not really fun, not really entertaining. All right, let's start with that. And that's what we did. We had a lot of fun for almost two years. And we were kind of lucky, I think, as startups sometimes are, in the fact that to do what we wanted to do initially, to do open domain conversational AI well, we had to think about how to do a lot of different machine learning tasks. We wanted to be able to extract information from text, obviously. We wanted to understand the intent of the text. We wanted to understand the sentiment. We wanted to be able to generate text to reply. We wanted to receive images in the conversation, and so to detect objects in images. And we wanted to handle so many different topics that we needed a lot of different datasets, right? Like a sports dataset to talk about sports, or a weather dataset to talk about weather. And so we ended up building this platform to handle a lot of different models and a lot of different datasets. 
And almost randomly, because we always had this vision of contributing to the community, we started open sourcing parts of that. And right from the get go, the interest from people was kind of insane. People started to contribute. Open source contributors came in. Companies started using it. And so we got this huge adoption validation, which led us to think: wow, there's so much value that we're creating here. We don't really know what it is exactly yet, but there's so much interest from people that there must be something. And so in a matter of a couple of months, we basically went from this initial thing we were working on to focusing on building a machine learning platform. And we've never looked back. It was the best decision of the company's life to make this transition, and we're super happy about it. 

 

Pieter Abbeel

That's so interesting, because it could also be that, thanks to the Hugging Face platform, it becomes a lot easier for you or anybody else in the future to build that chatbot that you were initially targeting, right? 

 

Clement Delangue 

Yeah. And we do have a lot of conversational AI companies using the Hugging Face platform now. Siri and Alexa are using us, for example. So yeah, maybe at some point someone is going to be able to build our initial vision using the Hugging Face platform, and that's going to be a fun full circle for sure. 

 

Pieter Abbeel

And at the core, of course, as I understand it, part of where it started was the Hugging Face Transformers library. I'm curious: in 2017, the transformer paper, "Attention Is All You Need," came out. Arguably the biggest change in AI since the AlexNet ImageNet breakthrough from Geoff Hinton and his students. How did you experience that? How did you see that this was actually really important, and then decide to implement an open source version?

 

Clement Delangue 

So we were already excited by some things that built the foundations for that, and so we were following really closely what was happening. For example, Jeremy Howard and Sebastian Ruder were already doing some fun stuff on representation learning. So we were really close to what was happening. And then when we saw the paper, when we started playing with BERT, we were really, really mind blown, and started building thanks to that. And really quickly, just the fact that people started adopting it gave us even more confidence that it could be massively impactful. 

 

Pieter Abbeel

Now, one of the things that really fascinates me about transformer models, and I think you see this closer up, so I'm very curious about your perception, is that they seem more general than previous models. And we know that the brain is fairly general. If somebody, let's say, is blind, they might use the part of the brain that's usually used for seeing for other things. There's this more general purpose fabric somehow in the brain, definitely more general purpose than anything we have today in artificial neural nets. But it seems like transformers get a lot closer to that. And I'm curious if you're seeing that in the code, and how much code sharing is possible now between different problems, even different domains? Is there still code being shared in great detail? 

 

Clement Delangue 

Yeah. That's one of the main properties of transfer learning, which is at the basis of transformer models. And that's, to me, the most exciting development in machine learning, because, as you said, it allows you to generalize to more tasks, at the beginning within text, and now across most of the other modalities. So we're starting to see more and more multimodal models, right? Text plus image, for example, like CLIP and DALL·E. There's also audio plus text. And we believe the differentiation between all the different modalities is going to get blurrier and blurrier. There's a good thread from Andrej Karpathy explaining how the differences between the modalities are getting slimmer. I believe that in three years we won't even talk about different modalities. We won't talk about computer vision, NLP and speech. We will just talk about transformers and transfer learning, and maybe machine learning in general. It won't even make sense to differentiate the modalities; they're just going to be different inputs to the models, in a way. And that allows different things to happen. First, it allows more members of the science community to collaborate on similar topics, so it makes progress faster. And then it also allows companies to use the same kind of abstraction, for example to use the Hugging Face Hub, for different features and different workflows, without having to reinvent the wheel and create completely different systems. It makes it much, much easier for companies to build a lot of different features using the same abstractions. So we've seen a lot of companies, for example, starting with a very simple feature. Maybe they start with information extraction, to detect some things in some text that they have, and they do that well. And because it's the same kind of model, they do text classification next.
And then after that they add autocomplete. And then some kind of image classification feature. So it gives some consistency to the way companies can build machine learning features, which contributes to this democratization of machine learning for companies. 

 

Pieter Abbeel

So that's the user perspective: somebody comes in and leverages everything available on Hugging Face, or pieces of it, for their application. Now, one thing that I think also plays a big role is the other way around. If you are an AI researcher or developer and you build a good model, you can put it on Hugging Face and have a tremendous impact with your contribution. How did you start building that side of the community, and what are some of the exciting things you see happening there right now? 

 

Clement Delangue 

It's been very organic. At Hugging Face, we are very community driven. So the way we build the product is just asking people what they want, in this case asking researchers what they want, and building that. Initially, what researchers wanted was just an easy way to host their weights, and so we built that. And then they were like, okay, it's really nice to have our weights hosted, but now I have team members, or the public, who want to test these models and who are not scientists, and so it's hard for them to run them. And so we started building the ability to demo your models, to test your models on these pages, for example with something called Gradio, which is a machine learning demo tool in Python. And then some scientists were like, okay, that's nice, but how do I communicate about the biases or the limitations of my models? And that's when we started implementing model cards, which were invented by Dr. Margaret "Meg" Mitchell, who actually joined us a few months later from Google, for researchers to be able to communicate properly about how other companies can or cannot use their models. So it's been a very iterative process like that with scientists. And obviously, if there are scientists listening to this who are struggling to do something with their models, just tweet at me, tweet at Hugging Face, to tell us what you're struggling with. We'd be happy to add that as functionality in the platform, because that's how it's always worked. Very community driven. 

 

Pieter Abbeel

Talk about community driven. I’ve got to ask, as a reinforcement learning researcher, I saw you recently integrated the decision transformer into the Hugging Face library. Any more general plans in the reinforcement learning direction? 

 

Clement Delangue 

Yeah, we see reinforcement learning as a very important domain that we really want to invest more and more in. We were super excited about Decision Transformers, obviously, because it's adding transformers into the mix. And we think reinforcement learning is going to be part of more and more machine learning workflows, even when it's not the only type of model used, but a complement to other models. So it's super, super exciting to us. We have someone on the team called Tom Simonini, who's been doing an amazing job on this. He's actually putting together an introductory reinforcement learning course, coming soon, to explain a little bit of our initiatives there. So it's super, super exciting. We're super bullish on reinforcement learning, and we plan to do more and more in the coming months. 

 

Pieter Abbeel

I look forward to checking out the course and pointing more people to it. It's really exciting to see more resources on that front being put together. And of course, the other thing I noticed is that more vision models are emerging. A year ago, at least, I was thinking of Hugging Face as the place for open source NLP; everything's there. And then recently vision came into the mix. So I'm also curious: what's your, well, vision on that?

 

Clement Delangue 

Yeah. Well, we've seen really, really great adoption of speech and vision. I think we're now seeing 300,000 monthly downloads of speech models and over 200,000 monthly downloads of vision models on the platform. So it's been super exciting, especially as vision transformers have started to beat some state of the art and be very successful on these topics. So the idea is to continue investing really heavily in these topics. We actually just closed a new round of funding, a Series C of 100 million. And most of it is going to go towards more investment in computer vision, speech, reinforcement learning, and also biology and chemistry; we're starting to see some usage in these newer domains. And what I think is even more interesting is the intersection of all of them. How can you use speech with NLP with computer vision, adding reinforcement learning into the mix to do some alignment? So really trying to blend all these machine learning domains and see how we can do the traditional tasks better, with better accuracy and more easily, for researchers and for companies, but also how you can create new use cases and solve new problems that haven't been solved before. 

 

Pieter Abbeel

Congratulations on the recent C round. 100 million dollars, that's a lot of funding available to start building a lot of new things. I imagine you're also hiring then, at this time. Is that right? 

 

Clement Delangue 

Yeah. We are hiring for any position that you can think of. We have a bit of an unusual way of hiring, in the sense that we don't really hire for very specific job positions and job descriptions. Instead, we like finding really smart people who share our values, who are aligned with our culture, and we bring them in. And then we assume that if they're excited about what we're doing and if they're really amazing, they're going to find their way to having an impact, almost no matter their position or their job description. So it's been really fun. I mean, we went from 30 team members a little more than a year ago to 130 team members now, and we are going to end the year with around 200 team members. It's probably been my greatest joy of the past 12 months, scaling this team based on our culture and our values, making sure we really stick to them, and building an organization that is a little bit different from others, unique. Very, very decentralized, very open, very collaborative, very values-inspired and values-driven. It's a very, very important topic for us. So it's been really fun for me personally as a CEO to be able to scale the team. It has been really, really amazing. And I'm really grateful to be able to do that even more in the future, thanks to this new round of funding. 

 

Pieter Abbeel

Yeah, congratulations again. And it's definitely quite a different approach to recruiting and hiring that you're describing, where it's not driven by specific positions, but by values and, I guess, sheer ability and excitement for people to join the journey. I also noticed that you're actually located across multiple cities right now. How are you living that life? Where are you spending your time? 

 

Clement Delangue 

So when we started the company, we were already in three countries from the get go, because I was, at the time, in New York, Julien Chaumond was in Paris and Thomas Wolf was in the Netherlands. So it's been fun to have that as part of our culture from the get go. Now we have a couple of bigger offices, in Paris and in New York. Then we have a bunch of smaller offices: we have one in Telluride, for example, one in Switzerland, one in London, one in the South of France. And I have a small one here in Miami, Florida, where I'm spending most of my time, though I also spend a lot of time traveling to the other Hugging Face locations. And most of the people working at Hugging Face, over 60% of the team, are remote, all over the world, working remotely and then traveling to the offices to spend time with the team in person. 

 

Pieter Abbeel

It sounds like you started this remote work before COVID. Is that right? 

 

Clement Delangue 

Yeah. We started remote in 2016 and always had this culture of decentralization, which works really well for remote. We have a strong asynchronous culture and a strong culture of transparency. Most things at Hugging Face happen totally in the open, like on Twitter, for example, or in massive Slack rooms. All of these things we started pretty early in the company's journey, and they ended up being pretty useful and pretty helpful when COVID hit and we had even more constraints on going to an office. 

 

Pieter Abbeel

Now I'm actually curious, you mentioned the series C earlier. Can you share something about the investors that led and are part of the series C? 

 

Clement Delangue 

Yes. We have a mix of our existing investors, who we've been working really well with. Some of them are Lux Capital and Addition. We also have other investors; for example, we have AIX Ventures, with Richard Socher, our mutual friend, as an investor. And we added to that Sequoia and Coatue, well-known US investors, who showed a lot of conviction in joining us and who are going to bring expertise, especially around open source and community. Sequoia, for example, was an investor in GitHub, which is obviously a very big inspiration for us, as a similar platform for traditional software to what we're trying to do for machine learning. So we're super excited about doubling down on what we've been doing with open source and the community, making sure we have enough resources to really focus on the long term, and to keep investing in the community and open source.

 

Pieter Abbeel

Well, congratulations. Often with fundraising, beyond the money, a big part is also bringing in investors who really want to support the mission and ideally are highly reputed themselves. And obviously, you managed to check all the boxes in your fundraise. It's amazing.

 

Clement Delangue 

Thank you. 

 

Pieter Abbeel

So I really like the thing you bring up there, because very often when I talk with people who, let's say, work at OpenAI or Google, two of the companies that train some of the biggest models, I hear things like: oh, it turns out there are a lot of challenges in scaling up the training of these models; there are a lot of engineering, but also research-related, challenges that you wouldn't really expect. But then that understanding of how to deal with those challenges is treated as private IP, because, of course, it's natural: they want to make a profit and have an advantage. But your spirit is so different. You're saying, hey, our impact can be much bigger if we can somehow run a project that exposes everybody to those challenges and lessons learned. And so I imagine this paper or report is going to be very widely read, because until now it's only in a few places that people really see the specific challenges encountered when running things at such a large scale, training for weeks and weeks, possibly months, to get a model out. What does it take to do that successfully? And I'm also curious because, since this is going to be the first run you're taking on, it could be that you need to try this a few times before it actually succeeds, no?

 

Clement Delangue 

Yeah. We already have the BigScience team, and we already did a few runs, so we already have smaller models. It's really very much the last run of training that is happening now. What's also interesting about the project is that it's happening thanks to NVIDIA and the Jean Zay supercomputer in France, which I think was one of the top ten biggest supercomputers in the world. And as you might know, most of the energy production in France is based on nuclear energy, so it's much more CO2-efficient than in some other countries. That's something we care a lot about at Hugging Face, too; we have someone on the team, Sasha Luccioni, who's working a lot on the environmental impact of machine learning. But it's really interesting to follow the training. We even have someone on the team called Stas that you can follow on Twitter, and if there's a problem, he's going to tweet about it and share on GitHub and Hugging Face what happened and how it got solved. So the nitty gritty of the training is super interesting too, and it hasn't really been seen in the past unless you were at one of these big organizations that have trained a large model like that. It's really fascinating to follow. There's even a BigScience training Twitter account, which now has, I think, over 3,000 followers, that is tweeting about the progress every day. So it's a very mesmerizing thing to follow.

 

Pieter Abbeel

I'm going to start following that once I'm off this call. 

 

Clement Delangue 

Yeah. And just to answer the first part of your comment: open sourcing and sharing publicly is really the result of a different mindset. If you start from thinking that the main competitive advantage of companies is not so much the technology that they have at a given point, but more their ability to build technology faster than others, especially in such a fast-moving domain as machine learning, where you can be outdated in basically two months, then open source and open science are actually a means to that end. By sharing publicly, involving the community, getting contributions from users, and attracting the best scientists, you actually improve your ability to build technology faster than others. So even if you don't have the will to contribute to the ecosystem and the community, I think as a company it is just the smart decision, because it increases your ability to stay ahead and stay at the cutting edge of what's happening. In machine learning, we've seen a lot of companies, for example enterprise AI companies, basically the companies doing the most revenue in machine learning today, who lost their technological edge and were no longer able to hire the best people in the domain. And it has been a big problem for them, because really fast you start working on older-school stuff, you get outdated, and your things just work much less well than more cutting-edge stuff. So I think open sourcing, open science, and contributing to the community is really the smart way to stay ahead in machine learning, stay close to what's happening, and actually build up your capability as a company to build technology faster than others.

 

Pieter Abbeel

I really like that view, and I think it's something that enables you to move faster, because you almost have no choice anymore.

 

Clement Delangue 

Yeah it forces you, right? Because once you release it, you see the next thing, too. That's very true. 

 

Pieter Abbeel

Now, one thing I've been curious about for a while, well, I'm personally a big basketball fan, and I noticed that you have Kevin Durant, one of the NBA basketball stars on your investor team. So I'm really curious about the story behind that. 

 

Clement Delangue 

Yeah, it's a pretty funny story. I'm pretty convinced that, being French and not growing up watching the NBA, when I met him I barely knew who he was. So I really talked to him like he was the average, average Joe. And I'm pretty convinced that's a big part of why he decided to invest, because I wasn't this big fanboy being very impressed, sweating and stuff. I was like, hey man, so that's what we're doing. I didn't really care too much, and I think in some way he appreciated that. And without putting words in his mouth, I think he was excited about the prospect of working on something very serious and very technical, like machine learning, but with probably a less serious, less buttoned-up B2B approach than most other companies. At the end of the day, our name is Hugging Face and our logo is an emoji. It's similar to the NBA's approach of being very serious about something entertaining; there are some parallels there. Another funny story is that when he invested, he was playing for the Warriors. So, kind of out of professional duty, I started to root for the Warriors and became this huge Warriors fan. I even had my Warriors hat; you can ask people, at the time I was always wearing my Warriors hat, while living in New York. And then two years later he decided to move to the Nets, literally one block away from where I lived, because I was living in Fort Greene, just next to the Barclays Center. And then I was in a conundrum: okay, what should I do now? Should I switch teams? People are going to tell me, oh, you're switching to the better team, you're not a good fan. Or should I keep rooting for the Warriors while he was playing literally a block away from me, for not my home team, but the team of where I was living? So at that point I decided the best answer to this challenge was just to stop following the NBA.
And so I kind of stopped, or at least have been following it a bit less than before. But it was pretty funny to see that happening.

 

Pieter Abbeel

Yeah, what a good story. And how did you get connected with him? Who made the introduction, or did he just reach out?

 

Clement Delangue 

We got introduced by one of my favorite angels, Brian Falcone. A lot of people call him coach because he's been fantastic for Hugging Face, especially at the early stage. He's an angel investor of ours, and he introduced me to KD and to Rich Kleiman, who was his agent and runs this fund called Thirty Five Ventures, which is a fantastic fund, by the way. They've done a lot of equity investments. And so that's how we connected.

 

Pieter Abbeel

What a good story. So Clem, I think it's obvious to many people, and it's been obvious to you for a long time, that AI is having a really big impact everywhere in the world, and more and more every year. And as I understand it, you've also spent quite a bit of time thinking about AI ethics. Can you say a bit more about that?

 

Clement Delangue 

Yeah, we basically believe that AI and machine learning are already having a positive impact today. Think about how they improve search, giving access to more knowledge to more people; how they're improving translation, removing some of the language barriers; how they're improving moderation, for example, for social platforms that really badly need it. There are going to be a lot of positive use cases that we haven't figured out yet, that the community is going to invent and that builders using machine learning are going to build. So that's why it's important for us, and it's our mission, to democratize good machine learning. At the same time, we think there are very practical ethical challenges posed by machine learning: things like the biases that are present in these models; things like the presence of PII, private information, in data sets or in these models; things like the energy consumption of these models. So it's really important to be very intentional about that and to invest time and resources in these topics. That's one of the reasons we brought onto the team Dr. Margaret Mitchell, who in the past was the co-founder and co-lead of the ML ethics team at Google and is one of the most recognized researchers on these topics, to work on things like the data measurement tool, a tool to analyze your data sets and, for example, find bias in them. That's also why we've been working on model cards, which are a standard way to communicate the limitations and biases of your models, to make sure that people who use these models will do so properly, limiting dual use, for example. We are just scratching the surface, but we're really excited about this topic. We think it's very important, and we think every company working in this field should think about how they can build what Dr. Margaret Mitchell calls value-inspired processes when they build machine learning. That's the way that we can all, as a field, in a very collaborative and open way, make sure that we steer machine learning towards a positive impact for humanity and for the world.

 

Pieter Abbeel

Well, that's wonderful. And what a wonderful conversation this was, Clem. Thank you so much for making the time. 

 

Clement Delangue 

Thanks so much for having me.