
Amit Prakash on The Robot Brains Season 2 Episode 16

 

Pieter Abbeel 

Amit Prakash is CTO and co-founder of ThoughtSpot, a company providing artificial intelligence and search-driven analytics. The Sunnyvale, California-based company reached unicorn status in 2019. Its latest valuation is two billion dollars. Its clients include Nike, Walmart and Apple. Before founding ThoughtSpot, Amit was a software engineer at Microsoft and a tech lead at Google. On the show today, Amit is here to tell us about his mission of building a more fact-driven world. Amit, so great to have you here with us. Welcome to the show.

 

Amit Prakash

Thank you, Pieter. Really excited to be here. 

 

Pieter Abbeel 

Well, thanks so much for joining us. Now, of course, the main thing I'd like to chat with you about is ThoughtSpot. But before we get to that, right now, you're in Silicon Valley, running your own startup. But as I understand, you actually grew up in India. How was the journey starting there and now having founded your own company? 

 

Amit Prakash

Yeah, it's been an interesting journey, and a little bit unique, I should say. In some ways, the journey started with my father. He was a professor in an engineering college, and he came here to do his PhD at Purdue, and he got really, really sick. And then he had to go back. And as I was growing up, there was this dream that someday I'm going to go to one of the top universities in the US and do a PhD. That was pretty much with me since I was, you know, in kindergarten or something like that. So that was just a natural part of my life, and I was planning to follow in the footsteps of my father and be a professor in an engineering college and do some research. But by the end of my PhD, I had taken a very, very practical problem and, in the end, produced a solution that was very theoretical. I was proving log-star bounds and things like that, and I realized that what I had produced was a very elegant theory, but not very practical. And that got me to want to work in industry for a little bit before jumping back into academia. And once I went to industry and started working on interesting problems, that attracted me so much that I decided to just stay. So that's how I ended up working for Microsoft for a little bit. And then when I joined Google, it was on the CMU campus, and I was still kind of thinking I would probably work with Andrew Moore, who was the director and also a professor at CMU, work with some of his students and do some academic work. But it was just too much fun in industry working on hard problems, and at some point it was like, okay, I like this, but what's the next thing that I want to do? And the natural answer was to go start a company and invent something new. And that's how my startup journey began.

 

Pieter Abbeel 

Now I believe this is around 2012. Is that right? 

 

Amit Prakash 

Yeah, yeah. 

Pieter Abbeel 

So it's interesting because now AI is such a big deal. But 2012, assuming it was not the very end of 2012, was pretty much pre-ImageNet moment, pre-AlexNet. So deep learning hadn't really broken through yet, and I'm sure you're using a lot of it now. So at that time, what was it that you had in mind that you were going to try to build?

 

Amit Prakash 

Yes. So at Google, I was responsible for the team that was building the click-through rate models for AdSense. We were training really, really large machine learning models, and it was more of a systems problem than really a machine learning problem. We would tweak the algorithm and the parameters a little bit, but primarily it was: how do you pump billions, actually trillions, of training data points within a week or so through this weird gradient descent code? That's what I was doing over there. That was the very beginning of the Google Brain team forming, and I remember talking to Andrew Ng and Jeff Dean at the time when the team was getting formed. They wanted to work with us on some of the AdSense problems, too. But when I met my co-founder, Ajeet, he had a very interesting and different approach. He had already co-founded Nutanix and had spent some time at Aster. When we were starting the company, he said, I think the best thing to do is to force ourselves to narrow down a market, because that's the most important thing. So let's go pick a really large market with important problems, then let's go pick a problem in that space that we think we can solve better than anybody else, and then let's go work on that problem. And analytics seemed like a really great market. A lot of enterprises care about it a lot. It's been one of the fastest growing segments, and we looked at some of the surveys of CIOs and realized that for the last three years, consistently, CIOs had been saying that this is the most important area for them to get right. And most of the industry was using solutions that were built 10, 20 years ago, and it seemed like a market ripe for disruption. So that's how we zeroed in on it.

 

Pieter Abbeel 

Now, analytics, this is a pretty broad term, and I'm sure with ThoughtSpot you cover it in quite a broad way. But can you give an example of maybe something that, you know, in the early days would fall under analytics and where you thought you could really make a difference? 

 

Amit Prakash 

Yes. So in some ways it's really simple stuff, but getting it right is probably the most important thing. What happens in industry mostly is that if you draw a Venn diagram, the people who really know the business, who are making business decisions, who see where the competition is coming from and where the market inefficiencies are, and who are responsible for sales, marketing or finance, and the people who understand data and the systems to interrogate data, are two completely different sets, and the intersection in the middle is usually very narrow and thin. As a result, even though there's been massive investment in data infrastructure and in collecting data and making it queryable, people are not able to get the benefit from it. For example, one of our customers is Canadian Tire, one of the biggest retailers in Canada, and during the COVID time, demand was changing rapidly for them. One week it's toilet paper. Next week it's exercise bikes. Next week it's men's shaving equipment, and things like that. And they were supposed to respond to these things quickly. It was not just that the demand was changing; the medium of selling was also changing, because most of it was brick and mortar, and now all of it was online, and they needed to be able to respond to demand very, very quickly. And they generate very, very large volumes of data. In traditional systems, what happens is that to deal with that, you usually aggregate the data to a scale where you can interact with it, and then you put it in tools like Tableau and Qlik, where an expert can go and interrogate the data. But that creates distance between the people who need to make the decision and ask the question. Typically it might take a week, in some cases a month, before the right person can get the answers.
So the primary mission for us has been to empower the people who understand the business, who understand the significance of the data, to be able to ask questions, interpret the data and get answers.

 

Pieter Abbeel 

Now, when you started building this in 2012, I mean, you talk about your co-founder. Was it just two of you? How did the company start? 

 

Amit Prakash 

Yes, it was me and my co-founder, Ajeet, who started hashing out the plans and putting it together. But we were extremely lucky to have assembled, very early on, some of the most talented people you could find in the Valley. We quickly added another set of five people, who were really amazing engineers with a lot of experience doing different things. We had someone who was one of the architects at BAE Systems. We had someone who was one of the leads for Google's back-end query engine, what's internally called Mustang. We had one of the leads who was building the orchestration systems for Google's data centers, and we had someone who had spent his lifetime researching databases at Microsoft. So we had a pretty amazing team to go build an ambitious solution that otherwise would have been very hard to build.

 

Pieter Abbeel 

Now you said the key is to find a real problem that people want solved. And I've got to imagine that means working directly with potential customers from the very beginning. Do you have any interesting stories there, where certain customers really drove the direction you took things in?

 

Amit Prakash 

So early on, what we did was spend about six months writing very little code and just imagining what the solution was going to be. It wasn't obvious how we were going to solve this problem of non-experts being able to get to data as fast as they could, and we iterated through many different ideas. In the beginning, one of the things someone said was, hey, what if we built a Siri-like interface? You just ask questions and then you get the answers. We thought about it for a little bit and said, there's no way that we can do this and make it into a successful product, because given the state of the technology, even the best research teams with hundreds of engineers could only get to maybe about 80 percent accuracy. And if people are asking business questions and making important decisions, there's not even a 0.1 percent chance you can take of giving the wrong answer. It has to be very, very deterministic. So we scrapped that whole thing and went in different directions with different UX ideas to make it simple. And then one of the people who joined us said, you know what? I don't need a Siri, but I do need something that takes away a lot of the layers of information that I need to know to be able to ask questions. Even as a developer, when I'm writing code, the simple autocomplete in Eclipse is extremely helpful for me. So if we moved the state of the art significantly towards non-technical users being able to ask questions, that would be an amazing product. That became the inspiration for what we did next, which was essentially to build a Google-like interface for asking questions. But behind the scenes, it was a very deterministic system. You could think of it almost as a DSL factory: something that can take the entities that the customer would care about,
that everybody in that company would care about, assemble from them a kind of DSL, and then build an interface where people could ask questions within that DSL. But the UX is designed such that it doesn't feel like you're programming. It feels more like you are just asking a question in natural language. So that became the foundation of our very first product. What we did was go to many potential customers and show it to them in just design mocks, and it resonated very well. And then it took a long time to build this, like three to four months, and then we did our first user study. It was really amazing to see the kind of response from the business users who had never been able to interrogate their data. The response we got was, our jaws are on the floor. That was the light bulb moment: okay, we are going in the right direction. We have a lot more to build, but let's continue building and make something out of it.

 

Pieter Abbeel 

I mean, with my Covariant hat on, we work on artificial intelligence for automation in warehouses, specifically robotics automation. But I think what you're talking about helps decision making, not automation in the physical sense, but for the whole logistics chain behind retail. So can you say a bit more about that? Anything you've seen there?

 

Amit Prakash 

Yeah. So I was talking to one of the largest car manufacturers in the world. As you can imagine, they have a fairly sophisticated operation with parts coming from all over. It just gets hard for them to figure out the optimal way of procuring something unless they can quickly ask a question, then a follow-up question, and then another follow-up question. They were comparing their life before ThoughtSpot and after ThoughtSpot, and they were saying that, A, they're able to make these decisions much faster and free up a lot of time and energy, but B, they're essentially saving millions of dollars by finding the most efficient way to procure something at a given point in time, given the history of all the transactions they've done.

 

Pieter Abbeel 

Now, we've talked about use cases and how people use ThoughtSpot. But I imagine there's a whole other part to ThoughtSpot, which is not just the users, but everything you had to build behind the scenes, right? What are the things powering it? 

 

Amit Prakash 

Yeah. So there are three pillars on which ThoughtSpot stands, and each one is as important as the others, and it all comes together. The three pillars are: building large-scale systems that can handle a lot of data; the machine learning part that makes it intelligent and smart; and the UX part that makes it intuitive for users who are not necessarily the most technical to get the information they need. On the systems part, essentially what we do is build a really fast, really efficient indexing system that can index pretty much everything the user might care about. This could mean anything from names of products to names of customers to street addresses and product categories. It builds a map of these entities to which column they live in, and how you join that thing to this thing, and so on. A lot of computer science innovation went into this systems space, for two reasons. One, we were shooting for a Google-like experience, and that means every keystroke returns in under 100, 200 milliseconds, so it just feels natural. But at the same time, it's an enterprise world, so everybody's permissions are different from everybody else's. One person might be the store manager for all of the stores in California, and another person might be in supply chain management and can see all the data around the supply chain for the United States, but nothing else. So as someone hits the keystrokes, you need to go to the back end and look only at what they can access. Indexing these somewhat overlapping sets becomes a really hard problem, and you're trying to run essentially a compiler on top of it.
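A minimal sketch of the permission-aware autocomplete index described here might look like the following. All names and the role model are illustrative assumptions, not ThoughtSpot's actual implementation: each indexed entity carries the set of roles allowed to see it, and every keystroke filters suggestions by the querying user's roles.

```python
from collections import defaultdict

class SecurePrefixIndex:
    """Toy permission-aware autocomplete index: entities are stored with the
    set of roles allowed to see them, and suggestions are filtered by the
    querying user's roles on every keystroke."""

    def __init__(self):
        self._by_prefix = defaultdict(list)  # prefix -> [(entity, roles)]

    def add(self, entity, allowed_roles):
        roles = set(allowed_roles)
        # index every prefix of the entity so each keystroke can hit the map
        for i in range(1, len(entity) + 1):
            self._by_prefix[entity[:i].lower()].append((entity, roles))

    def suggest(self, typed, user_roles):
        user_roles = set(user_roles)
        # only return entities whose permission set overlaps the user's roles
        return [entity for entity, roles in self._by_prefix.get(typed.lower(), [])
                if roles & user_roles]
```

A real system would compress the prefix map and intersect permission sets far more cleverly, which is exactly the "overlapping sets indexing" problem Amit calls hard.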
The next piece in the systems is this DSL factory that builds a language, and a compiler and auto-completion system for it, on the fly. The interesting thing over there is that, unlike most programming languages, there's no notion of delimiters and things like that. So parsing is more like natural language: you can have a product name that contains a comma or a bracket or spaces at the end, and people are just typing those things. The third piece is more around ranking, and this is where some of the machine learning comes in as well. What we're trying to do is guide a user to ask meaningful, relevant questions in a language that they have no idea is actually a language; they're just thinking in terms of their business entities. So being able to surface, within a few keystrokes, exactly the right entities and tokens and questions is the most important part of this UX. And then the UX itself essentially does this progressive disclosure, where the user gives you a little bit of information and you take that information and present choices that are very limited, so that the user is not bombarded with choices. They progressively build their question without knowing that they are doing it progressively. So that's the search piece of it. The other part that I haven't talked about yet is the automated insight piece. After we had mastered the search piece, the question was: there are a lot of questions the user could later wish they had asked, but they didn't think of asking them. How can we help the user there? What we do is build models that say, if the user has asked this question, what are the next thousand questions that the user might be interested in?
Then let's go ask those questions on behalf of the user, figure out which five or ten are going to be the most interesting for this particular user's business, and surface those. So there is another piece of ML machinery that goes into first generating these thousand hypotheses and then evaluating them and seeing which ones are interesting and should be surfaced. That's the core stack, I should say. There are a lot of other things that go into just running a large-scale data system efficiently, keeping it up all the time, upgrading, and so on. But as far as the differentiating part of the stack, these are the things that go into the system.
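The generate-then-rank loop for automated insights can be sketched as below. This is a hypothetical toy, not SpotIQ: the candidate generator, the `run_query` callback and the interestingness score are all assumptions made for illustration.

```python
def generate_followups(measure, dimensions, filters):
    """From one asked question, enumerate candidate follow-up questions
    by breaking the same measure down along other dimensions."""
    return [{"measure": measure, "group_by": dim, "filters": dict(filters)}
            for dim in dimensions]

def interestingness(answer):
    """Score an answered candidate by how far its largest group deviates
    from the mean -- a crude stand-in for 'is this worth surfacing'."""
    values = list(answer.values())
    mean = sum(values) / len(values)
    return max(abs(v - mean) for v in values) / (mean or 1.0)

def top_insights(candidates, run_query, k=5):
    """Run every candidate question and keep the k most interesting.
    run_query is assumed to return {group: value} for a candidate."""
    scored = sorted(((interestingness(run_query(c)), i, c)
                     for i, c in enumerate(candidates)), reverse=True)
    return [c for _, _, c in scored[:k]]
```

In practice the thousand candidates would come from learned models of query co-occurrence rather than a simple dimension sweep, but the shape of the pipeline, generate, evaluate, rank, surface a handful, is the same.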

 

Pieter Abbeel 

I really like this notion where you have to rank the results based on interest, right? But then you're saying you can create another thousand queries behind the scenes that are natural next queries and then even rank the results of those. And I'm curious, how do you know which ones should be ranked higher? Where's that coming from? 

 

Amit Prakash 

Yes. There are multiple sources of information that we combine to do this ranking. The first one is really where having the search engine as the product helps us. Just like at Google: if lots of people ask questions about dogs and the very next question is about puppies, then you know that dogs and puppies are related, and you want to guide users from dogs to puppies, or to expand their queries naturally from dogs to puppies. We are doing the same thing. If lots of people are asking questions about revenue and their next question is about cost, or revenue broken down by product verticals and things like that, then we take that inference and apply it to this hypothesis generation engine. That's really significant, because most analytics tools are not built on top of a search-based engine, so they don't have that source of data to benefit from. The next one is purely statistical. If you're seeing a large anomaly in the data, then obviously that's going to be interesting to the user. So we are looking for either large anomalies, or some interesting correlation, or some interesting trend that again stands out as an anomaly: your entire business is growing at five percent, but your fidget spinners seem to be growing at 200 percent, or something like that. So we're looking at the data statistically and seeing where something stands out. And then the third piece is really the closed-loop feedback system, where when you show an insight to the user and they give it a thumbs up or thumbs down, we use that to infer what it was that was not interesting. Think of it almost like Facebook posts, right? You can say, I don't like this, because in general I don't like posts from this particular user.
Or, I don't like posts on this particular topic; or, I don't like this specific post, but everything else is fine. We have that kind of feedback system flowing into this ranking, where somebody could say, I don't care about cost, I just care about revenue, so don't show me anomalies about cost. Or somebody could say, I do care about revenue, but you're showing me that revenue jumped on the Fourth of July by 20 percent, and I kind of expected that. Or somebody could say, you're showing me a large anomaly where the x axis is null; that's to be expected, don't worry about it. Taking all that information together, we do the ranking.
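The statistical signal combined with feedback suppression can be sketched in a few lines. This is an illustrative assumption of how the pieces fit, using a plain z-score as the anomaly measure and a thumbed-down set as the feedback rule:

```python
def zscore(history, value):
    """How many standard deviations the latest value sits from its history."""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    return (value - mean) / (var ** 0.5 if var else 1.0)

def rank_anomalies(latest, history, suppressed):
    """Rank metrics by |z-score|, skipping anything the user thumbed down
    (e.g. 'don't show me anomalies about cost').

    latest: {metric: newest value}; history: {metric: [past values]}."""
    scored = [(abs(zscore(history[m], v)), m)
              for m, v in latest.items() if m not in suppressed]
    return [m for _, m in sorted(scored, reverse=True)]
```

The real system's feedback is richer (per-topic, per-dimension, "nulls are expected" rules), but the principle is the same: statistical surprise proposes, user feedback vetoes.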

 

Pieter Abbeel 

Now, a natural thing I imagine is that, even though you say it feels like people type in natural language and it's close to it, maybe at some point people can use any natural language to tap into the interface, right? Especially with the recent advances in NLP, GPT, BERT models and so forth. So what's your thinking there?

 

Amit Prakash 

It's one of the problems that I'm really passionate about, and I've spent quite a bit of time finding a solution for this that can be turned into a product. We also have an alpha version of this out there where people are trying it and using it. I've been really excited watching all these advancements coming with GPT and BERT and things like that. But at the same time, my read on this problem is that it's going to take a lot more than something like GPT or BERT to solve this specific problem. For lack of a better word, and I know some people frown on it, I think the solution to this problem really relies on how we can represent knowledge efficiently and how we can capture knowledge efficiently. GPT is capturing a lot of real-world knowledge, but I'll give you an example. I was talking to an airline company trying out our alpha product, asking questions in natural language, and they wanted to ask questions like, what's the A0 for DFW, or what's the D0 for DFW? Now, A0, in their context, means average arrival delay of a flight segment, and D0 means average departure delay of a flight segment. When they ask what's the A0 for DFW, what they mean is the average arrival delay for any flight segment where the arrival airport was Dallas-Fort Worth. And when they say what's the D0 for DFW, what they mean is the average flight segment delay where the departure airport was DFW. Only people in that company know this correlation: when I say A0, this filter changes to the arrival airport; when I say D0, this filter changes to the departure airport, right? And they're busy people. They're not going to generate a million training examples for you to be able to learn these kinds of correlations. So now you have to capture this knowledge from them and then apply it.
They might give you one or two training examples, but then you need to find ways to say that, okay, in this context, this is the determining factor that's affecting this thing. And things get arbitrarily complex. I was talking to one of the travel agency companies, and one of their prime questions was: how many of my customers are in New York right now? New York happens to have 17 different meanings in their data set: the arrival airport, the departure airport, the hotel, their office headquarters or the travel agent's headquarters, city, state, all the combinations of that, right? And in this specific case, what they really want is that the departure date should be before today and the return date should be after today. GPT is not going to be able to tell you that, right? So you need some way of efficiently capturing this knowledge, representing it and then applying it. That's one aspect where I've spent many, many cycles just trying to understand what the right way to capture it is, what the right data structures are, what the right way of inferencing is, and things like that. But we are nowhere close. The other thing that's actually very interesting over here is that even though natural language is fantastic for short questions, when people go deep into analytics, the questions typically grow very, very fast, and very quickly the questions have no equivalent in natural language. Give me the revenue for every state for this year, but don't include the revenue coming from all the products that were returned, and discount the revenue for each partner by the margin that we are providing them; that kind of thing is very, very common. If you look at Google queries, the average length is like three words. If you look at our queries, the average query length is like 20, 30 words.
And at that point, natural language is too noisy a medium to communicate the meaning of the question precisely. If you took someone's question and repeated it back to them a week later, they wouldn't know what exactly the interpretation for that question should be. So that's one of the reasons we feel strongly that the DSL engine we have is necessary for this product to work. But at the same time, if we can build a very reliable, 99.9 percent accurate translation layer from natural language to the DSL, that will definitely move the state of the art.
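The knowledge-capture layer Amit argues for can be illustrated with a toy glossary that rewrites company-specific terms into structured query fragments before any parsing happens. The glossary entries mirror the airline example above; the function, dictionary names and the naive "last token is the airport code" rule are all hypothetical:

```python
# Company-specific terms mapped to the DSL fragments they imply.
# The key insight: the term itself decides *which* column gets filtered.
GLOSSARY = {
    "a0": {"measure": "avg(arrival_delay)",   "airport_col": "arrival_airport"},
    "d0": {"measure": "avg(departure_delay)", "airport_col": "departure_airport"},
}

def translate(question):
    """Turn e.g. "what's the A0 for DFW" into a structured query, using the
    glossary to decide which airport column the term refers to."""
    tokens = question.lower().replace("?", "").split()
    for term, entry in GLOSSARY.items():
        if term in tokens:
            airport = tokens[-1].upper()  # naive assumption: last token is the airport code
            return {"select": entry["measure"],
                    "where": {entry["airport_col"]: airport}}
    raise ValueError("no company-specific term recognized")
```

A single glossary entry does the work that would otherwise need many training examples, which is the point: the mapping lives in employees' heads, and the system's job is to capture it once and reuse it.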

 

Pieter Abbeel 

Let's say I were to use ThoughtSpot and I'm typing things into that interface. What exactly would that look like? I guess I cannot type fully open-ended natural language, but it's going to feel like natural language. So how do you make that happen? What does it look like for me?

 

Amit Prakash 

Yeah. For most people it comes naturally: if they're looking for a Starbucks near them, they just say, Starbucks near me. They don't say, give me a list of Starbucks within five miles from here, right? The same thing happens in our searches as well. If you want to know the revenue for the last three weeks, you just type revenue last three weeks. You're not going to say, show me the revenue of the last three weeks. I mean, that's an easy problem to solve, but what we're trying to do is get users used to the idea of thinking of the important entities in their question and just typing those entities, and you'll get the answer. So you can say revenue North America last three weeks, and it will give you the answer.
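A minimal sketch of this keyword-style search: known business entities are matched against the typed text and assembled into a structured query, while filler words are simply ignored. The column names, the phrase tables and the date handling are illustrative assumptions:

```python
MEASURES = {"revenue", "cost"}
FILTER_PHRASES = {"north america": ("region", "North America"),
                  "california":    ("state", "California")}

def parse_search(text):
    """Match known entities in the typed text and assemble a structured
    query; unrecognized filler words like 'for' are ignored."""
    lowered = text.lower()
    query = {"measures": [], "filters": {}, "time_days": None}
    for measure in sorted(MEASURES):      # sorted for deterministic order
        if measure in lowered:
            query["measures"].append(measure)
    for phrase, (column, value) in FILTER_PHRASES.items():
        if phrase in lowered:
            query["filters"][column] = value
    if "last three weeks" in lowered:
        query["time_days"] = 21
    return query
```

The real system builds these entity tables from the customer's own data via the index described earlier, and the autocomplete guides the user toward tokens that will actually match.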

 

Pieter Abbeel 

You mentioned that people are busy; they're not going to want to supervise a translation engine between natural language and the lingo that they use within the company. But one of the nice things about the recent language models like GPT is that they don't require super explicit supervision. They just require data from the right domain. So I'm curious about your thoughts: what if you could just access internal communication channels, where that internal language would be in heavy use, and retrain the language models or fine-tune them on that?

 

Amit Prakash

Yeah, I think there's definitely something to it. This is one of those areas where, if I could get my hands on that data, I would love to try something like that. And maybe in some time we will. But right now, at least, I also struggle to see where, in that unsupervised way, you get that information efficiently. When I say, what's the longest movie ever, does that mean the duration of the movie, or the length of the name of the movie? If I have an unsupervised data set where people are just talking about movies, will I be able to capture the inference to correctly translate that question into the duration of the movie? I don't know.

 

Pieter Abbeel 

Switching topics for a moment. Snowflake and ThoughtSpot announced a partnership. Big announcement. Can you say a bit more about that? 

 

Amit Prakash

Yes, so it's been a very interesting partnership for us, and in the early days somewhat counterintuitive as well. But it happened to be exactly the right thing that the industry needed, and we are seeing the results of that as well. When we were getting started in 2012, we wanted to build this product where you ask a question and you get back a response, almost like Google. Part of that deal was that the response should come back with the same latency as Google, which in most cases is half a second or a second or something like that. And we were trying to solve the problem for the entire enterprise, which means that some enterprises have tens of billions of records that they want to interrogate routinely. You can't pre-aggregate that either, because that constrains what kinds of questions you can ask. So we ended up building one of the fastest distributed in-memory databases around for this problem. In the early days, when we sold ThoughtSpot, it came with its own database layer where you load all the data and we hold it in memory across possibly hundreds of nodes. When you ask a question, we translate it from the DSL all the way to very efficient C++, do compilation just in time, and then run it over these in-memory data structures and come back. That served us well. But with a lot of enterprises moving to the cloud, and solutions like Snowflake coming in, what we saw was that these databases were pretty fast and served the need we had been filling. And for the enterprise, it's much better to have one data warehouse where they put all the data, as opposed to moving that data into yet another database that's just meant for this particular analytical tool. So it made sense for us to work on top of Snowflake.
Why this partnership worked out really well is that we were the only analytics tool built on the assumption that the database is going to be super fast, super scalable, and can handle large queries over large volumes of data. We were architected that way. So when we started sending queries to Snowflake instead of our own internal database, it just fit right in and created a much better integration than anybody else could create, because most of the other tools in the industry were designed for databases that were much slower or dealt with much smaller volumes of data. In some ways, Snowflake removes the constraints around data scale, and we remove the constraints around users' ability to ask questions. So that marriage made in heaven is working out really, really well. We have a lot of happy customers who love this combination.

 

Pieter Abbeel  

Looking further ahead, what do you see as the future of artificial intelligence in the context of ThoughtSpot? 

 

Amit Prakash 

That's a really interesting question, and there are probably three different directions that we are going to pursue that's really, really interesting. And each one of these, in some sense, is kind of an incomplete problem and sort of keep making progress until, you know, general artificial intelligence is solid, it'll be somewhere in that spectrum, right. So we already talked about this problem of being able to ask questions in natural language and being able to interpret that. So that's one direction where we'll continue to invest and continue to find better and better solutions. What we are finding is that a lot of at least 80 percent of the questions that are being asked are just a variation of a question that was asked before. So if we just kind of take the history of all the questions that were asked by the experts before and figure out what's the right way to ask that question in natural language and then every question in its neighborhood can be answered, then that's a fantastic way to go. So that's kind of hard putting on the product hat, how can we build a good solution from the technologies that exist today? SpotIQ, which is our automated insight engine, I think it's a very interesting problem where if you look at all the advancement in AI, it has served us really well on all the perception related problems. But when you talk about complex systems, I don't know that it has translated that well yet, Right. I don't know if many people, you know, training very deep neural networks and kind of getting benefit from it by sort of trying to figure out what's the best decision for their business next. And part of it is because a lot of context is still, you know, in people's head. And the kinds of problems that people use, you know, predictive techniques and things like that for are still somewhat like low dimensional problems, like trying to forecast your revenue and things like that. 
And so we're trying to figure out, over there, how can we leverage some of the advancement that has happened in AI in this particular domain in a way that most business users can benefit from it? There's a whole slew of tools out there that serve the data science persona, where they're trying to train predictive models. What we're trying to do is figure out how to help, you know, a marketing manager or a finance controller or somebody like that spot patterns in their data and then benefit from them. And a lot of the problem over there, again, lies in how you capture the context that's in somebody's head to be able to give them something meaningful. So I'll give you an example. If I tell someone who's in retail that your revenue from California is an outlier, they're going to say, tell me something that I don't know. California is the most populous and one of the richest states, where I have the most stores. Obviously, I'm going to have a lot of revenue from there, right? But baked into that response is the business knowledge that revenue is supposed to be proportionate to the number of stores or the population being served. How can I capture that in an automated and efficient way, so that when I present insights, they're more useful and meaningful and don't just look like random anomalies in data? And then the third one is again about capturing business context. As creators, we think our tool is for anyone to benefit from, but there are a lot of steps involved initially in bringing a use case up. So let's say somebody wants to analyze their sales pipeline or something like that, right? They bring in the Salesforce data and they have to model it: tables and columns, then the relationships between tables, and then provide business meaning to different columns, like this particular column means revenue, and so on, so forth. And only then is it ready. 
And so what we're looking to do is figure out ways of making that push-button. People are doing similar tasks hundreds of times across different industries, so can we capture that knowledge and then automate the whole process, so that as soon as you connect our product to your data, it knows most of the business context and is ready for you to interrogate, as opposed to some expert providing that knowledge? 

 

Pieter Abbeel 

No, I mean, in the extreme, of course, that's not a natural thing to do now. But what you're describing, some of it sounds like a reinforcement learning problem, right? It's which business decision to make will more likely lead to higher reward, and it's literally a reward, in many, many senses, in the business environment. Of course, it's a bit tricky to just let an RL agent run the business. But it's really interesting, because some of these RL ideas these days do apply to offline data sets, or you can just do imitation learning. And that assumes that you record the decisions made, right? What you have right now is a query engine that gives a lot of information, but the question is, do people also register back into it the decisions they made based on the analytics data they got? Because once they do that, then maybe you can actually start learning that connection. And so how do you incentivize people to, you know, also enter their decisions, their conclusions? 

 

Amit Prakash

Yeah, that's a really interesting direction to think about. So we recently acquired a company called SeekWell. What they do, the industry term is reverse ETL, is essentially push data from your warehouse back into the apps. And typically the reason for doing that is to take some action. So you may decide to send out emails to a bunch of your customers about a particular topic, or you may decide to drop a promotion, or, in a procurement system, you may push certain items to be ordered with their quantities, and things like that, right? So right now, what we're doing is just providing convenience. You may have asked the question, what are the products on which inventories are running low, and what's the difference between the forecasted demand and the current inventory? And then once you build that data set, you push it to the procurement system. But you're right, we could start instrumenting that, and then it could become something interesting in the direction of imitation learning. But it's early days for things like that. 

 

Pieter Abbeel 

Talk about early days. I think for a lot of people who want to start their own company, a big open question, once you have the idea and maybe have the team to found a company is how do you get your first money? How is that for ThoughtSpot? 

 

Amit Prakash

Oh, so we've been very, very fortunate in our relationship with our investors, and in particular, Lightspeed. They've trusted us from the very beginning and have been amazing in supporting us. So even before we had a concrete idea that we wanted to pursue, because Lightspeed was also an investor in Nutanix, which was Ajeet’s previous company, we had a strong relationship with them and we had their backing. And so that part came somewhat easy to us in the beginning. And then after that, it's been really good for us in terms of just being able to hit the right milestones and show success. 

 

Pieter Abbeel 

Do you have any advice for people who haven't started a company before and don't have investors on speed dial like that?


 

Amit Prakash

I think two things. One, in general, things have gotten so much better for entrepreneurs than in, you know, 2012, in the sense that there's a lot of desire to invest in tech entrepreneurs. And there are so many problems that are now accessible, because you don't need a really large investment to build a solution, right? There's a lot of cloud infrastructure and things like that already available. So I think a lot of what possibly holds people back in the beginning is kind of their own fear of how to navigate these things. But once you jump in, actually, the infrastructure is so much better than before in terms of supporting entrepreneurs. So that's one thing. But the other thing I feel is that there are two ways to start something. One is to have an idea, and you just take out six months of your life investing in it with a couple of co-founders, building something and then going and showing it to someone. The other one is, you have an idea and you start talking about the vision and validating it with people, and once you have some validation, then you start investing your time and resources into it. And I actually see the second one working better than the first one, because as fun and as interesting as it is for engineers like us to solve problems, it's even more important to get the problem right. And so spending more time early on to get the problem right, even before you've written the first line of code, is one recommendation that I would have. And once you have a concrete problem statement that you believe in, that you have conviction in, and you have some validation, it's a lot easier to have those conversations with investors. 

 

Pieter Abbeel 

Well, that's definitely my experience, too. I like the way you articulated that. So we actually had a guest on, a while back, Keenan Wyrobek from Zipline. And, you know, by default, you would think of him as a robotics person, because he did a lot of work in robotics at Willow Garage before Zipline. But he said when he started Zipline, robotics was, you know, an afterthought. It was delivering medication to hard-to-reach regions that was the problem that needed to be solved, the logistics of health care in remote areas, and from there everything just followed. 

 

Amit Prakash

Yeah, yeah. So I think getting the market and the problem right is probably the most important thing. And if you can get those things, not necessarily right, but in the right zone, everything else from there, whether it's raising money, whether it's recruiting people, all of those things become relatively easier. It's still entrepreneurship, and you're still shooting for something really, really ambitious, so there's going to be a lot of struggle, but it's definitely one way to de-risk your venture. 

 

Pieter Abbeel 

So Amit, as I was doing a bunch of reading and watching videos of you yesterday in preparation for our conversation today, one thing I ran across that really intrigued me was a quote that said the best leaders are students. And apparently, you set an hour aside every day to learn something new. Is that right? 

 

Amit Prakash

Yeah, it's not something that I've always been able to do, but for the last couple of years, I've made it a point to do that, and it's been really rewarding. And I guess partly I have the pandemic to thank for that, because, well, the commute is out, so you've got to use that time for something. But yes, you know, when I was at Google, I felt like I was at the forefront of where machine learning was at the time. And then I got busy with ThoughtSpot, and so much happened in this field that I had missed. So a big part of the last two years, when I've spent a lot of time learning, was just catching up on the field. But then there are other things that I'm passionate about and like to learn, whether it's art or, you know, politics or whatever. So that extra hour that we saved from the commute has been really, really fantastic for me to learn, and it gives me a lot of joy, and it's useful as well. 

 

Pieter Abbeel 

It seems hard though, right? Because on any given day you probably have a lot of fires to put out, a lot of things that you, you know, want to take care of that are directly useful that day or the next day, maybe. And now you're setting an hour aside where you're not working on those things that could help you today. And so how do you make sure you don't stop doing it? 

 

Amit Prakash

No, so I think in what we do, customers and their urgent needs definitely come first, and then I think taking care of people on the team is, kind of, the second-highest priority, and then you have your strategy meetings and things like that. So in general, those things always take priority, but I make it a point to set aside some time, and if it gets preempted, it gets preempted, but it doesn't get preempted that often. So that works well. 

 

Pieter Abbeel 

Sounds like things are running very smoothly.

 

Amit Prakash

And the other thing I was going to say is that having, you know, someone like Sumeet, who is leading engineering for us, so that I'm not in the day-to-day management of engineering, helps a lot as well. So basically, hiring amazing leaders who run the organization is the other secret sauce here.

 

Pieter Abbeel 

Great hiring is the secret to having time again. Well, Amit, it was so wonderful having you on. Thanks for making the time. 

 

Amit Prakash

Thank you so much. I really enjoyed it. It was a fun conversation. 
