The embedded world is an open source world. So the world is used to Linux; it's used to computer vision platforms like OpenCL and OpenCV. And from an ML framework perspective, they want to retain that same open-source access.
The edge market is going to be the next AI gold rush. And if you really think about the edge market, it's 40,000, 50,000 customers globally. So it's diffused in its footprint.
But if you aggregate that, it's a much bigger market than the cloud.
By popular demand, NetSuite has extended its one-of-a-kind flexible financing program for a few more weeks. Head to netsuite.com/eyeonai. That's Eye On AI, E-Y-E-O-N-A-I, all run together.
Again, head to netsuite.com/eyeonai, for its one-of-a-kind flexible financing program.
Craig, pleasure again. And so I took a long road to really come to SiMa.ai, about 30-plus years in the industry. A background in software, silicon, and then I managed fairly large businesses.
In the last 10 years, I was at a company called Xilinx. I was there for 18 long years. I was their executive vice president, general manager when I stepped out.
And I was pretty passionate about watching what's happening in AI. And clearly, I think AI, even then, five years ago, took off in the cloud. And obviously in the consumer mobile experience as well.
And what was fascinating for me was that the physical world that we live in, the industrial world we live in, robotics, industrial automation, medical, automotive, is still using fairly archaic, old techniques and is only now embracing AI. And I wanted to start a company that would really build a purpose-built platform to scale AI at what I call the embedded edge market. So that's the history and the backdrop on how we started, what led me to start SiMa.ai, and now we're five and a half years into it.
And SiMa.ai is designing chips for the edge, is that right?
Correct. We build purpose-built chips for the edge. We also make the associated development software.
So our application developers or our customers can write their application and port that application through our software onto our chip.
Yeah. Just one question which I may or may not leave in. I had Dave Patterson on the podcast quite a while ago, and he was talking about instruction sets and creating an open-source instruction set for chip design.
When you talk about the programming environment, are you talking about the instruction set, or how does the instruction set figure into that?
Okay. It's a simple question, but a very complicated answer, and I'll do my utmost. So you should think of our chip as a heterogeneous compute platform. We have our own proprietary ML accelerator. We have an ARM processor complex, which is our control plane processor, and we have a DSP vector engine that we have licensed from Synopsys.
So those three make up the various compute elements we have on our chip. So that's number one. Number two: unlike the cloud, where ML acceleration is the only problem, in our world it's application acceleration. So you need to accelerate not just the ML workload but also the computer vision workload; the pre-processing, the post-processing, the control plane analytics, and the decision-making all need to be on a single chip. So perhaps what Dave Patterson was referring to is that ARM has its own instruction set to program their chip.
He, I think, is driving a RISC-V platform.
That's correct.
Which is creating an open-source instruction set so that anybody could have... You're democratizing, if you will, the access to compute. And I think one day, no doubts, RISC-V will be a very, very popular, if not a parallel, path or an opportunity for many designers.
In the space that we are in, the embedded market, the incumbent or lead solution provider is ARM. And what's more important here is not just the instruction set alone, but the ecosystem of software vendors that surround that platform. Unlike other companies which may create a closed platform where they control the entire software stack, we create an open platform for anybody to be able to build applications on top of us and also leverage the software ecosystem.
So in that aspect, we found that having Arm as a partner is a better way to go for who we are as a company. And I think they've been a fantastic partner. And so we use their instruction set for the control plane aspect of it.
But our secret sauce is really in the ML portion of it. What we enable is a product called the MLSoC, a machine learning system-on-chip. And we allow people the opportunity to partition the problem across any of the heterogeneous compute elements.
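To make the application-acceleration point concrete, here is a minimal sketch using the open-source tools this conversation keeps returning to, OpenCV and ONNX: the ML model is only one stage among pre-processing, post-processing, and decision-making. The model file and the class index are placeholder assumptions, not SiMa.ai's actual API.

```python
import cv2                  # OpenCV, one of the open-source CV platforms mentioned
import numpy as np
import onnxruntime as ort   # generic open-source ONNX inference runtime

session = ort.InferenceSession("detector.onnx")  # hypothetical model file
input_name = session.get_inputs()[0].name

def process_frame(frame_bgr: np.ndarray) -> bool:
    # Pre-processing: classic computer vision, not ML
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (224, 224)).astype(np.float32) / 255.0
    batch = np.transpose(resized, (2, 0, 1))[None, ...]  # HWC -> NCHW

    # ML inference: the only stage cloud-style acceleration focuses on
    scores = session.run(None, {input_name: batch})[0]

    # Post-processing and decision-making: the control plane
    return bool(np.argmax(scores) == 0)  # class 0 = "object of interest" (assumed)
```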
Yeah. And when you say to open it up, I mean, when you were referring to the proprietary software development environment around a chip, I assume you're referring to Nvidia's CUDA.
Yeah.
Correct. And in what way is yours different? I mean, not different technically, but in terms of it being more flexible or more open.
Correct. So really good question. Let me try to stay at a high level and see if I can get my point across.
The embedded world is an open source world. So the world is used to Linux. It's used to computer vision platforms like OpenCL, OpenCV.
And from an ML framework perspective, they want to retain that same open-source access. So they want to be able to work with PyTorch or TensorFlow or ONNX or any variant of an ML framework. So what we have done quite differently from Nvidia, from that perspective, is we have built something made for the embedded market.
Everything we do touches open source. And so, if we're being abstract, think of us as the Ellis Island of everything. So you could give us your poor, your tired, your hungry. And once we ingest their application, we convert it to a proprietary environment to deliver the performance and the power that they desire. So we are very open source and we are not limiting. CUDA was not built for the embedded market.
So to a certain degree, people are having to transform their environment to fit into an Nvidia world or CUDA world. So we do not force the transition. They can remain in the application and the environment they've been used to.
So that's a large difference. But obviously it's the combination of the hardware and the software; for Nvidia, that's CUDA and their silicon. Similarly, our version of that software is called Palette, and Palette, in combination with the MLSoC, delivers world-class performance and power, and ease of use, in parallel.
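As a sketch of what "remaining in the environment they're used to" might look like in practice: a developer stays in PyTorch and exports a portable ONNX artifact, the kind of open-source hand-off point a vendor toolchain such as Palette could then ingest. The model below is a toy stand-in, not a real workload.

```python
import torch
import torch.nn as nn

# Toy stand-in for a customer's vision model
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)
model.eval()

example = torch.randn(1, 3, 224, 224)            # dummy input for tracing
torch.onnx.export(model, example, "model.onnx",  # portable open-format artifact
                  opset_version=17)
```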
I see. What you mean is people can use other platforms than Palette, or Palette works with Linux and other open-source systems?
The latter. So Palette is our entry point. So anybody can bring in their application, and they come to work with us through Palette.
Okay. And I'm sorry, I jumped into those questions that have been nagging me. But let's back up and talk about the evolution of chip architecture.
You know, from the von Neumann architecture and CPUs to the rise of GPUs, which were a kind of accelerator, and then, as Moore's Law slowed, instead of trying to miniaturize transistors further, people started creating application-specific chips called accelerators, and now we're moving to the edge. So can you just kind of walk us through that evolution?
Absolutely. And I actually think you did a great job already in walking through the progression of how compute architectures have evolved. I would still say, at the underlying element of it, until we got into ML, everybody's architecture leveraged von Neumann and some simple load-store instructions.
You load and you store data and you compute, right? So the fundamental architecture hasn't really shifted. And the combination of Moore's Law slowing, performance needs being exacerbated, and power being a limiter has created an opportunity for innovation. And a plethora of new approaches are still being brought together. GPUs have kind of taken pole position on solving AI and ML.
But from our perspective, there's no one size that fits all. And no doubts, they've done a fantastic job. And in reality, Nvidia's strength is not just their GPU architecture, but more importantly, CUDA, right?
And they've done a fantastic job in building that as a moat to really navigate and protect their growth and opportunities. And I admire what they've really done. I think for the edge market, the hard limiter is power.
So things in the edge market cannot support GPU-like power consumption. So your world is really very simple: five watts, ten watts, or twenty watts. And anything more than that is untenable from a system power-performance or from a thermal heat dissipation perspective.
So you got to kind of think of it the other way in that how much compute can I get for a given power budget? That forces a radically different approach and an opportunity for people to think very differently, like we have. And what we really came up with is saying, hey, it's not ML acceleration alone or AI acceleration alone.
We need to think about application acceleration. And that's why we created a heterogeneous compute platform. And if you really dig into our ML architecture, it's a tile array of 10 by 10.
And we fundamentally stream applications to run through the tile array and efficiently utilize the compute that we have on our ML array. And so that's our approach that we have taken. And from our perspective, that's the best we could figure out to solve for the needs of the edge market.
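A back-of-envelope illustration of that "compute for a given power budget" framing; the efficiency figure below is an assumption for illustration, not a published SiMa.ai number.

```python
ASSUMED_TOPS_PER_WATT = 5.0          # hypothetical efficiency of an edge SoC

for budget_watts in (5, 10, 20):     # the edge budgets mentioned above
    tops = budget_watts * ASSUMED_TOPS_PER_WATT
    print(f"{budget_watts:>2} W budget -> ~{tops:.0f} TOPS of usable compute")
```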
And talk about the edge market and why AI is moving to the edge.
Great question. So I think I will open with this and perhaps even close with the same statement. I think the edge market is going to be the next AI gold rush.
And if you really think about the edge market, it's 40,000, 50,000 customers globally. So it's diffused in its footprint. But if you aggregate that, it's a much bigger market than the cloud.
So think automotive, think robotics, think industrial automation, think medical, think the government sector. We use compute in very many different facets of everything that we touch in our lives today. And none of them have really significantly adopted AI yet.
And over the next 10 years, the next 20 years, that is the next big gold rush. And a lot of the focus and attention that's duly focused on the cloud today is probably going to shift to the edge. And this is the exciting opportunity, I think, that's ahead of all of us.
And so I've been positioning this as the next big thing from a compute perspective, wherever you're going to be focusing. And the reason for it is very simple. Everybody's data needs are going through the roof.
The compute needs are going through the roof. And three key dynamics are forcing people to think about the edge a lot more. Number one, locality.
Not every application where you collect the data has the affordable opportunity to transmit the data back on a 5G or on some network back to the cloud and compute and come back. Proof point, automotive. You're scanning the road, you want to detect a pedestrian, you have milliseconds to make a decision, not seconds.
Robotics is a similar thing. So localized compute is key. Second one is security and privacy.
And as you particularly get into generative AI and LLM elements, access to data and really protecting both the security and the safety of it, and the privacy of it is going to become very, very key and you're going to see a lot of architectures move to localized edge compute, where the data stays with you. The third one is TCO. The cloud is pretty expensive.
And also there's monetization of the data. When you have the data, why not monetize the data for your own benefit? Why give it away to somebody else, right?
So the TCO element is going to kick in. So it's not that the cloud is going to go away. The cloud will continue to be a large entity.
But there will be a better balance between what needs to be at the edge and what has to go to the cloud. And I believe in the next 20 years, you're going to have a hybrid mode for almost every architecture or for every compute aspect that I think we're going to look through. And it should be very exciting times ahead.
Yeah. And I remember when we spoke last time, everyone's looking at the edge, all of the chip designers. And there are two ways that people are approaching it.
One, the big companies that have an existing, you know, very capital-intensive architecture are looking to scale down the architecture to make it work at the edge. And then there are companies like SiMa.ai that are building up from scratch with the edge in mind. Yeah.
And what's the role for each of those? Or in your view, is there no role for the scaling down that's...
Yeah. More than the scaling down, what people are not paying enough attention to is the software. So scaling down the silicon, there's a lot of smarts, a lot of capability.
But AI software is an entirely new paradigm. So you cannot just add AI to an existing infrastructure. It's kind of open heart surgery.
So people need to think about the AI software architecture first before they really look at scaling down. This is where the industry's large gap is. There are very many amazing public companies that have classically served the edge.
But very few of them have industry-leading AI software and an AI roadmap. So that's one large opportunity and gap. I would submit to you that if people don't have an AI roadmap, in 10 years they may not be around as a company.
So there is going to be a mad scramble and a rush to really kind of AI enable everybody's roadmap going forward because it's here to stay. And this is not a fad anymore. And if you don't have an AI story, it's probably going to be yesterday's story, right?
So that's the big shift people are moving to. What we are building is something purpose built. And we really targeted this market.
And I would say we are one of the first few companies to really dedicate our focus to this. We are not the very first, but we are one of the very first few. And we have come a long way in five and a half years, and our Gen 1 has been in production for a year and a half.
We are engaged with a wide range of applications and customers and market leaders. And our Gen 2s are around the corner. And so we are pretty excited about what we are going to bring.
The learning I have gone through is, while AI architectures are hard, what's been the Achilles heel remains the Achilles heel for the industry, which is AI software. And this is also perhaps the reason why Nvidia has had a great run and very little competition to date. And so whoever does a great job in AI software is probably going to be the lead.
That much is clear.
And when you say AI software, you're talking about CUDA or?
Yes, their CUDA, right, and Palette in our case. Correct.
Yeah. Yeah. And maybe just explain that a little bit more for listeners that aren't familiar with CUDA or with Palette.
Or Palette, right? And so customers design their application and they come back and say, hey, this is what I want to go accomplish. And they have an expectation on the performance they want, the power that they want, the accuracy that they want, or the cost that they want to spend for a particular application, right?
So they walk in and say, here's my application description, here's what I want to get out of it. What translates the application from a customer description into a chip executable that runs on the chip, and then delivers the promise of the performance, the power, the cost, is really the software programming environment. In Nvidia's case, it's called CUDA, and in our case, it's called Palette.
So it's really a translation between the customer's description of the application to what eventually runs on the chip.
Yeah, and a translation to what? I mean, to...
It ends up being fundamentally a microcode or an executable that is then downloaded onto the chip, so that the chip can be programmed to perform as the application requested.
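The shape of that translation, sketched with hypothetical names; none of these functions are Palette's or CUDA's real API, they only illustrate the description-to-executable pipeline being described.

```python
from dataclasses import dataclass

@dataclass
class AppSpec:
    """The customer's description: what to run and what to expect from it."""
    model_path: str            # e.g. an ONNX file
    target_fps: int
    power_budget_watts: float

def compile_app(spec: AppSpec) -> bytes:
    """Stand-in for the vendor compiler: application description -> executable."""
    return f"microcode: {spec.model_path} @ {spec.target_fps} fps".encode()

def load_onto_chip(blob: bytes, device: str) -> None:
    """Stand-in for the loader that downloads the executable onto the chip."""
    print(f"loaded {len(blob)} bytes onto {device}")

executable = compile_app(AppSpec("detector.onnx", 30, 10.0))
load_onto_chip(executable, "/dev/edge0")   # hypothetical device node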
Okay. You mentioned some of the applications. What are the kinds of industries that you're getting the most interest from?
Right now, I would say, it's across the board. And I'll give you the spectrum, and then I'll amplify the areas where I believe there's going to be the highest traction. So we are engaged with the top five customers in robotics, the top five customers in the government sector, the aerospace and defense sector, automotive, medical, transportation tech, agricultural technologies, and also a category of smart vision that's a Venn diagram overlap with industrial automation, right?
So visual inspection, factory flow automation. So these are all the applications, and we have as a company in the last five and a half years, engaged with the top customers in each one of these market segments. Play this out five years from now, where will we see the highest volume, highest growth?
I think clearly, automotive, just by the nature of it, is going to be a key area of focus for us and the driver. Right behind it, I estimate would be robotics. I think there is a resurgence of a lot of focus on robotics.
COVID brought out a lot of the supply chain issues, and the legacy architectures got in the way of global productivity in a big way. But I also think there are a lot of articles written up on labor shortages globally. And people really want to bring a lot of automation into factory floor environments.
So you're going to see a lot of AMRs or robotic environments across quite a few warehouses. There's a lot of talk around humanoid robots. This is a new area of development.
Some of these may not pan out, or, as with everything in life, may take much longer than anybody's estimate. But the fact that robotics is going to play a very large part in our lives, that's a given.
Yeah.
With these edge applications, they work with the Cloud, right? I mean, they're not intended to work independent of the Cloud, are they?
It depends on the customer's architecture. Take, for example, our chip. Our chip is capable of working independent of the Cloud.
So you could fundamentally train an application, and you could infer that at the edge. There is really no need for the application to be connected to the Cloud, unless you choose to do so, right? And so the application is fully capable of being entirely independent, running on its own with no connectivity.
And many applications, really, if you think through it in the real world, may not have connectivity elements, and/or may struggle to get connectivity. And you don't want to be dependent on the connectivity alone, right? So we built a chip that's entirely capable of being self-managed, and being out there with no need for Cloud connectivity.
But if the customer chooses to have a Cloud connectivity, either for updating the algorithms, or to really do a nightly upload of information or analytics, we enable that also.
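A sketch of that cloud-optional pattern: inference always runs locally, and the analytics upload is best-effort, so the device keeps working with no connectivity at all. The inference stub and the upload step are hypothetical stand-ins.

```python
import time
from collections import deque

buffer: deque = deque(maxlen=10_000)    # local analytics store; oldest dropped

def infer_locally(frame) -> dict:
    """Stand-in for on-chip inference; never depends on the network."""
    return {"ts": time.time(), "detections": 0}

def try_upload(batch: list) -> bool:
    """Stand-in for an optional nightly/periodic sync; may simply fail."""
    return False                        # no connectivity: device still functions

for _ in range(500):                    # stand-in for the camera loop
    buffer.append(infer_locally(frame=None))
    if len(buffer) >= 100:              # opportunistic, not required
        if try_upload(list(buffer)):
            buffer.clear()              # only discard once safely uploaded
```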
Yeah. Are you working with, I would assume you are with many drone manufacturers, because this sounds like what you would need on drones.
Absolutely. And so we have built a platform that is best in class in performance and power, and the programming environment, we believe, is the best in ease of use in the industry compared to our peers, if you will. And drones are a poster child that needs all three.
They want the highest performance, the longest flight time, the lowest battery consumption. And I would say we are better poised to serve that compared to anybody else. And so absolutely, we are engaged with the market leaders on unmanned aircraft and drones.
And we see a lot of good traction for technology in that market.
Yeah. And the processing that's taking place is computer vision and analytics and control, I guess. Are there others that I'm not thinking of?
No, I think there is also navigation.
Right.
You could argue navigation fits in the context of computer vision. So there is fundamentally a camera subsystem or some kind of a LiDAR or radar subsystem. So you're gathering location-specific data and you're computing and you're making decisions.
So it's a combination of computer vision, perception, localization, segmentation, and really distributing the workload, both in an AI context but also into classic computer vision. And you also have control plane functions to watch your flight navigation. And really, you're combining all of that into a single chip. And this is kind of what we do well.
And given that we are a heterogeneous compute platform, we do more than AI workloads. And so you could actually put the control plane, the navigation, the decision-making, and have it all work on a single chip.
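One way to picture that single-chip partitioning is an illustrative mapping of pipeline stages onto the three compute elements described earlier; the stage names and the mapping are assumptions for illustration, not a real Palette configuration.

```python
# Engines: "arm" = control-plane processor, "dsp" = vector engine,
# "mla" = ML accelerator. Stage names and mapping are illustrative only.
PIPELINE = {
    "camera_capture":  "arm",   # I/O and scheduling (control plane)
    "preprocess":      "dsp",   # resize, color convert (classic CV)
    "detection_model": "mla",   # the neural network itself
    "tracking":        "dsp",   # localization/segmentation helpers
    "flight_decision": "arm",   # navigation and control logic
}

for stage, engine in PIPELINE.items():
    print(f"{stage:>15} -> {engine}")
```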
Yeah. And how large are the chips? Is there a standard product? You talk about it being a heterogen... genus...
I'm glad I'm not the only one that struggles with that.
Yeah, that's right. How large are the chips? Because certainly, you know, the Nvidia packages or systems are getting so big you need a truck to move them. And they're certainly not going to fly in a drone. So how large are your chip packages?
We are really small. That's it. And so you can almost visualize it: what you see is my thumb. That's about the size of what we need to have, and that's what we deliver. So that's the chip.
We also provide board form factors so that customers can easily integrate them into their existing structures. And one of the benefits of having spent 30-odd years is I've spent a lot of time studying, not just me, our entire team, what we need to be at in terms of cost, power, form factor. And we have put all of that into what we have eventually built.
And I think the opportunity we have had is we have built something from the ground up, purpose-built. I am not a public company that's got a lot of legacy. I have zero legacy.
So we have said, hey, this is our learning. This is what we need to do. And we have raised a good amount of capital.
And we said, let's put the capital to use and build something really for the customers.
The other side of what's happening in the chip world are the foundries. You're designing chips, but you need a foundry to etch the chips. And, you know, various companies are building foundries of various types in the United States.
But for the time being, TSMC and Taiwan really is the one place where you can get this done. How do you get capacity with them?
I keep joking that though we're in the technology business, it is increasingly a relationship business. And one of the side benefits of growing old is you get to know a lot of people that are all in key decision-making positions at all these companies. We are a very, very small company.
We are fantastically happy with the partnership we have with TSMC. They've been an amazing partner to us from day one. And they've made us feel like a big public company.
And so I'm very lucky that our problems are all demand-related and not supply-related. And so we have worked very hard with them, and we have done a good job in managing the inventory needs and the market demand and really planning well. And I would say it's never an easy thing to do, but when you've done something for 30 years, you tend to become reasonably good at it.
And so I think the team is a very well-qualified team, but we won't be here without our partnership with TSMC. So we're very proud of what we've done together with them.
Yeah. Is there something different in the equipment required to build these chips? I mean, I would presume in that you're trying to build them small, that you want to use the smallest, I'm not sure what it's called, but nanometer gates or whatever.
And what size are you at? And are you looking, I would presume you are, at the new foundries being built? And is your architecture something that could be produced at any foundry, or is it a particular kind of foundry that you would need?
Great set of questions. I think one of the key things we decided to do early on was we recognized that our software-centric approach gave us a lot of opportunity to get better performance and power. We decided to scale back in what you referred to as process technology.
And so we created our Gen 1 silicon product at 16 nanometer. And this is at least 10-to-12-year-old technology. So we did not want to take very advanced process technologies and take a lot of risk.
There's also a cost equation and a development equation. And as you get more aggressive with newer nodes, the cost of the mask set to produce the chip, but also the development cost, both really go up quite a bit. So for Gen 1, we said, hey, we're getting amazing performance and power compared to anybody else.
And in real applications, we're 10x of the very best companies. So we said, let's stay back in process technology in Gen 1. We did 16 nanometer.
And in Gen 2, which we have now gone public, we have announced that we'll be at 6 nanometer. So we'll be getting a 2x improvement over what we have done in Gen 1 on a performance per watt, right, which is a huge leap. Typically in our business, you get 10-15% improvement, Gen 1 to Gen 2.
For us to get a 100% leap is pretty phenomenal. And we now feel like we have the right opportunity to do it, and our customers will duly derive the benefits of it. And to answer your question, we are tracking very closely where we get our chips manufactured, and we have the luxury of getting them manufactured now in the US as well.
And at the appropriate time, we'll take full advantage of every opportunity that we have. And within TSMC, I think we definitely have the choice of being manufactured any place. And just one thing: our business doesn't work in a way that you build a design and you could go to anybody. So, yes, at a design implementation level, you have that opportunity. But once you get into chip development, you're fundamentally locked into one entity. That's typically how our companies work, and we are too small to play with too many players.
And our volumes are too low to really play the game. And if you redo it, it's uneconomical or economically not viable. And so I think we really want to work with the best in class, and TSMC is as good as it gets.
And they have done a phenomenal job, and they truly understand that this is a services business. And even today, I'm very impressed that as a startup, we get the same attention, if not perhaps more than a public company. And so all of those really are huge positives.
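For scale, the generational arithmetic from a couple of answers back: the typical 10-15% gain versus the stated 2x performance-per-watt leap from the 16 nm Gen 1 to the 6 nm Gen 2. The baseline is a normalized unit, not benchmark data.

```python
gen1 = 1.0                        # normalized performance per watt
typical_gen2 = gen1 * 1.15        # the usual 10-15% generational bump
claimed_gen2 = gen1 * 2.0         # the 100% leap described above

print(f"typical Gen 2: +{typical_gen2 / gen1 - 1:.0%}")   # +15%
print(f"claimed Gen 2: +{claimed_gen2 / gen1 - 1:.0%}")   # +100%
```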
Yeah. So you're five years in. Are your chips deployed now in any edge devices, or are you still working toward that?
So we are deployed. I would still say it takes a long time in our business to really scale. Unlike the cloud business or the consumer business, where the design-in cycle is fast, things in the embedded space take a long time.
But then the other positive is that they stay in production for 15, 20 years. So it's a game of inertia. Getting in is pretty hard, and the opportunity to stay in production is also pretty significant.
So we have quite a few customers that are moving to production with us globally. And over the next year or two, we are excited about scaling more and more customers to get into production with us.
Yeah. And how many people are at the company right now? And how do you scale a company like this?
Because as you said, the demand is massive.
We are about 160 of us today. And with a lot of consultants or contractors, we are around 200. So I would say, roughly, we are 200 people working at the company.
We are lean. We are incredibly lean for what we are doing. And it really is kudos to everybody that works here, in that it's a startup and we do whatever it takes to make things happen.
And we are in some ways doing what many companies use tens of thousands of people to get done. And I'm a big, big admirer of everybody that we have here. I'm biased, but these are all more than employees, family for us, and we do whatever it takes to go make things happen.
The macro thing that is a priority for us is scaling: scaling our go-to-market, and how we touch 40,000 customers by scaling the go-to-market partners that we have, reps and distributors and VARs and ODMs and ISVs. And again, I go back to, not only myself, but I think we have had a lot of experience scaling businesses in our careers before.
And so we retain the core of the company to be small. But through a partner network, we expand our footprint and our reach. The underlying thing that really enables us, Craig, is really our software.
And the more self-managed the software is, the more the customers can manage themselves. That's a large, large component of how we can enable the scaling. If every one of them needed hand-holding, we wouldn't have a company.
One day, I think, we are going to touch tens of thousands of customers and it will be a very exciting scaling opportunity for us.
Yeah. And the biggest market, I would assume right now is the United States, but I don't know. I mean, this is a global phenomenon.
Absolutely.
I think by footprint, North America would be the largest market for us, no doubts. But we are enjoying the benefit of our customer traction in Europe, Japan, in Korea and India too. And about 70 of our team members are based in Bangalore in India.
And we also are building out sales and go-to-market activities in India. And I think India is going to be a massive growth area and, in general, an adopter of AI. And we feel like we are one of the leaders and pioneers in this category of edge market scaling.
And I am quite convinced that I think in the next 10 years, this is going to be a high priority for every nation, a high priority for every industry sector. And people that don't have an AI-enabled environment are going to be left behind. And I think it's a matter of now who's wanting to gain leadership position.
There's going to be a fear of missing out that's going to go through the entire customer base.
Yeah. And you were talking about how, I'm going to avoid the word again, your chip is not specialized for one particular application. But is there an application that at some point, if not in the design of the chip, then in scaling your sales and marketing and all of that, you've got to focus on? A few applications, and let the others come to you.
Which are you guys focused on?
I would say robotics, industrial automation, and vision systems. If you really do a Venn diagram of those three, I think that'd be a key gravity area for us. And I should expand on that by saying, every day our struggle is specificity versus generality. So we've got to really make this balance happen. And I've built products before that will work for 40,000 customers, where you build one thing, every one of them is doing something radically different than the other, but it all still ends up being mostly the performance, mostly the power, mostly the cost that they need. Not easy to do.
And we have learned and built good businesses earlier in our careers, and we have used that experience to build this. And I would say, though I explained our gravity point of highest volume, where I think we're going to drive, most people are going to find out that we are better than everybody else for their application as well, in all the other market segments we've already talked about.
Yeah, this wasn't, I wasn't going to ask this, but it just occurs to me on this question of scaling. You know, the big boys are all chasing this market as well. What's your future?
Do you think you guys will end up as part of a larger, you know, Google or Nvidia or somebody at that scale? Or is your expectation to stay independent and grow to that scale?
Really good question. And I would maybe give you two or three things that I think maybe I should mention. One is what I've learned now in five and a half years, and all my career I've been in large public companies, and this is my very first startup.
I started, and what I've really come to recognize and truly internalize is this: it's not the size of the dog in the fight, but the size of the fight in the dog that matters. I've been in very large companies and I always wondered, my God, we need more to get things done. We are 200 people and we are doing the things of 10,000-, 20,000-, 100,000-person companies.
When you really build yourself into an element of, this is our team and this is what we're going to go do, and this is our lifeblood, you operate very differently. That's one thing. The second thing I should say is, I have no idea how to predict the future.
What I do know is if you have results, you have options. We have really submitted ourselves to not worry about what outcome is ahead of us. We don't know.
What I do know is if I drive results, I have outcomes. And if I don't have results, I don't need to worry about outcomes. And so we single-mindedly put our head down to really focus on driving to be the best in the category that we are in.
And we cannot be everything to everybody. We really need to focus and do a good job of what we do. And our company definitely has been interesting to a lot of people.
I think that should be a fair thing for me to validate. And I think it would be fair to say that I think we are definitely noticed by everybody, by our customers, by our peers. So what our future really holds?
TBD. When I wake up every day and go, there's one thing to get done, and I drive back home saying, yeah, we got that done today. And I'm happy.
Yeah. And this is getting off subject, but I'm curious about the fundraising journey. How easy was it to get the initial capital?
And are you still raising funds?
I would say two things. I think for a long time, investments in chips had dried up. Chips were no longer really a high priority and a focus, just because of the investment amount needed and the return cycle.
It just didn't fit the VC mold. And so almost from 2005 till maybe even 2016, 2018, there were hardly any chip startups. It's now a key gold rush, and there's an enormous amount of attention and spend put into it.
And I would say raising money is hard. Anybody that starts a company, if you walk in assuming that raising money is easy, I would urge you just really pragmatically internalize that it's hard to do. But I also simultaneously held the notion that the right kind of idea, the right kind of system and a setup always is capable of attracting good capital in any circumstance.
And we've been very lucky. We have raised around 270 million so far. And we have amazing investors that are all deep pedigree investors.
And it's public information who our key investors are: Fidelity, Dell Technologies, Amplify Partners, and recently we extended a round and added Maverick and also Point72. So these are a few, and these are household names as it comes to deep tech, and also companies with deep pockets. So having money always helps.
Having great people around you that know everybody and can connect you helps. Having a great board helps. And you need all of that to really go pull this off.
And so it's my day job to be raising money every day. I'm thinking about raising money every day. We have done well so far.
We also manage the money well. And I think we have carved ourselves out to be an industry-leading company. And I do think that at some point we'll be raising some more capital to further our growth.
Yeah. The other difficult thing is hiring good people. I mean, particularly when you have somebody like Nvidia, you know, vacuuming up people.
Has that been a challenge?
I would say it's been less of a challenge than I had expected. That would be my answer. Again, hiring is really difficult.
Hiring good people is harder. Hiring amazing people is even harder. And in a startup, you better have amazing people.
And how do you get 200 people to do something that tens of thousands are doing, right? And it's not just amazing people, it's amazing people that are good team players. So you do the AND function of all of this.
You kind of come back to a very small universe of human beings that really fit all of that. And what we do is not just chips; it's software, it's AI, it's computer vision, it's automotive. Not an easy thing to do.
And so what's really helped us is network. I've spent 30 years. I've known a lot of good people that collectively believed in me and I believed in them.
And so there's a network, not just me, everybody that's around me has a network. And so the network's really helped a lot. The second thing is we're building something that's industry leading.
And I'm an engineer. You want to be part of a winning formula. You want to be part of a David vs. Goliath fight. You want a story in your life where you said, hey, we started with nothing and we built something that beat the very best. So we have done that now, and that also attracts a lot of people; people like success.
And they're like, wow, a 200-person company. And we have publicly validated that we are the best in class in the category that we are in. And people want to be a part of the winning formula.
So I would say it's been easier than I had expected, but every single hire is hard. Every single hire is something we pay a lot of attention to. And even now, though we are close to 200 people, I know everybody in the company and I kind of know their spouse and I know their children.
And we need that family environment to really attract more people. And then it's not just hiring, retention too is hard. And people in our trade are the most sought after personalities in software and hardware and AI.
And so it's not just hiring, but it's also retaining and creating an environment that's motivating, where they choose to stay and choose to really be part of a winning formula. So, I would say, you've covered fundraising, you've covered people, you've covered solving tough problems, you've covered customers and winning. And that's what I wake up to every day, and it's my day job to make sure I play my part.
But luckily, I have amazing people around me that also do way better than me. I'm really glad to see us become a good company.
Where do you see the chips going? I mean, we're getting into specialized accelerators. There's now, I had somebody from Western Sydney University in Australia, I think that's the name of the university. They're building a brain-scale neuromorphic computer using FPGAs. Is that right?
Yeah.
Field-programmable gate arrays. And where do you see this going? Are we going to end up with a highly fragmented chip market where, depending on your use case, you'll use one kind of a chip?
Or do you think that, and then there's quantum chips? Sure. Neuromorphic and quantum, do you think they're going to eventually take a big share of the market away from today's mainstream chips?
Great question, Craig. And it's hard to predict the future, because you're probably going to be more wrong than right. And with that caveat out there: what actually motivates architectural choices is very rarely silicon alone.
It's actually software. And so part of the reason why neuromorphic and quantum haven't taken off, though these are not new concepts, they've been around for a long time, is the ability to scale software to deliver the eventual performance and power that you need, with the ease of use. It always ends up being the case.
And case in point, neural networks are not new. AI is not new. It's been talked about since Turing's time, nearly 100 years ago.
At the end of the day, it's math. It's all Newtonian calculus at the end of the day. And so you need to really wait for the software maturity and the right silicon capability to deliver that. That's really where things are. My observation is that everything gets hyped a little too much, too quickly. And things take way longer than people estimate.
And what people underestimate is, how do you scale a proof point? And so a proof point is easy. How do you scale a proof point to becoming an industry standard?
And it always is 10 to 15 years longer than people estimate. So I have no doubts that there is a large portion of a future that's going to be a combination of some version of technologies we have today. No doubts neuromorphic has a play, no doubts analog has a play, no doubts quantum computing has to be a part of it.
And each one brings various different benefits. But they also have large deltas of things that need to be solved for scaling. A proof point doesn't make a product.
Yeah. And so that's really where the gap's at. So I definitely think that the world will have a plethora of choices ahead of it.
And to play this out, there are also biocomputers, DNA computing. So there's a lot, I think, still ahead of us. And I would say, what I foresee for the next 10-plus years is still going to be the mainstream technology, the known technologies we have today, driving the majority of the deployments, maybe 95 to 99 percent of all deployments.
But 10, 20, 30 years out, I think there will be an opportunity for newer elements to come together. So that's my jaded, 30-years-of-watching-this story of expecting and hoping and things being delayed. But I think I'm a little more right on this than wrong.
By popular demand, NetSuite has extended its one-of-a-kind flexible financing program for a few more weeks. Head to netsuite.com/eyeonai. That's Eye On AI, E-Y-E-O-N-A-I, all run together.
Again, head to netsuite.com/eyeonai, for its one-of-a-kind flexible financing program.
no subject
Date: 2024-09-05 21:03 (UTC)UPD на Apple Podcasts он есть
no subject
Date: 2024-09-05 21:37 (UTC)The edge market is going to be the next AI gold rush. And if you really think about the edge market, it's 40,000, 50,000 customers globally. So it's diffused in its footprint.
But if you aggregate that, it's a much bigger market than the cloud.
By popular demand, Netsuite has extended its one-of-a-kind flexible financing program for a few more weeks. Head to netsuite.com/eye On AI. That's Eye On AI, E-Y-E-O-N-A-I, all run together.
Again, head to netsuite.com/eye On AI. netsuite.com/eye On AI, for its one-of-a-kind flexible financing program.
Craig, pleasure again. And so I took a long road to really come in to CMA, about 30 plus years in the industry. A background in software, silicon, and then I managed fairly large businesses.
In the last 10 years, I was at a company called Xilinx. I was there for 18 long years. I was their executive vice president, general manager when I stepped out.
And I was pretty passionate about watching what's happening in AI. And clearly, I think AI, even then, five years ago, took off in the cloud. And obviously in the consumer mobile experience as well.
And what was fascinating for me was that I think the physical world that we live in, the industrial world we live in, robotics, industrial automation, medical, automotive, we're still fairly using archaic old techniques and we're re-embracing AI. And I wanted to start a company that would really build a purpose-built platform to scale AI at what I call the embedded edge market. So that's the history and the backdrop on how we started, what led me to start SEMA, and now we're five and a half years into it.
And SEMA is designing chips for the edge, is that right?
Correct. We built our purpose-built chips for the edge. We also make the associated development software.
So our application developers or our customers can write their application and port that application through our software onto our chip.
Yeah. Just one question which I may or may not leave in. I had Dave Patterson on the podcast quite a while ago, and he was talking about instruction sets and creating an open-source instruction set for chip design.
Is when you talk about the programming environment, are you talking about the instruction set or how does the instruction set figure into that?
Okay. It's a simple question, but a very complicated answer, and I'll do my utmost. So you should think of our chip.
As a heterogeneous compute platform. So we have our own proprietary ML accelerator. We have an ARM processor complex, which is our control plane processor, and we have a DSP vector engine that we have licensed from Synopsys.
So those three make up the various compute elements we have on our chip. So that's number one. Number two, unlike the cloud where ML acceleration is the only problem.
In our world, it's application acceleration. So you need to accelerate not just the ML workload, but also the computer vision workload, the pre-processing, the post-processing, the control plane analytics, and the decision-making all needs to be on a single chip. So perhaps what Dave Patterson was referring to is that ARM has its own instruction set to program their chip.
He, I think, is driving a RISC-V platform.
That's correct.
Which is creating a open-source instruction set so that anybody could have... You're democratizing, if you will, the access for compute. And I think one day, no doubts, RISC-V will be a very, very popular, if not a parallel path or an opportunity for many designers.
In the space that we are in, the embedded market, the incumbent solution provider or lead solution providers are. And what's more important here is not just the instruction set alone, but the ecosystem of software vendors that surround that platform. Unlike other companies which may create a closed platform where they control the entire software stack, we create an open platform for anybody to be able to build applications on top of us and also leverage the software ecosystem.
So in that aspect, we found that finding Arm as a partner is a better way to go for who we are as a company. And I think they've been a fantastic partner. And so we use their instruction set for the control plane aspect of it.
But our secret sauce is really in the ML portion of it. But what we enable is what we call a product called a MLSOC, a machine learning system on a chip. And we allow people the opportunity to partition the problem in any of the heterogeneous compute elements.
Yeah. And when you say to open it up, I mean, when you were referring to the proprietary software development environment around a chip, I assume you're referring to Nvidia's CUDA.
Yeah.
Correct. And in what way is yours different? I mean, not different technically, but in terms of it being more flexible or more open.
Correct. So really good question. So let me try to stay at the high level and try, let's see if I can get my point across.
The embedded world is an open source world. So the world is used to Linux. It's used to computer vision platforms like OpenCL, OpenCV.
And from an ML framework, they want to retain that same open source access. So they want to be able to work with PyTorch or TensorFlow or Onyxx or any variant of an ML framework. So what we have done quite different than Nvidia from that perspective is we have built something made for the embedded market.
Everything we do touches open source. And so we kind of, I mean, if we're abstract, think of us as the Ellis Island of everything. So you could give us your poor, you're tired, you're hungry.
And once they take, once we ingest their application, then we convert them to a proprietary environment to deliver the performance and the power that they desire. So we are very open source and we are not limiting. CUDA was not built for the embedded market.
So to a certain degree, people are having to transform their environment to fit into an Nvidia world or CUDA world. So we do not force the transition. They can remain in the application and the environment they've been used to.
So that's a large difference that we go to. But obviously the combination of the hardware and the software, CUDA and their silicon, similarly, our version of our software is called Palette. Palette is a combination of MLSOC, Dolores, a world-class performance and power, and ease of use parallel.
I see. What you mean is people can use other platforms than Palette, or Palette works with Linux and other open-source systems?
The latter. So Palette is our entry. So anybody can bring in a thing, and they come to work with us through Palette.
Okay. And I'm sorry, I jumped into those questions that have been nagging me. But let's back up and talk about the evolution of chip architecture.
You know, from the Von Neumann architecture and CPUs and the rise of GPUs, and which was a kind of accelerator, and then the kind of split in, you know, as Moore's Law slowed, then people's, instead of trying to miniaturize transistors further, they started creating specific chips called accelerators, and then now we're moving to the edge. So can you just kind of walk us through that evolution?
Absolutely. And I actually think you did a great job already in walking through the progression of how, I think, compute architectures of a wall. I would still say at the underlying element of it, until we got into ML, everybody's architecture leveraged one alignment and some simple load store instruction.
You load and you store data and you compute, right? So the fundamental architecture hasn't really shifted. And combination of Moore's law slowing, but also the performance needs being exacerbated.
And power being a limiter has forced an hour, has created an opportunity for innovation. And a plethora of new approaches are still being brought together. GPUs have kind of taken pole position on solving AI ML.
But from our perspective, there's no one size that fits all. And no doubts, they've done a fantastic job. And in reality, Nvidia's strength is not just their GPU architecture, but more importantly, CUDA, right?
And they've done a fantastic job in building that as a mode to really navigate and protect their growth and opportunities. And I admire what they've really done. I think for the edge market, hard limiters power.
So things in the edge market cannot support a GPU like power consumption. So your world is really very simple and five watts, ten watts or 20 watts. And anything more than that is untenable from a system power performance or from a thermal heat dissipation perspective.
So you got to kind of think of it the other way in that how much compute can I get for a given power budget? That forces a radically different approach and an opportunity for people to think very differently, like we have. And what we really came up with is saying, hey, it's not ML acceleration alone or AI acceleration alone.
We need to think about application acceleration. And that's why we created a heterogeneous computer platform. And if you really dig into our ML architecture, it's a tile array of 10 by 10.
And we fundamentally stream applications to run through the tile array and efficiently utilize the compute that we have on our ML array. And so that's our approach that we have taken. And from our perspective, that's the best we could figure out to solve for the needs of the edge market.
no subject
Date: 2024-09-05 21:38 (UTC)Great question. So I think I will open with this and perhaps even close with the same statement. I think the edge market is going to be the next AI gold rush.
And if you really think about the edge market, it's 40,000, 50,000 customers globally. So it's diffused in its footprint. But if you aggregate that, it's a much bigger market than the cloud.
So think automotive, think robotics, think industrial automation, think medical, think the government sector. We use compute in very, very many different facets and everything that we touch in our lives today. And none of them really have significantly adopted AI yet.
And over the next 10 years, the next 20 years, that is the next big gold rush. And a lot of focus and a lot of attention that duly is focused on the cloud today is probably going to shift the edge. And this is the exciting opportunity, I think, that's ahead of all of us.
And so, I think I've been positioning this as the next big thing from a compute perspective and wherever you're going to be focusing. And the reason for it is very simple. Everybody's data needs are going through the roof.
The compute needs are going through the roof. And three key dynamics are forcing people to think about the edge a lot more. Number one, locality.
Not every application where you collect the data has the affordable opportunity to transmit the data back on a 5G or on some network back to the cloud and compute and come back. Proof point, automotive. You're scanning the road, you want to detect a pedestrian, you have milliseconds to make a decision, not seconds.
Robotics is a similar thing. So localized compute is key. Second one is security and privacy.
And as you particularly get into generative AI and LLM elements, access to data and really protecting both the security and the safety of it, and the privacy of it is going to become very, very key and you're going to see a lot of architectures move to localized edge compute, where the data stays with you. The third one is TCO. The cloud is pretty expensive.
And also there's monetization of the data. When you have the data, why not monetize the data for your own benefit? Why give it away to somebody else, right?
So the TCO element is going to kick in. So it's not that the cloud is going to go away. The cloud will continue to be a large entity.
But there will be a better balance between what needs to be at the edge and what has to go to the cloud. And I believe in the next 20 years, you're going to have a hybrid mode for almost every architecture or for every compute aspect that I think we're going to look through. And it should be very exciting times ahead.
Yeah. And I remember when we spoke last time, everyone's looking at the edge of all of the chip designers. And there are two ways that people are approaching it.
One, the big companies that have an existing, you know, very capital-intensive architecture are looking to scale down the architecture to make it work at the edge. And then there are companies like SEMA that are building up from scratch with the edge in mind. Yeah.
And what's the role for each of those? Or in your view, is there no role for the scaling down that's...
Yeah. More than the scaling down, what people are not paying enough attention to is the software. So scaling down the silicon, there's a lot of smarts, a lot of capability.
But AI software is an entirely new paradigm. So you cannot just add AI to an existing infrastructure. It's kind of open heart surgery.
So people need to think about the AI software architecture first before they really look at scaling down. This is where the industry's large gap is. There are very many amazing public companies that have classically served the edge.
But very few of them have an industry leading AI software and an AI roadmap. So that's one large opportunity and gap. I would submit to you that if people don't have an AI roadmap in 10 years, they may not be around as a company.
So there is going to be a mad scramble and a rush to really kind of AI enable everybody's roadmap going forward because it's here to stay. And this is not a fad anymore. And if you don't have an AI story, it's probably going to be yesterday's story, right?
So that's the big shift people are moving to. What we are building is something purpose built. And we really targeted this market.
And I would say we are one of the first few companies to really dedicate our focus to this. We are not the very first, but we are one of the very first few. And we have come a long way in five and a half years, and we are Gen 1 in production for a year and a half.
We are engaged with a wide range of applications and customers and market leaders. And our Gen 2s are on the corner. And so we are pretty excited about what we are going to bring.
The learning I have gone through is, while AI architectures are hard, what's been the Achilles heel remains the Achilles heel for the industry, which is AI software. And this is also perhaps the reason why Nvidia has had a great run and very little competition to date. And so AI software is probably, whoever does a great job in AI software is going to be the lead.
That much is clear.
And when you say AI software, you're talking about CUDA or?
Yes, RDA Coolant, right, a palette in our case. Correct.
Yeah. Yeah. And maybe just explain that a little bit more for listeners that aren't familiar with CUDA or with palettes.
Our palette, right? And so I think customers design their application and they come back and say, hey, this is what I want to go accomplish. And they have an expectation on the performance they want, the power that they want, the accuracy that they want, or the cost that they want to spend for a particular application, right?
So they walk in and say, here's my application description, here's what I want to get out of it. What translates the application from a customer description? Into a chip executable that runs on the chip, and then delivers the promise of the performance, the power, the cost, is really the software programming environment, and in Nvidia's case, it's called CUDA, and in our case, it's called PALT.
So it's really a translation between the customer's description of the application to what eventually runs on the chip.
Yeah, and a translation to what? I mean, to...
It ends up being fundamentally a microcode or an executable that is then downloaded onto the chip, so that the chip can be programmed to perform as the application requested.
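To make that translation step concrete, here is a minimal sketch of what such a compile-and-deploy flow can look like from the developer's side. Every name in it, the `edge_sdk` module, its functions, the device path, and the model file, is an illustrative placeholder under assumed behavior, not the actual Palette API:

```python
import numpy as np
import edge_sdk  # hypothetical toolkit; stands in for a vendor SDK

# The customer's description: a trained model plus the targets they walk in with.
model = edge_sdk.load_model("defect_detector.onnx")  # hypothetical loader
spec = edge_sdk.TargetSpec(
    fps=30,            # required throughput
    power_watts=5.0,   # power budget
    precision="int8",  # quantization chosen to hit power/cost targets
)

# Compilation is the "translation": the framework-level graph becomes a chip
# executable, with ML layers mapped to the accelerator, pre/post-processing
# to the DSP vector engine, and control logic to the ARM cores.
artifact = edge_sdk.compile(model, target=spec)

# The executable (microcode) is then downloaded onto the device and run there.
device = edge_sdk.Device("/dev/edge0")              # hypothetical device node
device.flash(artifact)
frame = np.zeros((1, 3, 224, 224), dtype=np.uint8)  # dummy camera frame
result = device.run(frame)                          # inference happens on-chip
```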
And right now, it's, I would say, across the board. And I'll give you the spectrum, and then I'll amplify the areas where I believe we're going to have the highest traction. So we are engaged with the top five customers in robotics, the top five customers in the government sector, the aerospace and defense sector, automotive, medical, transportation tech, agricultural technologies, and also a category of smart vision that's a Venn-diagram overlap with industrial automation, right?
So visual inspection, factory flow automation. These are all the applications, and as a company in the last five and a half years, we have engaged with the top customers in each one of these market segments. Play this out five years from now: where will we see the highest volume, highest growth?
I think clearly, automotive, just by the nature of it, is going to be a key area of focus for us and the driver. Right behind it, I estimate would be robotics. I think there is a resurgence of a lot of focus on robotics.
COVID brought a lot of the supply chain issues, and the legacy architectures got in the way of global productivity in a big way. But there have also been a lot of articles written on labor shortages globally. And people really want to bring a lot of automation into factory floor environments.
So you're going to see a lot of AMRs, autonomous mobile robots, and robotic environments across quite a few warehouses. There's a lot of talk around humanoid robots. This is a new area of development.
Some of these may not pan out or as with everything in life, may take much longer than anybody's estimate. But the fact that I think robotics is going to play a very large portion of our lives, that's a given.
Yeah.
With these edge applications, they work with the Cloud, right? I mean, they're not intended to work independent of the Cloud, are they?
It depends on the customer's architecture. Take, for example, our chip. Our chip is capable of working independent of the Cloud.
So you could fundamentally train an application, and you could infer that at the edge. There is really no need for the application to be connected to the Cloud, unless you choose to do so, right? And so the application is fully capable of being entirely independent, running on its own with no connectivity.
And many applications, if you really think through it, may not have connectivity in the real world, or may struggle to get connectivity. And you don't want to be dependent on connectivity alone, right? So we built a chip that's entirely capable of being self-managed, out there with no need for Cloud connectivity.
But if the customer chooses to have Cloud connectivity, either for updating the algorithms or to do a nightly upload of information or analytics, we enable that also.
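As a rough illustration of that hybrid pattern, here is a sketch in which inference never blocks on the network, while analytics upload and model updates happen opportunistically. The `model` and `cloud` objects and their methods are hypothetical stand-ins, not a specific vendor API:

```python
import queue
import socket

analytics = queue.Queue()  # results buffered locally between syncs

def cloud_reachable(host="updates.example.com", port=443, timeout=2.0):
    """Best-effort probe; failure simply means 'keep running offline'."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def process_frame(frame, model):
    """The hot path: inference is always local and never waits on the network."""
    result = model.infer(frame)  # runs entirely on the edge chip
    analytics.put(result)        # queued for a later, optional upload
    return result

def nightly_sync(model, cloud):
    """Optional path: runs only if the customer wires up cloud connectivity."""
    if not cloud_reachable():
        return                   # the device stays fully functional offline
    batch = []
    while not analytics.empty():
        batch.append(analytics.get())
    cloud.upload(batch)          # nightly upload of analytics
    update = cloud.fetch_model_update()
    if update is not None:
        model.load_weights(update)  # algorithm update pushed from the cloud
```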
Yeah. And I would assume you are working with many drone manufacturers, because this sounds like what you would need on drones.
Absolutely. And so we have built a platform that is best in class in performance and power, and we believe the programming environment offers the best ease of use in the industry compared to our peers. And drones are a poster child that needs all three.
They want the highest performance, the longest flight time, the lowest battery consumption. And I would say we are better poised to serve that than anybody else. So absolutely, we are engaged with the market leaders in unmanned aircraft and drones.
And we see a lot of good traction for our technology in that market.
Yeah. And the processing that's taking place is computer vision and analytics and control, I guess. Are there others that I'm not thinking of?
No, I think there is also navigation.
Right.
You could argue navigation fits in the context of computer vision. There is fundamentally a camera subsystem or some kind of a LIDAR or radar subsystem. So you're gathering location-specific data, computing on it, and making decisions.
So it's a combination of computer vision, perception, localization, segmentation, and really distributing the workload, both in an AI context but also into classic computer vision. And you also have control plane functions to watch your flight navigation. You're combining all of that into a single chip, and this is what we do well.
And given that we are a heterogeneous compute platform, we do more than AI workloads. So you could actually put the control plane, the navigation, the decision-making, and have it all work on a single chip.
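A conceptual sketch of that partitioning for a drone's frame pipeline might look like the following. The stage names and unit assignments are illustrative assumptions, not a real scheduler API:

```python
from enum import Enum, auto

class Unit(Enum):
    """The three compute elements described above."""
    ML_ACCELERATOR = auto()  # neural-network layers
    DSP_VECTOR = auto()      # classic computer vision / signal processing
    ARM_CONTROL = auto()     # decision-making and flight control

# Each frame flows through stages, each pinned to the unit best suited to it.
PIPELINE = [
    ("undistort_and_resize", Unit.DSP_VECTOR),      # pre-processing
    ("detect_obstacles",     Unit.ML_ACCELERATOR),  # perception network
    ("segment_landing_zone", Unit.ML_ACCELERATOR),  # segmentation network
    ("fuse_localization",    Unit.DSP_VECTOR),      # visual odometry math
    ("plan_and_actuate",     Unit.ARM_CONTROL),     # control-plane decision
]

def run_frame(frame, kernels):
    """Run each stage in order; `kernels` maps stage name -> callable.
    On real hardware, each call would be dispatched to its assigned unit."""
    data = frame
    for stage, unit in PIPELINE:
        data = kernels[stage](data)
    return data
```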
I'm glad I'm not the only one that struggles with that.
Yeah, that's right. How large are the chips? Because certainly the Nvidia packages or systems are getting so big you need a truck to move them.
And they're certainly not going to fly in a drone. So how large are your chip packages?
We are really small, that's the point. You can almost visualize it as about the size of my thumb.
That's about the size we need to be at, and that's what we deliver. So that's the chip.
We also provide board form factors so that customers can easily integrate them into their existing systems. And one of the benefits of having spent 30-odd years in the industry is that I, and really our entire team, have spent a lot of time studying what we need to be at in terms of cost, power, and form factor. And we have put all of that into what we eventually built.
And I think the opportunity we have had is that we built something from the ground up, purpose-built. I am not a public company that's got a lot of legacy. I have zero legacy.
So we have said, hey, this is our learning. This is what we need to do. And we have raised a good amount of capital.
And we said, let's put that capital to use and build something really for the customers.
The other side of what's happening in the chip world is the foundries. You're designing chips, but you need a foundry to etch the chips. And various companies are building foundries of various types in the United States.
But for the time being, TSMC and Taiwan really is the one place where you can get this done. How do you get capacity with them?
I keep joking that though we're in the technology business, it increasingly is a relationship business. And one of the side benefits of growing old is you get to know a lot of people that are all in key decision-making positions at all these companies. We are a very, very small company.
But we are fantastically happy with the partnership we have with TSMC. They've been an amazing partner to us from day one. And they've made us feel like a big public company.
And so I'm very lucky that our problems are all demand related and not supply related. We have worked very hard with them, and we have done a good job in managing the inventory needs and the market demand and really planning well. It's never an easy thing to do, but if you've done something for 30 years, you tend to become reasonably good at it.
And so I think the team is a very well-qualified team, but we won't be here without our partnership with TSMC. So we're very proud of what we've done together with them.
And what size are you at? And I would presume you are looking at the new foundries being built. Is your architecture something that could be produced at any foundry, or is there a particular kind of foundry that you would need?
Great set of questions. I think one of the key things we decided to do early on was we recognized that our software-centric approach gave us a lot of opportunity to get better performance and power. We decided to scale back in what you referred to as process technology.
And so we created our Gen 1 silicon product at 16 nanometer. This is at least 10-to-12-year-old technology. We did not want to take very advanced process technologies and take on a lot of risk.
There's also a cost equation and a development equation. As you get into more aggressive, newer nodes, the cost of the mask set to produce the chip and the development cost both go up quite a bit. So for Gen 1 we said, hey, we're getting amazing performance and power compared to anybody else.
And in real applications, we're 10x the very best companies. So we said, let's stay back in process technology in Gen 1. We did 16 nanometer.
And in Gen 2, which we have now announced publicly, we'll be at 6 nanometer. So we'll be getting a 2x improvement over what we have done in Gen 1 on performance per watt, which is a huge leap. Typically in our business, you get a 10 to 15 percent improvement from Gen 1 to Gen 2.
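As a back-of-the-envelope illustration with made-up absolute numbers (only the 2x and 10-15 percent ratios come from the conversation):

```python
# Hypothetical Gen 1 operating point; only the ratios below come from the text.
gen1_fps, gen1_watts = 100.0, 5.0
gen1_ppw = gen1_fps / gen1_watts    # 20 fps per watt

typical_gen2_ppw = gen1_ppw * 1.15  # a typical 10-15% generational gain
claimed_gen2_ppw = gen1_ppw * 2.0   # the 2x (100%) leap described here

print(f"Gen 1:                {gen1_ppw:.0f} fps/W")
print(f"Typical Gen 2 (+15%): {typical_gen2_ppw:.0f} fps/W")
print(f"Claimed Gen 2 (2x):   {claimed_gen2_ppw:.0f} fps/W")
```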
For us to get a 100 percent leap is pretty phenomenal. We now feel like we have the right opportunity to do it, and our customers will directly derive the benefits of it. And to answer your question, we are tracking very closely where we get our chips manufactured, and we now have the option of getting them manufactured in the US as well.
And at the appropriate time, we'll take full advantage of every opportunity that we have. And within TSMC, we definitely have the choice of being manufactured any place. One caveat: our business doesn't work such that you build a design and can then go to anybody.
So yes, at a design implementation level, you have that opportunity. But once you get into chip development, you're fundamentally locked into one entity. That's typically how companies in our business work, and we are too small to play with too many players.
And our volumes are too low to really play that game. If you redo a design for another foundry, it's economically not viable. And so we really want to work with the best in class, and TSMC is as good as it gets.
And they have done a phenomenal job, and they truly understand that this is a services business. And even today, I'm very impressed that as a startup, we get the same attention, if not perhaps more than a public company. And so all of those really are huge positives.
So we are deployed. I would still say it takes a long time in our business to really scale. Unlike the cloud business or the consumer business, where the design cycle is fast, the embedded space takes a long time.
But then the other positive is that designs stay in production for 15, 20 years. So it's a game of inertia. Getting in is pretty hard, and the opportunity to stay in production is also pretty significant.
So we have quite a few customers that are moving to production with us globally. And over the next year or two, we are excited about scaling more and more customers to get into production with us.
Yeah. And how many people are at the company right now? And how do you scale a company like this?
Because as you said, the demand is massive.
There are about 160 of us today. And with consultants and contractors, we are around 200. So I would say roughly, we are 200 people working at the company.
We are lean. We are incredibly lean for what we are doing. And it really is kudos to everybody that works here, in that it's a startup and we do whatever it takes to make things happen.
And we are in some ways doing what many companies use tens of thousands of people to get done. I'm a big, big admirer of everybody that we have here. I'm biased, but these are more than employees, they are family for us, and we do whatever it takes to go make things happen.
The macro thing that is a priority for us is scaling: scaling our go-to-market, and how we touch 40,000 customers through the go-to-market partners that we have, reps and distributors and VARs and ODMs and ISVs. And again, I go back to not only myself, but the fact that we have had a lot of experience scaling businesses in our careers before.
And so we retain the core of the company to be small. But through a partner network, we expand our footprint and where we can reach. The underlying thing that really enables us, Craig, is our software.
The more self-managed the software is, the more the customers can manage themselves. That's a large, large component of how we can enable the scaling. If every one of them needed hand-holding, we wouldn't have a company.
One day, I think, we are going to touch tens of thousands of customers and it will be a very exciting scaling opportunity for us.
Yeah. And the biggest market, I would assume right now is the United States, but I don't know. I mean, this is a global phenomenon.
Absolutely.
I think by footprint, North America would be the largest market for us, no doubts. But we are enjoying the benefit of our customer traction in Europe, Japan, Korea, and India too. And about 70 of our team members are based in Bangalore in India.
And we also are building out sales and go-to-market activities in India. I think India is going to be a massive growth market and, in general, an adopter of AI. And we feel like we are one of the leaders and pioneers in this category of edge market scaling.
And I am quite convinced that in the next 10 years, this is going to be a high priority for every nation, a high priority for every industry sector. And people that don't have an AI-enabled environment are going to be left behind. And it's now a matter of who wants to gain the leadership position.
There's going to be a fear of missing out that's going to go through the entire customer base.
Yeah. And you were talking about how, I'm going to avoid the word again, but how your chip is not specialized for one particular application. But it seems that at some point, if not in the design of the chip then in scaling your sales and marketing and all of that, you've got to focus on a few applications and let the others come to you.
Which are you guys focused on?
I would say robotics, industrial automation, and vision systems. If you really do a Venn diagram of those three, that'd be a key gravity area for us. And I should expand on that by saying, every day our struggle is specificity versus generality.
So we've got to really make this balance work. I've built products before that will work for 40,000 customers, where you build one thing and every one of them is doing something radically different from the other, but it all still ends up being mostly the performance, mostly the power, mostly the cost that they need. Not an easy thing to do.
But we have learned and built good businesses earlier in our careers, and we have used that experience here. And I would say, though I explained the gravity point of our highest volume, where I think we're going to drive, most people are going to find out that we are better than everybody else for their application as well, in all the other market segments we've already talked about.
Yeah, this wasn't, I wasn't going to ask this, but it just occurs to me on this question of scaling. You know, the big boys are all chasing this market as well. What's your future?
Do you think you guys will end up as part of a larger, you know, Google or Nvidia or somebody at that scale? Or is your expectation to stay independent and grow to that scale?
Really good question. And I would give you two or three things that I think I should mention. One is what I've learned now in five and a half years: all my career I've been in large public companies, and this is my very first startup.
What I've really come to recognize and truly internalize is that it's not the size of the dog, but the size of the fight in the dog that matters. I've been in very large companies and I always wondered, my God, we need more people to get things done. Here, there are 200 people and we are doing the things of 10,000-, 20,000-, 100,000-people companies.
When you really build yourself into a mindset of, this is our team and this is what we're going to go do, and this is our lifeblood, you operate very differently. That's one thing. The second thing I should say is, I have no idea how to predict the future.
What I do know is if you have results, you have options. We have really submitted ourselves to not worry about what outcome is ahead of us. We don't know.
What I do know is if I drive results, I have outcomes. And if I don't have results, I don't need to worry about outcomes. And so we single-mindedly put our head down to really focus on driving to be the best in the category that we are in.
And we cannot be everything to everybody. We really need to focus and do a good job of what we do. And our company definitely has been interesting to a lot of people.
I think that should be a fair thing for me to validate. And I think it would be fair to say that I think we are definitely noticed by everybody, by our customers, by our peers. So what our future really holds?
TBD. When I wake up every day and go, there's one thing to get done, and I drive back home saying, yeah, we got that done today. And I'm happy.
Yeah. And this is getting off subject, but I'm curious about the fundraising journey. How easy was it to get the initial capital?
And are you still raising funds?
I would say two things. I think for a long time, investments in chips had dried up. Chips were no longer really a high priority and a focus, just because of the investment amount needed and the return cycle.
It just didn't fit the VC mold. And so almost from 2005 till maybe even 2016, 2018, there were hardly any chip startups. It's now a gold rush, and there's an enormous amount of attention and spend put into it.
And I would say raising money is hard. Anybody that starts a company, if you walk in assuming that raising money is easy, I would urge you just really pragmatically internalize that it's hard to do. But I also simultaneously held the notion that the right kind of idea, the right kind of system and a setup always is capable of attracting good capital in any circumstance.
And we've been very lucky. We have raised around 270 million so far. And we have amazing investors that are all deep pedigree investors.
And it's public information who our key investors are: Fidelity, Dell Technologies, Amplify Partners, and recently we extended a round and added Maverick and also Point72. These are a few, and these are household names as it comes to deep tech, and also companies with deep pockets. So having money always helps.
Having great people around you that know everybody and can connect you helps. Having a great board helps. And you need all of that to really go pull this off.
And so it's my day job to be raising money every day. I'm thinking about raising money every day. We have done well so far.
We also manage the money well. And I think we have carved ourselves out to be an industry-leading company. And I do think that at some point we'll be raising some more capital to further our growth.
Yeah. The other difficult thing is hiring good people. Particularly when you have somebody like Nvidia vacuuming up people.
Has that been a challenge?
I would say it's been less of a challenge than I had expected, would be my answer. Again, hiring is really difficult.
Hiring good people is harder. Hiring amazing people is even harder. And in a startup, you better have amazing people.
And how do you get 200 people to do something that tens of thousands are doing, right? It's not just amazing people, it's amazing people that are good team players. So you do the AND function of all of this.
You come back to a very small universe of human beings that really fit all of that. And what we do is not just chips, it's software, it's AI, it's computer vision, it's automotive. Not an easy thing to do.
And so what's really helped us is network. I've spent 30 years. I've known a lot of good people that collectively believed in me and I believed in them.
And so there's a network, not just me, everybody that's around me has a network. And so the network's really helped a lot. The second thing is we're building something that's industry leading.
And I'm an engineer. You want to be part of a winning formula. You want to be part of a David-versus-Goliath fight. You want a story in your life where you can say, hey, we started with nothing and we built something that beat the very best. We have done that now, and that also attracts a lot of people, because people like success.
And they're like, wow, a 200-person company. And we have publicly validated that we are the best in class in the category that we are in. And people want to be a part of the winning formula.
So I would say it's been easier than I had expected, but every single hire is hard. Every single hire is something we pay a lot of attention to. And even now, though we are close to 200 people, I know everybody in the company and I kind of know their spouse and I know their children.
And we need that family environment to really attract more people. And then it's not just hiring, retention too is hard. And people in our trade are the most sought after personalities in software and hardware and AI.
And so it's not just hiring, but it's also retaining and creating an environment that's motivating, where they choose to stay and choose to really be part of a winning formula. So, I would say, you've covered fundraising, you've covered people, you've covered solving tough problems, you've covered customers and winning. That's what I wake up to every day, and it's my day job to make sure I play my part.
But luckily, I have amazing people around me that do this way better than me. I'm really glad to see us become a good company.
Where do you see the chips going? I mean, we're getting into specialized accelerators. There's now, I had somebody from Western University in Australia.
I think that's the name of the university. They're building a brain-scale neuromorphic computer using FPGAs. Is that right?
Yeah.
Field-programmable gate arrays. And where do you see this going? Are we going to end up with a highly fragmented chip market where, depending on your use case, you'll use one kind of a chip?
And then there are quantum chips. Do you think neuromorphic and quantum are going to eventually take a big share of the market away from today's mainstream chips?
Great question, Craig. And it's hard to predict the future, because you're probably going to be more wrong than right. With that caveat out there: what actually motivates architectural choices is very rarely silicon alone.
It's actually software. Part of the reason why neuromorphic and quantum haven't broken through, though these are not new concepts and have been around for a long time, is the ability to scale software to deliver the eventual performance and power that you need with ease of use. It always ends up being the case.
And case in point, neural networks are not new. AI is not new. It's been talked about since Turing's time, nearly a hundred years ago.
At the end of the day, it's math. It's all Newtonian calculus at the end of the day. And so you need to wait for the software maturity and the right silicon ability to deliver that.
That's really the form of where things are. My observation is that I think everything gets hyped a little too much too quick. And things take way longer than people estimate.
And what people underestimate is, how do you scale a proof point? And so a proof point is easy. How do you scale a proof point to becoming an industry standard?
And it always is 10 to 15 years longer than people estimate. So I have no doubts that there is a large portion of a future that's going to be a combination of some version of technologies we have today. No doubts neuromorphic has a play, no doubts analog has a play, no doubts quantum computing has to be a part of it.
And each one brings various different benefits. But they also have large deltas of things that need to be solved for scaling. A proof point doesn't make a product.
Yeah. And so that's really where the gap's at. So I definitely think that the world will have a plethora of choices ahead of it.
And to play this out, there are also biocomputers and DNA computing. So there's a lot, I think, still ahead of us. What I foresee for the next 10-plus years is the mainstream technology we know today still driving the majority of the deployments, maybe 95 to 99 percent of all the deployments.
But 10, 20, 30 years out, I think there will be an opportunity for newer elements to come together. So that's my jaded story, from 30 years of seeing things hyped, expecting and hoping, and watching them be delayed. But I think I'm a little more right on this than wrong.
From Eye On A.I.: #206 Krishna Rangasayee: Why Edge AI is the Next Big Thing in Tech, Sep 4, 2024
https://podcasts.apple.com/us/podcast/206-krishna-rangasayee-why-edge-ai-is-the-next-big/id1438378439?i=1000668396023
This material may be protected by copyright.