“Machine learning is deep, wide, and short!”
At Episode 1, one of our key investment areas is DEEP TECH. I’m often asked what exactly that means. My typical answer is that deep tech covers everything built on genuine scientific or engineering advances. A brilliant example is COMPUTING ARCHITECTURE.
For the last decade, the semiconductor segment was anything but sexy in the eyes of VCs. ‘Software is king!’ was the mainstream view. No one really wanted to pour money into the old chip business: exit multiples lagged behind those of the easily scalable tech platforms and other software solutions. Then came the ‘Era of AI’, which revived innovation in the world of computing power.
In my background research, I have identified 44 visible startups in the space, which have recently raised a combined USD 2+ billion (note: funding was not disclosed in every case). Imagine how these numbers will rocket once the still-invisible teams and solutions reveal themselves to the public. And while incumbents (e.g. Intel, Nvidia) had better watch out, they can also turn the AI chip sandbox into a gold mine by acquiring the most promising innovators.
In 2018, the now lucrative chip business is a USD 450+ billion MARKET in terms of revenues, and there is no sign that this heated growth will stop in the upcoming years. Demand originating in automation and a new, sensor-dominated era fuels the steady increase, so investors are becoming more receptive to the products of genius minds in this space. And while most VCs are hardware-averse, it is important to understand that there is much more to computing architecture technology than the hard components.
The CHALLENGE can be well defined in a short paragraph:
“In AI systems, computers need to pull huge amounts of data in parallel from various locations, then process it quickly. This style of processing is known as graph computing, which focuses on nodes and networks rather than sequential if-then instructions.”
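To make the quote concrete, here is a minimal, framework-free sketch (a toy illustration, not any vendor’s actual API) of the graph-computing idea: work is described as nodes with dependencies, and at each step every node whose inputs are ready can fire at once, instead of one if-then instruction at a time.

```python
# Toy "graph computing" sketch: each node lists its dependencies and a function.
# Nodes with satisfied dependencies form one parallel "wave" of work.

graph = {
    "a":    ([], lambda: 2),                      # input node
    "b":    ([], lambda: 3),                      # input node
    "sum":  (["a", "b"], lambda x, y: x + y),
    "prod": (["a", "b"], lambda x, y: x * y),     # independent of "sum" -> same wave
    "out":  (["sum", "prod"], lambda s, p: s + p),
}

def evaluate(graph):
    values = {}
    while len(values) < len(graph):
        # every node whose inputs are already computed is ready to fire
        wave = [n for n, (deps, _) in graph.items()
                if n not in values and all(d in values for d in deps)]
        for n in wave:
            deps, fn = graph[n]
            values[n] = fn(*(values[d] for d in deps))
    return values

print(evaluate(graph)["out"])  # (2+3) + (2*3) = 11
```

A real accelerator executes such waves on thousands of parallel units; the point here is only the shape of the problem.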
What are the TASKS that require the above capabilities from AI accelerators?
[Figure (source: Nvidia)]
What is key to understand is where the INNOVATION is really happening. The Training part is already tackled by dominant players: Nvidia leads the space with its Volta GPU, while Google brought competition into the industry by developing its own TPU (Tensor Processing Unit), a type of ASIC.
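For readers less familiar with the training/inference split the text builds on, here is a hedged, framework-free sketch (toy numbers, no real GPU/TPU code) of the two workloads: training is many passes over data with gradient updates, the compute-heavy part dominated by GPUs and TPUs; inference is a single cheap forward pass with frozen weights, the part that can run in the cloud or on edge accelerators.

```python
# Learn y = 2x + 1 with plain gradient descent, then run inference.

data = [(x, 2.0 * x + 1.0) for x in range(10)]
w, b = 0.0, 0.0
lr = 0.01

# --- TRAINING: repeated gradient computation and weight updates ---
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

# --- INFERENCE: just a forward pass, no gradients, no updates ---
def predict(x):
    return w * x + b

print(round(predict(5.0), 2))  # close to the true value 2*5 + 1 = 11
```

The asymmetry is the investment point: training happens rarely, on big iron; inference runs constantly, everywhere a model is deployed.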
Investors had better keep an eye on the Inference bit, as the more exciting things are taking place there. Inference can run in the CLOUD or on the EDGE.
And we haven’t even mentioned the layers that sit on top of the semiconductors, such as software libraries, compilers and frameworks. So let’s pause for a second…
At the dawn of the 21st century there are various types of chips for different purposes, so here is a brief overview of the spectrum.
[Figure: the spectrum of chip types (source: VentureBeat)]
The hype is concentrated around the right end of the spectrum, as FPGAs and ASICs better meet AI- and IoT-related expectations. Edge computing, which means localised processing instead of transmitting big chunks of data to the cloud with all its latency and privacy issues, has become prevalent for various purposes (e.g. autonomous cars, IoT sensors), and it relies on these programmable, application-specific chips. Commoditisation is overshadowed by specialisation.
Therefore, from an investment point of view, software offerings that bridge the gap between the various back-end (e.g. CPUs, GPUs, FPGAs) and front-end (e.g. TensorFlow, PyTorch) platforms are especially appealing. Compiler solutions make sure that AI-driven applications can be executed on the underlying hardware components. At the moment it is quite challenging for developers to find their way around the different artificial intelligence libraries; thus, anything that enables searchability and interoperability might be a sweet spot.
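The bridging idea can be sketched in a few lines. This is a toy illustration only (no real compiler stack such as a TensorFlow or PyTorch backend is implied, and all names here are made up): one framework-agnostic model description gets lowered onto whichever backend’s “kernels” are available.

```python
# A framework-agnostic model description (imagine it exported from any front-end)
model_ir = [("scale", 2.0), ("shift", 1.0), ("relu", None)]

# Per-backend kernel libraries; a real compiler would register CPU/GPU/FPGA paths here
KERNELS = {
    "cpu": {
        "scale": lambda v, k: v * k,
        "shift": lambda v, k: v + k,
        "relu":  lambda v, _: max(v, 0.0),
    },
    # an "fpga" backend would plug in its own implementations under the same op names
}

def compile_for(backend, ir):
    kernels = KERNELS[backend]
    ops = [(kernels[name], arg) for name, arg in ir]  # resolve ops up front

    def run(x):
        for fn, arg in ops:
            x = fn(x, arg)
        return x

    return run

run_cpu = compile_for("cpu", model_ir)
print(run_cpu(-3.0))  # scale: -6.0, shift: -5.0, relu: 0.0
```

The commercial appeal is exactly this decoupling: the model description stays the same while the backend, and the silicon underneath it, can change.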
To be continued…
By Eva Rez