Just a reminder that we operate our newsletters on a paid model. Paid subscribers get three newsletters a month, including our China Deep Tech notes, as well as early access to the newsletter and other benefits coming soon. Please subscribe and support our work.
Highlights from our Blog
Nvidia’s GTC developer conference starts today. In preparation for that, we took a look at the defensibility of their software moat. Put simply, it looks very strong - the advantage of having a long head start, and the inertia inherent in large software systems. In fact, the only flaw in their defenses is the efforts of the hyperscalers. Everyone else is going to be swimming in CUDA for a while.
With everyone racing to add AI chips to edge devices, we think it is important to understand some of the costs involved. These chips are not only going to run local inference queries; they will also be used to collect data for fine-tuning models. This has important implications for privacy and battery life, and a host of other subjects in between.
Arm unveiled a suite of new products targeting automotive customers. These are interesting on their own, and likely to have an impact on auto chips. But more interesting for us is what it says about Arm’s evolving business model. This is the widest offering we can remember them dropping all at once, covering everything from Cortex libraries for microcontrollers to software for the cloud. It highlights the way in which the company is solving customer problems, for instance greatly speeding time to market, which should help them capture more value from the ecosystem.
We also highlighted two companies we have gotten to know better recently.
Napatech sells FPGA modules, but deep down, they are really a software company able to monetize their software through a hardware sale. Their products run in DPUs and other network gear. And while this category is not huge, they are a small company and a little bit of volume can drive a lot of earnings leverage.
Our phones and all our other devices are getting hotter, but the technology for cooling them has not advanced in 40+ years. Enter Ventiva, a company with a new way to move air through devices using no moving parts. It is easy to see this becoming something that everyone will need to use a lot more of.
If you like this content, you should check out our podcast, The Circuit.
Semis, Hardware and Deep Tech
TSMC opened up a plant in Japan and it is getting a lot of much-deserved media attention. In particular, the press is focusing on how quickly that plant got up and running while TSMC’s plant in Arizona seems bogged down in red tape and other delays. The comparison is not totally fair, since these are different fabs with different objectives, but now there is news that TSMC will also open its first advanced CoWoS packaging plant outside Taiwan in Japan.
We are deeply skeptical about Cerebras as a business. That being said, their technology can be awe-inspiring. It is amazing that they can create a single die the size of a wafer; we are just not sure why anyone would really want to do that.
It should come as no surprise that OpenAI is looking to design its own chips, nor should it come as a surprise that it will cost less than $7 trillion.
We are not the only ones fascinated by growing power and heat problems with modern chips.
Who uses Google’s TPUs outside Google? Not many. The general consensus is that the software and user interface still have a long way to go. One more example of Google not being great at products.
Networking and Wireless
Academics in Europe provide a detailed look at Starlink performance. This is pretty similar to the analysis we saw from China’s defense complex a few months ago, but it highlights Starlink’s ability to connect mobile phones.
Software and the Cloud
Are we reaching the limits of Broadcom’s private equity playbook? There seems to be a growing chorus of customer frustration with what the company is doing to recently acquired VMware. It is tempting to see this as a failure, but instead we view this as Broadcom intentionally shedding smaller customers to focus on the largest, most locked-in customers and harvest them for profit. Despite all the customer and partner complaints, Broadcom will probably still make a lot of money with VMware, but it really does open up the bigger question of how much longer this model can run.
A look at one developer’s journey learning to program for GPUs, after years of developing for CPUs. Articles like this demonstrate why the transition from one silicon architecture to another can take a decade.
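For anyone who has not made that jump themselves, even a toy example hints at why the shift is so disorienting: a simple loop becomes a kernel launched across a grid of thread blocks, and the programmer suddenly has to think about where memory lives. The sketch below is our own illustrative example, not taken from the article, adding two arrays on the CPU and then with CUDA, using unified memory to keep it short.

```
#include <cstdio>
#include <cuda_runtime.h>

// What a CPU developer is used to: one loop, in order, over the whole array.
void add_cpu(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];
}

// The GPU version of the same operation: each thread handles one element,
// and the caller decides how many blocks and threads to launch.
__global__ void add_gpu(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Unified memory keeps the example short; real code usually manages
    // separate host and device buffers and copies between them explicitly.
    float *a, *b, *gpu_out;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&gpu_out, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    add_gpu<<<blocks, threads>>>(a, b, gpu_out, n);
    cudaDeviceSynchronize();

    // Sanity check against the plain CPU loop.
    float* cpu_out = new float[n];
    add_cpu(a, b, cpu_out, n);
    printf("gpu_out[0] = %f, cpu_out[0] = %f\n", gpu_out[0], cpu_out[0]);

    delete[] cpu_out;
    cudaFree(a); cudaFree(b); cudaFree(gpu_out);
    return 0;
}
```

Multiply that small mental shift by an entire codebase full of memory hierarchies, synchronization, and unfamiliar profiling tools, and a decade-long transition starts to look plausible.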
Google has fallen under immense scrutiny lately. The company that invented the Transformer model seems to be struggling to know what to do with it. And let’s not forget that their new headquarters blocks Wi-Fi signals. For those who have followed Google for years, it is fairly clear that the problem runs deep. As one senior executive there once told us, Google is a one-trick pony, but it just happens to be one of the best tricks in the world. At some point, they really need to consider shutting down everything except Search and YouTube.
While we are on the subject of corporate dysfunction, AWS recently lowered the cost of moving data off their servers. As we have noted in the past, these costs can be massive and are a major source of frustration for customers. So now Amazon is just giving it away. On the one hand, it is good to see them responding to customer feedback; on the other, this seems highly reactive rather than strategic. We are convinced that no one at Amazon really knows all the services available at AWS, let alone is able to think about them all holistically.
One way to overcome the CUDA barrier to entry is to just copy that software onto someone else’s chip. Nvidia is trying to put a stop to that.
Diversions
We are old enough to remember the days of pirate radio. Apparently, this was a great business for some, until the Internet came and ruined it.
Image by Microsoft Copilot
Thank you for reading D2D Newsletter.