Google's Open Source Hardware Dreams
Google has been doing a lot in the open hardware space, and it has caught my eye.
I want to thank the Google team for taking the time to speak with me. I really appreciate it. I am a big fan of what they are trying to do and hope we can see more involvement from Google in the space down the line.
Those were very fun conversations, and they gave me the chance to learn a bit about Google's thinking behind these open source moves. In this short video, I want to talk about these initiatives.
And a note before we start: Google didn't sponsor this video, but they did give me a few helpful links and resources, and showed me some cool software. I appreciate their time.
The Big Announcements
Probably the biggest happenings in this space over the past year were Google's announcements that they had partnered with Skywater, Efabless, and now GlobalFoundries to make it possible for people to actually fab their custom silicon designs.
I did a video about Skywater before. They are one of the few pure-play foundries left in the United States. Google partnered with them to open-source the PDKs for their 130 nanometer process node.
A PDK, or Process Design Kit, is a set of design rules and physical limitations packaged with simulators, third-party pre-designed IP libraries, design rule checkers, and other design tools. This is the information fabless designers need from their manufacturer to design hardware.
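To make the "design rules" part of that concrete, here is a toy sketch of a design-rule checker (DRC) for wire widths and spacings on a single routing track. The rule names and values are invented for illustration; real PDK rule decks cover many layers and far more constraints than this.

```python
# Toy sketch of the "design rules" half of a PDK: a minimal DRC pass.
# Rule values below are invented for illustration, not from any real PDK.

RULES = {"min_width_nm": 130, "min_spacing_nm": 170}

def drc_violations(wires, rules=RULES):
    """wires: list of (x_start_nm, x_end_nm) segments on one routing track.
    Returns human-readable violations of the width and spacing rules."""
    violations = []
    wires = sorted(wires)
    for i, (x0, x1) in enumerate(wires):
        # Width rule: each wire must be at least min_width_nm wide.
        if x1 - x0 < rules["min_width_nm"]:
            violations.append(
                f"wire {i}: width {x1 - x0} < {rules['min_width_nm']}")
        # Spacing rule: adjacent wires must be at least min_spacing_nm apart.
        if i > 0 and x0 - wires[i - 1][1] < rules["min_spacing_nm"]:
            violations.append(
                f"wires {i - 1}/{i}: spacing {x0 - wires[i - 1][1]} "
                f"< {rules['min_spacing_nm']}")
    return violations

if __name__ == "__main__":
    # Second wire is too narrow; third is too close to the second.
    for v in drc_violations([(0, 200), (400, 500), (600, 900)]):
        print(v)
```

A real DRC tool like Magic or KLayout reads these rules from the PDK and checks full two-dimensional layouts, but the principle is the same: geometry in, rule violations out.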
They are also working on open-sourcing Skywater's 90 nanometer Fully Depleted Silicon-on-Insulator (FD-SOI) process node, a specialty process that adds a buried insulator layer to keep electrons from leaking underneath the gate.
Now this latest announcement, from August 2022, open-sources the PDK for GlobalFoundries' 180 nanometer process node and makes it possible for people to apply to fab their open source designs on it.
It is a pretty big deal since GlobalFoundries is quite a significant foundry. To me, it says that Google's efforts are making headway in the larger semiconductor manufacturing industry.
Obviously, Google is not doing this out of the goodness of their hearts. What is the business strategy behind what they are doing here?
Right now, there exists no significant open-source silicon ecosystem that designers and customers can use to produce data center and consumer applications.
This means that if Google wants to design and produce hardware for internal or external use - like the famed Google TPU, for instance - most of that design has to be done with proprietary IP.
This in part has contributed to hardware design getting exponentially more expensive. So Google wants to build a community of world-class, open source hardware design projects in an attempt to bend that cost curve.
And at Google's scale, with billions of chips in play, every open source project that gets good enough to replace a proprietary or internally developed component has a significant financial or performance payback.
So yes, Google benefits from it, but so will everyone else.
Ease of Use
Alright then, so what is Google's grand strategic plan for building this ecosystem?
In my previous video about open source hardware design, I mentioned the goal of improving "ultimate ease of use" rather than achieving "ultimate performance".
There is no intention to produce the next A-series Apple chip or Nvidia H100 super-GPU. You can't do that with a 130nm process fab anyway.
Instead these early stages are more about making it easier for ordinary folks to download and use tools to make and share their own chip designs. Perhaps even fabbing that design.
And critically, doing this without having to buy or sign NDAs for expensive Electronic Design Automation or EDA software.
So the team's execution steps have been to:
First, open up and unify the various software packages and toolchains for designing hardware.
This includes the aforementioned open-sourcing of PDKs from semiconductor foundries: GlobalFoundries, Skywater, and so on.
On the tooling side, Google is backing projects like OpenLane, an automated flow that bundles together tools like Yosys, Magic, and OpenROAD to take you right from a design abstraction like RTL to a foundry-ready GDSII file.
And second, creating on-ramps and carrots to get talent to start using them. This includes tutorials and cloneable projects that are easy to start with.
As well as a free shuttle fab program so that people can actually get real chips from their designs.
Alright, the first thing you might be wondering, as I did: what can you do with a 130 nanometer process node? The process debuted in 2001 and 2002, which makes it about two decades old.
The 130 nanometer process produced the IBM PowerPC 970, a 118 square millimeter chip with 58 million transistors.
These chips in turn powered the Power Mac G5, introduced by Steve Jobs in June 2003. It had clock speeds from 1.6 to 2.0 gigahertz. Sounds pretty cutting edge to me! What more can you want?
Just kidding. You are not going to get performance on par with today's desktop-class computers. But then again, that was never the intention nor the goal.
But 130 nanometers makes a lot of sense for IoT projects, sensor systems, and microcontrollers. You can make certain things that are as good as anything on the market today, especially if that thing needs to be low power.
Side Bar: Awarding an Asianometry Deer Award
I want to pause here to give an Asianometry Deer Award to my favorite published 130 nanometer project. This one comes from a team at the University of Michigan.
People know that I am a sucker for neural network hardware. I have done a lot of videos about the memory and power consumption challenges that neural networks present to existing von Neumann architectures.
So this team proposed an ultra-low power, compute-in-memory neural network accelerator made with analog principles. The concept is similar to the approach of the silicon photonic-based neural network accelerator I profiled in an older video.
Anyway, this accelerator concept stores each of the model's weights as a threshold voltage within an on-chip non-volatile memory cell.
Then, to perform the linear multiplications needed for neural network inference, an access transistor receives a neural network input in the form of a voltage. The voltage passes through the cell, and the current that comes out the other end maps to the product of the input and the weight.
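As a back-of-the-envelope model of that idea: if each stored weight is read out as a cell conductance, then Ohm's law (I = G × V) makes every cell an analog multiplier, and summing the currents on a shared bitline gives a dot product for free. Here is a minimal Python sketch of that behavior; the names and values are invented for illustration and are not from the Michigan paper.

```python
# Toy model of an analog compute-in-memory multiply-accumulate.
# Each weight is modeled as a cell conductance G (set in hardware by a
# programmed threshold voltage). Applying an input voltage V yields a
# current I = G * V, and currents summed on a shared bitline give the
# dot product of the weight vector and the input vector.

def cell_current(conductance, input_voltage):
    """Current through one memory cell: I = G * V (Ohm's law)."""
    return conductance * input_voltage

def bitline_dot_product(weights, inputs):
    """Summing per-cell currents on one bitline models weights . inputs."""
    assert len(weights) == len(inputs)
    return sum(cell_current(g, v) for g, v in zip(weights, inputs))

if __name__ == "__main__":
    weights = [0.5, -1.0, 2.0]   # stored as cell conductances
    inputs = [1.0, 0.5, 0.25]    # applied as input voltages
    print(bitline_dot_product(weights, inputs))  # 0.5 - 0.5 + 0.5 = 0.5
```

The real chip of course has to deal with analog non-idealities like noise, device variation, and limited precision, which this idealized sketch ignores.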
Nifty, right?! Well, the mad lads actually built the thing using a 130 nanometer SONOS process. So yes, this is what you are capable of doing with a 130 nanometer process! Okay, sorry, let me get back on track.
Right now, many of these projects are still toys or academic concepts. But is that not how several major open source software projects started too?
I remember speaking with Matthew Venn - who runs the Zero to ASIC course - about this and he cited the example of Linux, the open source operating system, and GCC, the open source compiler.
For a long time, Linux and GCC were also just toys that nobody really paid attention to. Then suddenly people came to the collective realization that these toys had gotten good enough to be used for real-world, business applications.
When would such a thing happen in hardware? Who knows. I don't like to make predictions.
But in a recent SemiAnalysis post, Dylan and his team mentioned that Apple is working to convert at least a dozen non-customer-facing cores from ARM-based designs to RISC-V. So I thought that was interesting.
Okay, one last thing to add. I think the take-up on the academia side for open source hardware design has been really impressive. There is some real traction here. Professors and their teams want to be able to share their code so that anyone anywhere can replicate their work. That’s not possible if those teams have to sign NDAs for closed source EDA software and PDKs.
For example, a team in Brazil presented a hardware accelerator design for encoding in the open and royalty-free AV1 video coding format. The whole work relied on open-source tools like OpenLane and the Skywater 130 nm PDK.
Getting Into It
So if you have now gotten interested in hacking around with this sort of stuff, how would you get started? I asked Proppy this exact question and he pointed me to the Notebooks.
The team maintains a number of IPython/Jupyter notebooks, which are neat, self-contained files that let you execute arbitrary lines of Python code.
Google has a service called Colaboratory that lets you run these notebooks right in your browser without having to install anything.
Once you get the beginner’s project running in the browser, you can start learning by changing things and seeing how they affect the final thing. That is generally how I have hacked around whenever I tried to learn new libraries and languages.
It teaches you the lingo and gets you familiar enough for the more advanced projects available on the "Build Custom Silicon with Google" website.
Try some stuff! If your custom silicon gets accepted by the shuttle program, shoot me an email and tell me about your experience. I would love to hear from you.