Intel, Marvell, Qualcomm Pledge Support for Glow AI Compiler
Earlier this year, Facebook introduced Glow, a new open source machine learning compiler intended for heterogeneous systems. The goal is to deliver better performance and improved energy efficiency by generating more efficient code for whatever hardware the model runs on. Here’s how the team behind Glow described the project in its initial whitepaper:
In the Glow project, we focus on the lower parts of the software stack. We work to provide PyTorch and other frameworks with a low-level graph and a code generator for neural networks. The name Glow is an abbreviation for Graph-Lowering, which is the main technique that the compiler uses for generating efficient code. The Glow low-level graph will not replace the machine learning high-level graph, in the same way that the low-level intermediate representation in compilers does not replace the abstract syntax tree. We aim to provide a useful compiler toolkit that will allow hardware developers to focus on implementing efficient acceleration hardware, each of which likely differ in capabilities, and use Glow for automating compilation tasks such as instruction selection, memory allocation and graph scheduling. The full compiler toolkit is open-source and publicly available.
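To make the graph-lowering idea in that description concrete, here is a minimal, hypothetical Python sketch (not Glow’s actual C++ API): a high-level FullyConnected operator is decomposed into low-level linear-algebra nodes that a hardware backend could then schedule and map to instructions. The Node class and lower function are names invented for this illustration.

```python
# Illustrative sketch of graph lowering (not Glow's actual API).
# A high-level FullyConnected node is decomposed into low-level
# linear-algebra nodes that a hardware backend can map to instructions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    op: str                      # e.g. "FullyConnected", "MatMul", "Add"
    inputs: List["Node"] = field(default_factory=list)


def lower(node: Node) -> Node:
    """Recursively lower high-level nodes into low-level primitives."""
    node.inputs = [lower(i) for i in node.inputs]
    if node.op == "FullyConnected":
        x, w, b = node.inputs
        matmul = Node("MatMul", [x, w])      # y = x @ w
        return Node("Add", [matmul, b])      # y = y + b (broadcast)
    return node                              # already low-level


# A tiny "high-level graph": FullyConnected(input, weights, bias)
graph = Node("FullyConnected", [Node("Input"), Node("Weights"), Node("Bias")])
lowered = lower(graph)
print(lowered.op)            # "Add" -- the lowered, backend-friendly form
print(lowered.inputs[0].op)  # "MatMul"
```

Once the graph is expressed in these low-level primitives, tasks like instruction selection, memory allocation, and scheduling can be handled per backend without each vendor reimplementing the high-level operators.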
What Facebook is announcing now is a new suite of hardware partners that have pledged to support Glow in their own products. Cadence, Esperanto, Intel, Marvell, and Qualcomm have all committed to supporting Glow in silicon in future projects. The software isn’t designed to generate code for just a single architecture; Facebook intends Glow to target a range of specialized machine learning accelerators from multiple vendors, each with its own capabilities. This support for hardware accelerators isn’t limited to a single type of operation, either. FB’s press release notes that the hardware-independent parts of the compiler focus on math optimizations that aren’t tied to any specific model. Glow also ships with a linear algebra optimizer, a CPU-based reference implementation (for testing hardware accuracy), and various test suites. The goal is to reduce the amount of time it takes hardware manufacturers to bring new devices to market.
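As a rough illustration of how a CPU-based reference implementation can be used to test hardware accuracy, the hypothetical Python sketch below runs the same matrix multiplication through a trusted reference path and a stand-in "accelerator" path, then compares the results within a tolerance. The functions reference_matmul and accelerator_matmul are invented for this example and are not Glow APIs.

```python
# Hypothetical sketch of accuracy testing against a CPU reference path.
# reference_matmul stands in for a trusted CPU implementation; in a real
# bring-up, accelerator_matmul would dispatch to the device under test.

import numpy as np


def reference_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Trusted CPU reference result."""
    return a @ b


def accelerator_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Placeholder for the accelerator path (here it just reuses NumPy)."""
    return a @ b


rng = np.random.default_rng(0)
a = rng.standard_normal((64, 128)).astype(np.float32)
b = rng.standard_normal((128, 32)).astype(np.float32)

expected = reference_matmul(a, b)
actual = accelerator_matmul(a, b)

# Accept small numerical differences between the two paths.
assert np.allclose(actual, expected, rtol=1e-3, atol=1e-5), "accuracy check failed"
print("accelerator output matches CPU reference within tolerance")
```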
FB is putting serious effort behind Glow. The company launched version 1.0 of its PyTorch deep learning framework earlier this year, along with new object detection models, libraries for language translation, and Tensor Comprehensions for automatically synthesizing machine learning kernels. There’s been a tremendous push in recent years to build common frameworks for AI and ML that run on a wide range of hardware, and Glow is intended to be part of that effort.
Two companies are notably absent from this list: AMD and Nvidia. Both take a keen interest in AI/ML, AMD as a newcomer to the industry that wants to make its mark with a 7nm Vega data center product later this year, and Nvidia as the current leader in the space. AMD has participated in Facebook’s Open Compute Project before, so it’s possible we’ll see some activity on this front at a later date.