Intel took the wraps off its 8th Generation CPU core with AMD Radeon integrated graphics today. It’s a historic event for more than one reason. First, the new CPUs are likely to set a new high-water mark for integrated graphics performance. Second, the new chips represent the first time AMD and Intel have ever collaborated on this type of initiative. It’s a testament to how much the market has changed that this happened in the first place — even 10 years ago, the idea of an AMD-Intel alliance would’ve been unthinkable.
Intel is launching a suite of five new CPUs to introduce its new combined GPU+CPU core. All of these chips are Kaby Lake-derived, with four cores and eight threads.
There are some really interesting takeaways in this chart. First, check out the ROP counts on the two Vega variants on display here. The i7-8809G and i7-8709G pack 1,536 GPU cores, clock speeds just a bit under 1.2GHz, and 64 ROPs. That’s a huge number of ROPs for an integrated core — significantly more, in fact, than AMD uses in its own Ryzen Mobile 2700U. As expected, overall memory bandwidth is above 200GB/s, which puts the chip in Polaris’ weight class in that regard.
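That 200GB/s-plus figure is easy to sanity-check. A single HBM2 stack has a 1,024-bit interface, so at a per-pin transfer rate of 1.6Gbps (an assumed figure here — the article doesn’t state the exact pin speed) the math works out as follows:

```python
# Back-of-the-envelope HBM2 bandwidth check.
# The 1.6 Gb/s per-pin rate is an assumption, not a spec quoted in the article.
bus_width_bits = 1024   # interface width of a single HBM2 stack
pin_rate_gbps = 1.6     # assumed transfer rate per pin (Gb/s)

bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8  # bits -> bytes
print(f"{bandwidth_gbs:.1f} GB/s")  # 204.8 GB/s
```

A wide-but-slow bus is the whole point of HBM2: Polaris-class bandwidth from a single stack, at far lower clock speeds than GDDR5.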
The CPUs themselves are no slouch. The 3.1GHz base clock is relatively low, but boosting up to 4.1GHz should keep overall performance strong, and most games don’t scale particularly well with higher CPU clocks anyway.
The GPU is attached to the CPU by an x8 PCIe 3.0 connection. Unlike the interposer-based packaging that has made working with HBM and HBM2 difficult, Intel’s solution uses its own EMIB mounting technology, which doesn’t require an interposer layer. The 4GB HBM2 stack (common to all of these chips) cuts memory power consumption by 80 percent compared with GDDR5.
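For context on that x8 link, PCIe 3.0 runs at 8GT/s per lane with 128b/130b encoding, so the usable throughput of the CPU-to-GPU connection can be estimated like this:

```python
# Rough throughput estimate for the x8 PCIe 3.0 CPU-to-GPU link.
lanes = 8
raw_rate_gts = 8.0      # PCIe 3.0 raw rate per lane (GT/s)
encoding = 128 / 130    # 128b/130b line-encoding efficiency

bandwidth_gbs = lanes * raw_rate_gts * encoding / 8  # bits -> bytes
print(f"{bandwidth_gbs:.2f} GB/s")  # 7.88 GB/s
```

That’s an order of magnitude less than the HBM2 stack’s bandwidth, which is why the 4GB of local memory sits next to the GPU rather than across the link.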
While they aren’t based on Coffee Lake, these new chips are still a big step forward for Intel. Integrating EMIB into consumer products and building a 24 CU GPU with 1,536 cores in total are tasks Intel hasn’t taken on before. And these chips now support nine displays thanks to the combined Radeon+Intel graphics hardware. Intel is leaving its own graphics core enabled for consumers who want to use QuickSync or need to build a giant wall of monitors. As for performance, Intel’s numbers look extremely strong.
Don’t just look at the bars — check what Intel is comparing against. The GTX 1060 Max-Q is a potent mobile GPU, but Intel claims the Vega GPU onboard its own NUC systems and in rigs coming from companies like HP and Dell can beat it.
There are two flavors of Vega onboard these Intel chips. The GH (Graphics High) version beats out the GTX 1060 Max-Q, according to Intel. The GL (Graphics Low) version, meanwhile, is capable of taking on the GTX 1050.
These chips are a watershed moment, not just for Intel or AMD, but for the entire concept of integrated graphics. Ever since GPUs went on-die, we’ve seen two schools of thought emerge. One of them (generally favored by Intel) is that integrated graphics should still improve over time, but consumers won’t particularly favor it. Most people want inexpensive, acceptable graphics and don’t care if they get anything else. People who want higher-end solutions will buy them in the form of discrete GPUs.
The other argument, generally favored by AMD, is that putting higher-end graphics solutions into CPUs and focusing on improving performance over time will eventually make these chips appealing to segments of the market that would currently favor a discrete GPU. We haven’t really gotten to see this theory tested, however — AMD’s Bulldozer-derived APUs were so weak on the CPU front, they killed any chance of a reasonable market test.
If these chips prove popular with enthusiasts looking for a midrange, low-cost, fire-and-forget solution, it’ll say a great deal about what the market for these higher-TDP parts looks like and whether HBM2 can be a viable solution in this space. Expect Intel and AMD to keep a very close eye on how this pans out.