Apple has revealed most of the major details for its new M2 processor. The reveal was full of the usual Apple hyperbole, including comparisons with PC hardware that failed to disclose exactly what was being tested. Still, the M1 has been a good chip, especially for MacBook laptops, and the M2 looks to improve on the design and take it to the next level.

Except Apple has to play by the same rules as all the other chip designers. The M1 was the first 5nm-class processor to hit the market back in 2020. Two years later, TSMC's next-generation 3nm technology isn't quite ready, so Apple has to make do with an optimized N5P node, a "second-generation 5nm process." That means transistor density hasn't really changed much, which in turn means Apple has to use larger chips to get more transistors and performance. The M1 had 16 billion transistors, and the M2 will bump that up to 20 billion.

Overall, Apple claims CPU performance will be up to 18% faster than its previous M1 chip, and the GPU will be 35% faster - note that Apple's not including the M1 Pro, M1 Max, or M1 Ultra in this discussion.

Still, I'm more interested in the GPU capabilities, and frankly, they're underwhelming. Yes, the M2 will have fast graphics for an integrated solution, but what exactly does that mean, and how does it compare with the best graphics cards? Without hardware in hand for testing, we can't say exactly how it will perform, but we do have some reasonable comparisons that we can make.

Let's start with the raw performance figures. Not all teraflops are created equal, as architectural design decisions certainly come into play, but we can still get some reasonable estimates by looking at what we do know.

As an example, Nvidia has a theoretical 9.0 teraflops of single-precision performance on its RTX 3050 GPU, while AMD's RX 6600 has a theoretical 8.9 teraflops. On paper, the two GPUs appear relatively equal, and they even have similar memory bandwidth - 224 GB/s for both cards, courtesy of a 128-bit memory interface with 14 Gbps GDDR6. In our GPU benchmarks hierarchy, however, the RX 6600 is 30% faster at 1080p and 22% faster at 1440p. (Note that the RTX 3050 is about 15% faster in our ray tracing test suite.)

Architecturally, Apple's GPUs look similar to AMD's in terms of real-world performance based on teraflops. The M1, for example, was rated at a theoretical 2.6 teraflops and had 68 GB/s of bandwidth. That's about half the teraflops and one third the bandwidth of AMD's RX 5500 XT, and in graphics benchmarks the M1 typically runs about half as fast. We don't anticipate any massive architectural updates to the M2 GPU, so it should be relatively similar to AMD's RDNA 2 GPUs. Neither AMD nor Apple has Nvidia's dual FP32 pipelines (with one also handling INT32 calculations), and AMD has Infinity Cache that should at least be similar in practice to Apple's "larger L2 cache" claims. That means we can focus on the teraflops and bandwidth and get at least a ballpark estimate of performance (give or take 15%).

The M2 GPU is rated at just 3.6 teraflops. That's less than half as fast as the RX 6600 and RTX 3050, and it also lands below AMD's much maligned RX 6500 XT (5.8 teraflops and 144 GB/s of bandwidth). It's not the end of the world for gaming, but we don't expect the M2 GPU to power through 1080p at maxed-out settings and 60 fps.

Granted, Apple is doing integrated graphics, and 3.6 teraflops is pretty decent as far as integrated solutions go. The closest comparison would be AMD's Ryzen 7 6800U with RDNA 2 graphics. That processor has 12 compute units (CUs) and clocks at up to 2.2 GHz, giving it 3.4 teraflops. It also uses shared DDR5 memory on a dual-channel 128-bit bus, so LPDDR5-6400 like that in the Asus Zenbook S 13 OLED will provide 102.4 GB/s of bandwidth.

AMD Rembrandt Die Shot (Image credit: AMD)

And that's basically the level of performance we expect from Apple's M2 GPU, again, give or take. It's much faster than Intel's existing integrated graphics solutions, and it totally blows away the 8th Gen Intel Core GPUs used in the last Intel-based MacBooks. But it's not going to be an awesome gaming solution.

One other interesting item of note is that Apple makes no mention of AV1 encode/decode support. AV1 is backed by some major companies, including Amazon, Google, Intel, Microsoft, and Netflix. So far, Intel is the only PC graphics company with AV1 encoding support, while AMD and Nvidia support AV1 decoding on their latest RDNA 2 (except Navi 24) and Ampere GPUs.

Apple also detailed its upcoming MetalFX Upscaling algorithm, which makes perfect sense to include. Apple uses high-resolution Retina displays on all of its products, and there's no way a 3.6 teraflops GPU with 100 GB/s of bandwidth will be able to handle native 2560 x 1664 gaming without some help.
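The theoretical teraflops and bandwidth numbers cited here all come from the same simple spec-sheet arithmetic: shader count times two FLOPs per clock (one fused multiply-add) times clock speed, and bus width times per-pin data rate for bandwidth. A minimal sketch, using the publicly listed Ryzen 7 6800U and RX 6600/RTX 3050 figures (the helper functions themselves are just illustrative):

```python
def teraflops(shaders: int, clock_ghz: float) -> float:
    """Theoretical FP32 rate: each shader does one FMA (2 FLOPs) per clock."""
    return shaders * 2 * clock_ghz / 1000  # GFLOPS -> TFLOPS

def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth: bus width in bits * per-pin data rate / 8 bits per byte."""
    return bus_bits * gbps_per_pin / 8

# Ryzen 7 6800U iGPU: 12 CUs * 64 shaders per CU, up to 2.2 GHz
print(round(teraflops(12 * 64, 2.2), 1))  # ~3.4 TFLOPS

# LPDDR5-6400 on a dual-channel 128-bit bus
print(bandwidth_gbs(128, 6.4))            # 102.4 GB/s

# RX 6600 / RTX 3050: 128-bit bus with 14 Gbps GDDR6
print(bandwidth_gbs(128, 14))             # 224.0 GB/s
```

These are peak theoretical figures; as the RX 6600 vs. RTX 3050 comparison shows, real-world performance per teraflop varies by architecture, which is why a roughly 15% error band is sensible.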