What if Apple decided to release their “M” series processors as desktop CPUs? How would that change the market?
It would also be interesting to see Samsung release desktop Exynos chips, or Qualcomm ship desktop “X” processors that are more powerful than the laptop chips.
p.s. I know they would never do anything like that, but it would be interesting to imagine how the market would change with more competitors
Dell is already releasing Qualcomm SoC Latitudes. There are bound to be compatibility issues, but performance-wise it’s kinda undeniable that this is where the market is going. It is far more energy efficient than an Intel or AMD x86 CPU and holds up just fine. The main downsides you’ll see could likely be resolved with on-chip ASICs, which is how Intel keeps 4K video from being choppy on low-end APUs, for example. Compared to the M4, Qualcomm’s offering is slightly better at multithreaded performance and slightly worse at single thread. The real downside to them is the reliance on raw throughput for tasks that both brands of CPUs have purpose-built daughter chips for.
Is that actually true, when comparing node for node?
In the mobile and tablet space, Apple’s A series chips have always been a generation ahead of Qualcomm’s Snapdragon chips in terms of performance per watt, and Samsung’s Exynos has always been even further behind. That’s obviously not an instruction set issue, since all 3 lines are on ARM.
Much of Apple’s advantage has been a willingness to pay for early runs on each new TSMC node, and a willingness to dedicate a lot of square millimeters of silicon to their gigantic chips.
But when comparing node for node, last I checked AMD’s lower-power chips designed for laptop TDPs have similar performance and power draw to the Apple chips on that same TSMC node.
Do you have a source for AMD chips being especially energy efficient? I don’t consider them to be even close. The M3 is around 190 Cinebench points per watt whereas the Ryzen 7 7840U is around 100. My points-per-watt data doesn’t include Snapdragon X yet, but it’s generally considered the multithreading king on the market and it runs at a significantly lower TDP than AMD. SoCs are inherently more energy efficient. My memory of why is that x86 instructions can encode more complicated operations, whereas ARM is strictly limited to composing simpler instructions as building blocks when that complexity is required.
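For what it’s worth, here’s roughly how that points-per-watt figure falls out of a benchmark score and an average package power. The scores and wattages below are placeholders I picked just to reproduce the ratios above, not measured values:

```python
# Rough points-per-watt comparison. The score and package-power figures
# below are illustrative placeholders, not measurements.

def points_per_watt(score: float, package_watts: float) -> float:
    """Efficiency metric: benchmark score divided by average package power."""
    return score / package_watts

chips = {
    # name: (multicore score, average package power in watts) -- assumed numbers
    "Apple M3":      (5700, 30.0),
    "Ryzen 7 7840U": (5600, 56.0),
}

for name, (score, watts) in chips.items():
    print(f"{name}: {points_per_watt(score, watts):.0f} points/W")
```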
Like I mentioned though, there are tasks where x86 can’t be beat, but that’s because they use on-chip ASICs for hardware-accelerated encoding/decoding, and nothing is more efficient at a task than a purpose-built, task-specific ASIC or FPGA.
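You can see that decode block at work yourself. A quick sketch (Unix-only, assumes ffmpeg is installed and some local input.mp4 exists) comparing the CPU time of a pure software decode against one routed through the fixed-function decoder:

```python
# Compare CPU time spent decoding a clip in software vs. via the on-die
# decode block, using ffmpeg's null output. "input.mp4" is a placeholder.
import resource
import subprocess

def decode_cpu_seconds(extra_args: list) -> float:
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(
        ["ffmpeg", "-v", "error", *extra_args, "-i", "input.mp4", "-f", "null", "-"],
        check=True,
    )
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    # Delta in child user + system CPU time attributable to this ffmpeg run.
    return (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)

software = decode_cpu_seconds([])                    # pure CPU decode
hardware = decode_cpu_seconds(["-hwaccel", "auto"])  # fixed-function decoder if available
print(f"software: {software:.1f} CPU-s, hardware: {hardware:.1f} CPU-s")
```

On anything with a working decode ASIC the hardware run should burn a small fraction of the CPU time of the software run.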
I remember reviews of the HX 370 commenting on that. The problem is that chip was produced on TSMC’s N4P node, which doesn’t have an Apple comparator (the M2 was on N5P and the M3 was on N3B). The Ryzen 7 7840U was on N4, one year behind that. It just shows that AMD can’t even get on a TSMC node within a year or two of Apple.
Still, I haven’t seen anything really putting these chips through their paces and actually measuring real-world energy usage while running a variety of benchmarks. And benchmarks themselves only correlate to specific ways that computers are used and aren’t necessarily supported on all hardware or OSes, so it’s hard to get a real comparison.
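On Linux x86 at least, that kind of measurement is scriptable via the RAPL powercap counters. A minimal sketch, assuming the standard powercap path (exact path and permissions vary by machine, and may need root) and using stress-ng purely as a stand-in workload; Apple and Qualcomm parts need platform-specific tools instead (e.g. powermetrics on macOS):

```python
# Read the RAPL package energy counter before and after a workload to get
# real energy used, not just an instantaneous power reading.
import subprocess
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package 0 energy, microjoules

def read_uj() -> int:
    with open(RAPL) as f:
        return int(f.read())

start_e, start_t = read_uj(), time.monotonic()
# Placeholder workload: swap in whatever benchmark you actually care about.
subprocess.run(["stress-ng", "--cpu", "8", "--timeout", "60s"], check=True)
end_e, end_t = read_uj(), time.monotonic()

joules = (end_e - start_e) / 1e6   # counter can wrap on very long runs; ignored here
seconds = end_t - start_t
print(f"{joules:.1f} J over {seconds:.1f} s, avg {joules / seconds:.1f} W")
```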
I agree. That’s a separate issue from instruction set, though. The AMD HX 370 is an SoC (well, technically a SiP, as the pieces are all packaged together rather than printed on the same piece of silicon).
And in terms of actual chip architectures, as you allude to, the design dictates how specific instructions are processed. That’s why the RISC-versus-CISC framing is basically obsolete. These chip designers are making engineering choices about how much silicon area to devote to specific functions, based on their modeling of how that chip might be used: multithreading, different cores optimized for efficiency or performance, speculative execution, specialized blocks for hardware-accelerated video, cryptography, AI, or whatever else, and then deciding how that fits into the broader chip design.
Ultimately, I’d think the main reason something like x86 would die off is licensing, not anything inherent to the instruction set architecture.