AMD published their take on the new FX series in a blog on their website. Obviously it’s a bit of an apologetic piece. I am disappointed, however, that it was not written by John Fruehe. Sure, he may want to hide behind the fact that he’s working for the server/workstation division again, but that never stopped him when desktop Bulldozers were discussed on consumer forums. No, after all the lies he spread on forums everywhere, John Fruehe is now entirely silent, and lets other people do the dirty work for him. Bad form, John, very bad form!
Anyway, moving on to the contents of this blog post: I’m not entirely sure if they are aware of it themselves, but they explain their own mistake if you put the following two passages together:
In our design considerations, AMD focused on applications and environments that we believe our customers use – and which we expect them to use in the future. The architecture focuses on high-frequency and resource sharing to achieve optimal throughput and speed in next generation applications and high-resolution gaming.
If you are running lightly threaded apps most of the time, then there are plenty of other solutions out there. But if you’re like me and use your desktop for high resolution gaming and want to tackle time intensive tasks with newer multi-threaded applications, the AMD FX processor won’t let you down.
So there you have it: what AMD believed its customers wanted was ‘time intensive tasks with newer multi-threaded applications’, rather than ‘running lightly threaded apps most of the time’.
If that is what really happened, then clearly AMD is completely out of touch with what their customers want. I could have told you that customers mostly run lightly-threaded apps. And yes, most games fall into that category; resolution has little to do with it. Higher resolutions may make you GPU-limited, but that doesn’t make your CPU any more of a game-oriented design: next year’s GPUs may no longer be limited at the same resolutions, and you won’t have a new CPU architecture ready when that happens. With today’s GPUs in an SLI/CrossFire setup, Bulldozer is already in trouble.
In fact, I *did* tell you on various occasions that single-threaded performance is still the most important factor (given that we can get at least 4 cores onto any modern CPU die, multi-threading is covered well enough). So where did you go wrong, AMD? I’m not the only person in the world who knows this. Plenty of experienced (assembly) programmers should be able to tell you exactly what kind of CPU most software requires. So I wonder: do you just not have skilled software people on your team? And if so, why did you never hire them, or at least talk to software developers at some of the larger development companies? Or if you do, why do we not see any of their input in the final result? Is this a problem caused by management?
Surely AMD cannot be so naive as to think that an architecture can be a success if it cannot run current and older software well enough? Intel tried that with both the Pentium 4 and the Itanium. Both were CPUs that could run current and older x86 code, but were not very good at it, often not even as good as their predecessors. They only showed their full potential in specifically optimized applications, but in the consumer market not enough of those applications surfaced, so the CPUs never really overcame their performance problems. So I wonder who at AMD could possibly have thought that they could pull this off, when even Intel could not.
The closing statement then is hard to believe:
We are a company committed to our customers and we’re constantly listening and working to improve our products. Please let us know what questions you have and we’ll do our best to respond.
If you are really committed to your customers, you’re doing it wrong. But I think it’s more of a case of a company that is NOT concerned about customers at all. They are just developing products in their own isolated world, and think that “If we build it, they will come”. We’ve seen the same on the GPU side, where OpenCL has yet to take off (not to mention GPU-accelerated physics). Meanwhile nVidia is actively working with developers to support CUDA, and it pays off: major applications such as Adobe PhotoShop and Premiere have CUDA acceleration. That is what customers want.
“Meanwhile nVidia is actively working with developers to support CUDA, and it pays off: major applications such as Adobe PhotoShop and Premiere have CUDA acceleration. That is what customers want.”
Again I say, don’t go dissing AMD video cards, I love them. 🙂 As for the CPU, you’re right.
You keep bringing up the Pentium 4 design, which at first glance is not so great, but isn’t it true that Sandy Bridge has some things that connect it to the Pentium 4? I thought I read this somewhere.
Well, I’m not dissing AMD video cards, but it is a fact that there are more applications with nVidia GPU acceleration out there than with AMD GPU acceleration.
Well yes, the Pentium 4 as a whole was not that good, but that doesn’t mean it’s all bad.
For example, the last Pentium 4/D CPUs were built on the same 65 nm process as the first Core2 series. Those Core2s had very low power consumption, and could overclock to incredible speeds at the time. So the 65 nm process itself was just fine.
Pentium 4 also introduced the SSE2 instruction set. This made it very fast at heavy floating-point calculations and things like video encoding/decoding (one of the few areas where Athlons couldn’t beat it, and even Core2 lost a few benchmarks in that area to the Pentium 4/D). SSE2 is still very successful in both Intel and AMD CPUs today.
Some other things the Pentium 4 introduced were HyperThreading and the trace cache. Core2 did not have those, but since Nehalem, Intel uses these technologies again. HT is now partly responsible for Intel’s good multithreading performance; Intel only needs 4 cores to beat AMD’s 6-core and 8-core CPUs.
And a variation of trace cache is now used to buffer decoded instructions when a loop is detected. This saves decoding overhead in loops, where applications spend most of their time. AMD still does not have HyperThreading or trace cache (apparently CMT was their attempt to come up with something to compete with HT, but clearly it does not work).
My view is that AMD has more or less decided to write off the enthusiast market as a priority. Bulldozer in its current form is essentially a server CPU that received the bare minimum of changes needed to work on the desktop. On servers, heavily-threaded usage is the norm, and single-threaded performance doesn’t really matter. On desktops, both are equally important. However, the profit margin on servers is much higher, and AMD wants back some of the market share they lost. At one time they were near 25% in the server market and they are currently under 5%. Interlagos may help bridge that gap. Keep in mind that GloFo’s 32nm process seems to have its worst thermal and power consumption issues at high clock speeds; the Interlagos Opterons run at ~2.5 GHz and should be much more efficient than the desktop FX chips. (Leaked TDP figures back this up.) In fact, it wouldn’t surprise me if they were also doing binning, and the desktop FX chips were basically dud Opterons that couldn’t meet their standards.
From a business perspective, all this makes sense, but it indicates a contempt for their enthusiast customers that most AMD fans are unable or unwilling to perceive.
It remains to be seen.
Intel has the same strategy as AMD: a single architecture serves both the desktop/notebook and server/workstation markets. Somehow Intel managed to keep the enthusiasts happy as well, and we have yet to see how well AMD’s ‘server’ design actually holds up against Intel’s server chips.
AMD’s CPU had better be pretty darn great for servers, because the desktop/notebook market will likely stick to Phenom and Llano for now. But somehow I doubt that Bulldozer is THAT good for servers.
If an 8-core Bulldozer struggles to keep up with a 4-core Sandy Bridge even in the most heavily-threaded applications, then how will a 16 core compete with Intel’s 10-core offerings?
AMD also has to fix that BSOD-inducing interrupt-handling issue.
No company would buy or upgrade a server farm only to find its internal systems crashing on it.
It seems you are right.
AMD announced on September 7th that the first Bulldozer-based Opterons were shipping. So where did they go? I have not seen any Bulldozer servers on sale yet, let alone any server/workstation reviews.
Could it be that OEMs ran into stability issues as well, and have delayed their Opteron products as a result (again, just like with Barcelona)?
All I got after searching around is this:
As for why the price list was suddenly pulled from the source website, that leaves much to speculate about.
And this article:
…almost makes the Bulldozer-based Opterons smell just like the long-lost Barcelona parts.
I’ve sent my Bulldozer back; it’s being swapped for a Phenom II X4 980.
I’m sad to do it, and it’s the first time I’ve ever had to consider doing this with AMD, but this chip just performs very poorly. The ludicrous BSODs in Windows were the last straw.