AMD Richland, another improvement that is not an improvement

Only moments after Intel introduced its new Haswell, AMD also introduced its successor to Trinity, codenamed Richland.

Well, I can keep this one short. Anandtech did some nice coverage of it, as usual. The focus is entirely on the GPUs, which makes sense, since that is the biggest improvement in Richland anyway, and the GPU is where the battle with Intel is being fought.

Richland, after all, is not an entirely new architecture, but merely an update of Trinity, reducing power consumption somewhat and increasing clockspeeds. It also officially supports DDR3 memory up to 2133 MHz now, where Trinity only went up to 1866 MHz. This gives the GPU a bit more room to breathe in bandwidth-heavy scenarios.
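For a rough idea of what that extra memory clock buys, here is a quick back-of-the-envelope sketch, assuming the usual dual-channel configuration with a 64-bit bus per channel (my assumption, not something spelled out in the review):

```python
# Theoretical peak bandwidth of a dual-channel DDR3 setup:
# transfer rate (MT/s) x number of channels x 8 bytes per 64-bit channel.
def ddr3_peak_bandwidth_gbs(mt_per_s: int, channels: int = 2) -> float:
    return mt_per_s * channels * 8 / 1000  # GB/s

trinity = ddr3_peak_bandwidth_gbs(1866)   # ~29.9 GB/s at DDR3-1866
richland = ddr3_peak_bandwidth_gbs(2133)  # ~34.1 GB/s at DDR3-2133
print(f"{trinity:.1f} GB/s -> {richland:.1f} GB/s "
      f"(+{(richland / trinity - 1) * 100:.0f}%)")
```

So on paper the iGPU gets roughly 14% more bandwidth to play with, provided the memory controller can actually make use of it.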

One thing to note in the comparison is that Anandtech only used the desktop Haswells here, with the HD 4600 iGPU. These are considerably slower than the Iris Pro 5200 iGPU from the previous Haswell iGPU article (which, at least for now, is only available in mobile parts). The 5200 has the special 128 MB of high-speed eDRAM (‘Crystalwell’); the 4600 does not. And unlike the 5200, the 4600 cannot quite keep up with AMD’s APUs yet.

As I said earlier though, both in my Trinity and Haswell articles, AMD will have trouble scaling up GPU performance. And Anandtech’s charts show exactly that. The difference between the 5800 and the 6800 is negligible. In fact, in one of Anandtech’s charts the 6800 even comes out slightly slower.

Richland and Trinity are THAT close. I couldn’t have asked for a better demonstration of my claims that AMD will have trouble scaling GPU performance in real-life applications.

At any rate, since Intel is keeping Iris Pro limited to mobile parts for now, AMD can still hold on to their GPU advantage on the desktop. Sadly Richland has not really managed to widen that advantage.


15 Responses to AMD Richland, another improvement that is not an improvement

  1. snemarch says:

    It’s interesting that Intel are keeping the 128meg (meg, right, not kilo? :P) cache exclusive to the laptop versions – I guess the reasoning (if not silly-marketing-drones or being able to supply enough parts?) could be that high-end desktop CPUs are pretty much always matched with discrete GPUs…

    Even then, because Crystalwell functions as L3 cache, it should be pretty sweet even for a discrete-GPU system (any benchmarks around yet?), and probably not too shabby either if you’re doing GPGPU calculations off the side on the iGPU.

    I think it’s pretty cool how Intel have ramped up their iGPU efforts – it’s taken them long enough (probably a marketing decision rather than lack of talent), but they’ve been showing real progress on the last couple of generations.

    Still, while I do believe AMD deserves a fair amount of mocking for their performance the last bunch of years, I don’t want to *gloat* about it – I’d love to see them pull another “Athlon64”, it’s a shame if we end up with only Intel offering viable high-performance x86 CPUs.

    • Scali says:

      Yea, 128 MB… I guess too much retroprogramming is having its effects… 😛
      Technically it’s an L4 cache though, since the CPU already has L1, L2 and L3 on-die.
      I think the reason why they don’t include it on the desktop models has to do with how they view the market. It is probably a relatively expensive option, since it requires a second die and special packaging.
      They probably figured that the demand for such a CPU would not be high enough to make it interesting. Most desktop users would still use a discrete GPU anyway, since it is still much faster, in which case they’d go for the cheaper non-Crystalwell option.
      And then there are users who don’t care about graphics performance at all (regular office work etc), who would not need the extra performance that Crystalwell offers in the first place.

      If Crystalwell is like previous multi-die solutions (like the separate L2 cache dies on Pentium Pro/II, or more recently, the separate GPU die on Clarkdale/Arrandale, which Sandy Bridge then pulled on-die), Intel will probably integrate it into a single die in a future generation, at which point it will probably become available to all desktop users as well.

      “it’s a shame if we end up with only Intel offering viable high-performance x86 CPUs.”

      Hasn’t it been like that since 2006 though? I mean, Athlon X2/FX could still hang with the slower Core2 Duo processors, but Intel also had its Core2 Quad, which was completely untouchable (even with AMD’s failed FX Quad attempt).
      AMD’s initial Barcelona was a complete disaster, and by the time they had their not-so-disastrous Phenom II available, which was almost competitive with Core2 Quad, Intel introduced Nehalem, and raised the bar of high-end x86 again.

      AMD can only wish they were as close to Intel’s high-end now as Intel’s Pentium 4 was to their Athlon64 back then. That gap was peanuts compared to today’s situation. I’d say that since 2006, Intel’s high-end has always been at least 20% faster than AMD’s high-end, and often more than 40%. And that’s just talking about performance. If we were to measure performance-per-watt, things would look even more bleak.
      Today we have AMD’s FX8350, which falls somewhere between the i5 2500 and the i7 2600. And yes, we’re talking about Sandy Bridge, which is:
      1) Intel’s mainstream line, not its performance line
      2) Two generations old
      It seems that most people especially aren’t aware of 1), and what it really means.

      But I suppose it’s pretty cool in a way. Because Intel is not bothered about competing with AMD on x86 performance anymore, and is taking things slowly with its performance parts, we see interesting stuff like Xeon Phi and the new Iris Pro iGPUs with Crystalwell, aimed at competing with nVidia and ARM SoCs instead. Which I think is a whole lot cooler than faster x86 processors anyway. x86 is boring.

      • nickysn says:

        Not surprising, given the fact that Richland is still a Piledriver CPU with a VLIW4 GPU. What do you think about Kaveri?

      • Scali says:

        I think it is somewhat surprising that despite the higher clockspeeds on CPU and GPU, the performance is still pretty much the same as the 5800.

        What do I think about Kaveri? What kind of a question is that?

  2. nickysn says:

    Well, the clock increase is only about 5%. The gaming framerates improve by 2.5-3.5% on average as far as I see, which isn’t that surprising given that AMD’s GPU is probably bottlenecked by memory bandwidth. Note that they tested Trinity with the same DDR3-2133 memory they used for Richland, so the effect of the memory bandwidth increase isn’t seen in these tests.

    • Scali says:

      The CPU went up nearly 8% base, nearly 5% turbo. GPU went up 5.5%.
      And as Anandtech points out, AMD has improved turbo management (or at least the CPU should run less hot, allowing it to turbo longer), so it should hold its clockspeed advantage more of the time. Given that the actual framerate improvements are under 3% in nearly all cases (in some cases even zero or negative), the scaling seems rather poor.
      It sounded better on paper.
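      For reference, a quick sketch of where those percentages come from, using the commonly quoted launch clocks for both APUs (the exact clock figures are my assumption here, not taken from the article):

      ```python
      # Clock bumps from the A10-5800K (Trinity) to the A10-6800K (Richland).
      # The launch clocks below are the commonly quoted specs (assumed).
      clocks_mhz = {
          "CPU base":  (3800, 4100),
          "CPU turbo": (4200, 4400),
          "GPU":       (800, 844),
      }
      for name, (trinity, richland) in clocks_mhz.items():
          gain = (richland / trinity - 1) * 100
          print(f"{name}: {trinity} -> {richland} MHz (+{gain:.1f}%)")
      # Roughly +7.9% base, +4.8% turbo and +5.5% GPU on paper -- against
      # framerate gains of under 3% in nearly all of Anandtech's tests.
      ```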

  3. Haswell says:

    Intel trounces AMD in OpenCL by such a huge margin, it’s not even funny.

    • It’s not exactly apples-to-apples though, is it?

      The A10-6800K (Black Edition) costs £119 from Overclockers.co.uk, whereas the Haswell chip in question, the i7-4950HQ, can’t actually be purchased at all and is only available on laptops. The Core i7-3770K, an Ivy Bridge chip, costs £269.99. The Haswell i7-4770K is also £269.99 (although there is a special offer today!)

      So you can buy something 41% faster than the A10-6800K, but at 2.26 times its price.

      • Scali says:

        Depends on how you look at it. Price-wise it’s not a good match. But it is the fastest AMD iGPU compared to various Intel iGPUs in OpenCL. Even the i3-3225 is slightly faster than the AMDs, and that is in roughly the same price range as the A10-6800K.

        I would argue that price ranges are very artificial in this particular area though, since Intel doesn’t offer its high-end iGPUs on its low-end CPUs.
        I prefer to look at this benchmark purely as a way to see the state-of-the-art of iGPUs from both manufacturers. And that shows that Intel is well ahead in OpenCL (which is no small feat, since AMD actually bought an entire GPU company to try and get an advantage in iGPUs, where Intel did it on its own strength, and is rather late to the party with OpenCL).

      • Scali, I can’t seem to reply to your post directly.

        The comparison without context isn’t valid either though; we really are comparing quite different things. The i3’s performance is slightly better at OpenCL, but much worse for actual OpenGL/DirectX, i.e. as a GPU. So it’s a mixed bag. Plus the A10-6800K is way ahead of the i3 in CPU performance terms.

        It’s not like it’s state-of-the-art as a GPU either. It’s pretty clear that they could integrate a faster GPU from the rest of their lineup. Not doing so is simply their choice.
        That means that really all we’re comparing is the result of what the two companies’ marketing departments have told their engineers to target.

        Not very revealing.
        Personally I don’t care beyond not seeing biased reporting on the issue.

        So if it was my money, and I am thinking of building a small HTPC soon, then the AMD chips are priced much better than the Intel ones. The A10-6800K is the top Richland CPU/GPU, and it’s priced almost equal to the bottom-end Intel offering, the i3 range, which it beats handily except in one benchmark where it’s only very slightly slower. That makes it a very simple win for my wallet.

        If I wanted more GPU performance then I’d still go AMD this time because they’re so cheap, but I’d pick up a cheap discrete GPU. I should add that my main desktop and my laptop are both Intel i7’s with nVidia GPUs so it’s not exactly brand loyalty affecting my decision or argument here 😉

      • Scali says:

        Now you are pulling in OpenGL and DirectX, while we were merely comparing OpenCL performance, nothing more.

        As I already said, it’s all very artificial (read: biased). Intel pairs its high-end GPUs with far more powerful CPUs than AMD does/can. So yes, obviously different strategies mean they target different price ranges.

        But looking solely at the GPU/OpenCL performance that Intel can get out of their latest iGPU architecture, they seem to have a clear lead over AMD’s latest iGPU architecture. That is something you can’t argue with (although it looks like you’re trying hard to deflect attention from this point, for whatever reason).

        I’m not all that interested in price comparisons or buyers advice myself. I just like to look at the state-of-the-art of the technology.
        And Intel had to come from very far, but they have managed to develop some GPUs that are actually competitive in terms of features and performance (not price perhaps, but as I say, that’s not a technical issue, and does not interest me).
        And two things really stand out: OpenCL and tessellation performance are considerably better than AMD’s latest offerings. So it’s good to see Intel finally making decent all-round iGPUs. They have all the latest features, and they’re not just a case of checkbox features either (like the DX10 support on my Intel X3100 for example… it works, but don’t try to actually use it, because it’s incredibly slow, much slower than DX9, which already does not allow you to actually play many games). With these iGPUs you can actually play recent games properly, and OpenCL will actually accelerate applications.

      • Oh, and on the Intel OpenCL comment: they didn’t buy a GPU company, but they’ve licensed a sackload ($250M per year) of patents from nVidia (http://www.dailytech.com/NVIDIA+to+License+Kepler+GPU+Core+for+Mobile+ThirdParty+SOCs/article31792.htm), so I think it’s fair to say that they’re still having to pay their way back into the iGPU performance game.

      • Scali says:

        That’s no surprise. nVidia has a huge portfolio of patents, partly from their own developments, and partly from acquiring 3dfx. All GPU vendors are ‘in debt to nVidia’ in that sense. Intel probably has to pay AMD for a number of GPU patents as well. And Intel in turn probably owns a number of patents that nVidia and AMD need to license in order to build functional and competitive GPUs.

        That doesn’t take away the fact that Intel developed its own GPUs in-house, where AMD acquired ATi.
        (I hope you understand that licensing a patent is not the same as licensing a GPU design… that link made no sense).

      • Not trying to deflect attention, just trying to point out that pulling out these particular facets is, as someone who really likes reading your blog, quite bizarre to read.

        I’m also not trying to detract from Intel’s GPU advancements; they’re very welcome after the rubbish they started out with. All I am saying is that after reading this and the Haswell article, it reads like pure bias; there’s no technical value in it.

      • Scali says:

        How is that bizarre? I wasn’t the one who posted those OpenCL benchmarks. Heck, this blog isn’t even about Intel/Haswell/Iris Pro at all, it is about AMD’s Richland.
        I merely responded to those benchmarks, and since the posted benchmarks are only an OpenCL test, I stuck to discussing merely the OpenCL performance, as generalizing these OpenCL benchmarks to overall performance would be bizarre indeed (OpenCL does not factor in CPU or graphics performance at all).

        So what are you saying? That my Richland and Haswell articles are biased (and if so, why)? Or just the comments I made on the reply with the OpenCL benchmarks? Because I don’t see those as biased, within the limited context of OpenCL (which I believe I clearly stated, and never stepped out of).
