AMD introduces Trinity: When is an improvement not an improvement?

The first reviews of AMD’s new Trinity line of APUs are coming in. The successor to AMD’s Llano, Trinity replaces the Stars-based CPU cores with Piledriver-based ones. Piledriver, as you may know, is an update of the Bulldozer architecture.

Bulldozer was not exactly a strong performer in the performance-per-watt area. Since Trinity, like Llano, is mainly aimed at mobile platforms, it needs to do a lot better than Bulldozer here: battery life is all-important for mobile systems.

And AMD managed to improve on the power consumption issues. One big improvement is that Bulldozer used hard-edge flip-flops everywhere. Piledriver has replaced these with soft-edge flip-flops (which are not very sensitive to clock jitter) where possible, which reduces power considerably. Sounds a bit like what ex-AMD engineer Cliff A. Maier was talking about: Bulldozer was not hand-optimized, but just a ‘brute-force’ automated design.

AMD also silently introduced the F16C instruction set extension. This extension offers instructions to convert between 16-bit (half precision) floating point and 32-bit (single precision) floating point. These could be useful for interoperation with the GPU, which can read from and write to 16-bit float buffers. And that is what an APU should be all about: heterogeneous computing.
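To make the conversions concrete, here is a rough sketch in Python of what the new instructions (VCVTPS2PH and VCVTPH2PS) compute, using struct’s binary16 format as a stand-in for the actual hardware operations:

```python
import struct

def float_to_half_bits(x: float) -> int:
    # Narrowing conversion to IEEE 754 binary16 (what VCVTPS2PH does
    # in hardware), via struct's half-precision 'e' format.
    return struct.unpack('<H', struct.pack('<e', x))[0]

def half_bits_to_float(h: int) -> float:
    # Widening conversion back to full precision (what VCVTPH2PS does).
    return struct.unpack('<e', struct.pack('<H', h))[0]

print(hex(float_to_half_bits(1.0)))  # -> 0x3c00 (exactly representable)
print(half_bits_to_float(float_to_half_bits(0.1)))  # -> 0.0999755859375
```

The round trip through half precision shows exactly the kind of precision loss that reading from and writing to a GPU half-float buffer implies.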

So how is the improved Piledriver not an improvement, then? Well, Bulldozer was a very large and power-hungry chip. In order to stay within the same power envelope as Llano, Trinity can only use a 2-module (4-core) CPU at most. Since Llano also used a quad-core CPU, and the Stars architecture was more efficient than Bulldozer, the new Trinity isn’t all that spectacular.

Power consumption in general is more or less the same as it was with Llano. Power management has improved for some scenarios, so you will get slightly better battery life when idle (but when are you ever completely idle for a long period of time?). On the other hand, under workloads such as H.264 playback, battery life is still worse than Llano’s. It is certainly not a leap forward, and AMD was already trailing well behind Intel here.

And performance? Well, it is faster than Llano, just not by a whole lot. And ironically enough, for a CPU that was designed more for multithreaded performance than for single-threaded tasks, it even loses to Llano in the Cinebench multithreaded benchmark:

So Piledriver is cutting it pretty close on the CPU part. Which is probably what lets it down in the next part: GPU performance.

On paper, the GPU is much faster than Llano’s. Up to 50%. And in some tests, you may get close to these results, such as 3DMark Vantage in Performance mode:

However, in other tests it is a different story. As I already pointed out in the comments when Ivy Bridge launched: it’s not as simple as just strapping on a faster GPU. The CPU needs to be fast enough to drive that GPU. In fact, 3DMark Vantage already shows us the elephant in the room: The ASUS N56VM scores about the same as Trinity, with Intel’s HD4000 GPU. Now it goes without saying that the HD4000 GPU is much slower than AMD’s Trinity GPU. But since the Core i7-3720QM CPU is much faster than Trinity’s, it makes up for it.

We see the same in some actual games, such as Batman: Arkham City:

Or Dirt 3:

Or Skyrim:

The Intel system is faster, despite having a slower GPU. Note also the very small differences between Trinity and Llano here. It looks like the CPU is holding Trinity back. The GPU may be faster, but the CPU cannot leverage it.

It seems that both Intel and AMD have very much unbalanced APUs. AMD could do with a better CPU, and Intel could do with a better GPU. At the very least it shows that games are not all about GPU performance; the CPU is equally important. For pretty much anything else, Intel is still the winner hands-down: they offer much better performance coupled with much better battery life. Llano and Trinity were mostly supposed to offer value and better gaming performance. But Intel has already caught up enough on gaming performance to be competitive in at least some titles.

This entry was posted in Hardware news. Bookmark the permalink.

37 Responses to AMD introduces Trinity: When is an improvement not an improvement?

  1. forrest says:

    Anand’s article is wrong on the gaming side. I have an HP laptop with an A10-4600M APU. It’s not an AMD reference design, but it is the same on the hardware side. The new Radeon IGP is pure awesome and really faster than the HD Graphics 4000. The only thing you need to do is test at high graphics settings.
    Some test from my own:
    Batman Arkham City:
    1366×768 medium no AA – 42 fps (just like Anandtech)
    1366×768 high no AA – 38 fps
    1366×768 high + FXAA – 37 fps
    1366×768 high + MLAA 2.0 (driver) – 36 fps

    Compare with the Intel i7-3720QM:
    1366×768 medium no AA – 49 fps (just like Anandtech)
    1366×768 high no AA – 29 fps
    1366×768 high + FXAA – 26 fps

    As for Skyrim: it is still playable on Trinity at medium settings with 4xAA/16xAF (around ~30 fps). With the HD Graphics 4000 you get ~10 fps.

    I tested Just Cause 2 too. At 1366×768 with low settings the A10-4600M is able to achieve 36 fps. At high settings with 2x AA it gets 33 fps.

    I think these are important things, but I know that you are an AMD hater. So just delete my post if you want. 🙂

    • Scali says:

      I think you’re contradicting yourself by saying Anandtech is wrong, then admitting that you got the same framerates as Anandtech using the same settings they used.
      Aside from that, I don’t think you understood what I wrote. I never said the HD4000 is as fast as the Trinity GPU. On the contrary (“Now it goes without saying that the HD4000 GPU is much slower than AMD’s Trinity GPU”). I just pointed out that the Trinity GPU quickly becomes CPU-limited. To a point where even the much slower HD4000 can outperform it in some cases (something many AMD fans deemed impossible).
      You also ignore Llano completely. I also pointed out that in CPU-limited scenarios, it barely performs better than Llano, despite the GPU being considerably faster. It’s not just an AMD vs Intel thing, let alone AMD hating.

      • forrest says:

        No, Anandtech just uses a pointless test with medium quality settings. I know that earlier IGPs were not fast enough to play games at high settings, but things are changing. They/we need to change the test platform as well.

        What you said is still not true. The HD Graphics 4000 is still a much, much slower IGP.
        If you hit a CPU limit, then scale up the graphics. This new IGP can handle it. On Ivy Bridge you just get unplayable frame rates. These things are important, but as I said, I understand that you will never highlight where AMD is really beating Intel. There is no problem with fanboyism and brand loyalty … just don’t lie to your readers.

      • Scali says:

        Go complain to Anandtech if you don’t like their settings.
        And what I said is not true? I literally said the HD4000 is much slower!
        Don’t accuse me of lying.
        Also, it’s not as simple as just scaling up the graphics. Your own figures show that framerates go down at higher settings. ‘Playable framerates’ is a very personal thing: not everyone likes to play at ~30 fps.
        I don’t care about any of that. I’m just pointing out that Trinity’s CPU is holding back its GPU. Just as I’m pointing out that Ivy Bridge has the exact opposite problem (how’s that for fanboyism and brand loyalty?).
        In case you didn’t realize, I don’t do hardware reviews. I just point out things that other reviewers miss (heck, I even point out the new F16C extensions!). I care about hardware capabilities and forward-looking designs. Not games. This blog is not for gamers, it’s for developers.

  2. forrest says:

    Why don’t you link those pictures where Trinity gets twice the fps of the i7-3720QM? This is a really serious question. 🙂

    • Scali says:

      1) I link to the Anandtech review, and merely comment on some peculiarities.
      2) It wouldn’t be ‘fair use’ to copy virtually all of Anand’s article. There are many other pictures I haven’t linked to.

      The fact that I link to the Anandtech review, and my choice of words (I don’t generalize anything, I specifically say this only goes for “some tests” and “some actual games”), give people enough information to know that there is more to the review than just the few points I focus on, and they can read it for themselves.
      I just use Anandtech’s article as proof of the performance predictions I made earlier, about the integrated GPU being limited and not being able to reach the suggested 50% performance improvement in most cases (which in itself is a continuation of the predictions I made about Llano’s integrated GPU not being as fast as a discrete card with similar specs).

      Now here’s a question for you: Why do you keep harping on this issue? You seem to have huge problems with the fact that Trinity does not win in every benchmark. Why?

    • Tom says:

      Fuck you Forrest, you’re obviously the AMD fanboy.

  3. Nick says:

    F16C is already supported by Ivy Bridge.

  4. Nick says:

    You state that both AMD and Intel have unbalanced APUs, leading to suboptimal performance for both. In other words, would an Intel CPU with an AMD iGPU theoretically offer substantially higher game performance?

    I’m open to this view, but I was wondering if maybe the real cause of disappointing Trinity graphics performance improvement is simply bandwidth? Or perhaps a combination of both?

    I just wonder about this because soon we’ll have AVX2 with twice the throughput and gather support. This is a serious threat to heterogeneous computing since it’s far easier to develop for (compilers can easily auto-vectorize for a vertical SIMD instruction set like AVX2). However, heterogeneous computing proponents claim that iGPUs will still have substantially higher throughput. Regardless of how much die space that might cost them, I’m afraid they’ll just run out of bandwidth. And mitigating that with a cache or eDRAM would cost even more.
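    (As an illustrative aside: the gather support mentioned here lets a single vector instruction, such as AVX2’s VPGATHERDD, perform several indexed loads at once, which is what makes loops with indexed accesses amenable to auto-vectorization. Schematically, in plain Python:)

```python
def gather(table, idx):
    # One AVX2-style gather (e.g. VPGATHERDD): several indexed loads
    # expressed as a single vector operation, simulated elementwise here.
    return [table[i] for i in idx]

lut = [10, 20, 30, 40, 50, 60, 70, 80]
print(gather(lut, [0, 2, 4, 6]))  # -> [10, 30, 50, 70]
```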

    Unfortunately I don’t have any APU to test the bandwidth bottleneck hypothesis myself. Either way I’m betting on AVX2 for my throughput computing needs since it will be easier and become more ubiquitous…

    • Scali says:

      Well, bandwidth is probably at least part of the problem… but I don’t think we can say it’s the real cause. Ivy Bridge has to work within the same limits in terms of bandwidth: they both use dual channel DDR3 shared between CPU and GPU. So they both should have more or less the same bandwidth limit, give or take a few % for whichever solution is more efficient.
      I think the real test would be the fillrate tests from 3DMark or such. That would show how limited the memory on the APUs is. But as usual, reviewers never run the tests you REALLY want to see 😛
      I think however that a game like Portal 2 is the closest thing to that, as it is a relatively simple game depending mostly on texturing performance, and having only limited shader complexity. And that is exactly the sort of game where Trinity pulls ahead.

      I would logically deduce this: AMD is smart enough to not put a GPU into Trinity that is limited to the point where it can’t even outperform Llano and HD4000. That wouldn’t make any sense at all. They’d waste all this die area and power on a powerful GPU, and run into a brick wall of memory bandwidth.
      The CPU explanation makes sense too: AMD simply can’t make better CPUs than this. This improved Bulldozer architecture is the best they have.

      As for heterogeneous computing… well, it depends mostly on the balance between CPU and GPU. Intel’s current configuration would favour the CPU in most cases, no doubt. AMD’s CPU is considerably weaker, while its GPU is stronger, so it probably makes more sense to use the GPU for some tasks.
      I personally don’t really believe in APUs for heterogeneous computing at all. The GPU is considerably weaker than a discrete card. GPGPU doesn’t really get interesting until you have a big GPU with thousands of shader processors and its own dedicated high-bandwidth memory subsystem.
      iGPUs are still orders of magnitude slower than high-end GPUs, so it’s just a case of getting lucky here and there, if a certain task still runs faster on GPU than on CPU. I saw one review that compared CPU and GPU tasks (can’t recall which), and although AMD’s GPU was slightly faster than AMD’s CPU, Intel’s CPU beat both.

      • Nick says:

        A fillrate test would be bottlenecked by the number of ROPs, and probably not exhaust the bandwidth. Shaders with a high TEX ratio could do the trick, but the CPU side still needs bandwidth too to do something useful. So the iGPU might actually run out of bandwidth sooner in a real world application.

        As the compute density keeps increasing exponentially, do you expect the bandwidth bottleneck to be addressed with local memory (cache / eDRAM), aggressively increasing bus width and speed, improving data locality with out-of-order execution (using fewer threads), or some combination of these?

        Out-of-order execution may seem power inefficient, but it has been suggested that the successor of AVX2 will extend it to 1024-bit instructions (there’s room for that in the VEX encoding) and execute them in four cycles on the 256-bit units. This lowers the instruction rate without lowering the throughput, saving lots of power while keeping all advantages.

        And that would basically mean the end of heterogeneous computing since power efficient AVX-1024 units could be suitable for any high throughput workload, including graphics. Do you have any thoughts on the long term strategy of AMD versus Intel, heterogeneous versus homogeneous?

      • Scali says:

        Perhaps I should have been more specific: the fillrate tests in 3DMark include multitexturing, so they stress both the texture units and the ROPs, making them the perfect synthetic bandwidth test.
        Indeed, it’d still be best-case, since the CPU won’t be taking any bandwidth during the test… but I’d be highly surprised if Trinity wasn’t considerably faster than Llano, and Llano considerably faster than the HD4000 in that test.

        GPGPU already uses local memory and out-of-order execution. Just in a different way from CPUs.
        GPUs have a shared cache, over which you have more direct control than the L1/L2 cache of an x86 CPU. And the scheduling is done on a thread-group basis. All threads run the exact same code (no longer entirely true with Fermi/Kepler, which can run multiple kernels at a time), and there is only one program counter for the entire thread group. The scheduler just picks the first thread group that has all its operands ready (based on a simple scoreboarding approach).
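        The scoreboarding approach can be sketched in a few lines of Python (purely illustrative, with made-up data structures; real hardware tracks operand readiness per register):

```python
def pick_next_group(groups):
    # Scoreboard-style issue: scan the resident thread groups in order
    # and pick the first one whose operands are all ready.
    for g in groups:
        if all(g["operands_ready"]):
            return g["id"]
    return None  # every group is stalled (e.g. waiting on memory)

groups = [
    {"id": 0, "operands_ready": [True, False]},  # still waiting on a load
    {"id": 1, "operands_ready": [True, True]},   # ready to issue
    {"id": 2, "operands_ready": [True, True]},
]
print(pick_next_group(groups))  # -> 1
```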

        If and how CPUs and GPUs will converge on this matter, I have no idea.
        I would say that it makes sense to have two different approaches to parallelism, because each type of processor can address a different type of problems.
        Taking Larrabee as an example, it appears that even Intel agrees that more than one road leads to Rome.
        But how this fits in with APUs, that’s a different story. Currently Intel only dedicates a very small area to its GPU, and AMD does more or less the opposite. It’s hard to really reap the benefits of either approach if you have to compromise like this.

        AVX2 and beyond is only as powerful as the CPU that implements it. I mean, if you take an extreme… say an Atom-class CPU with AVX2+ is combined with a GTX680-class GPU in an APU-package (including a memory subsystem that gives the full bandwidth the GTX680 wants)… Well, then it’d be a pretty tall order for AVX2+ to outperform the GPU, no matter how clever AVX2+ itself is.
        Conversely, as long as Intel combines its high-end CPUs with low-end GPUs, the CPU remains the way to go.

        My guess is that Intel will continue to focus on high-end CPUs, and GPUs will continue to just be ‘good enough’ for the average user, so they will remain relatively poor GPGPU performers. Intel will continue to evolve AVX to try and take the sting out of GPGPU. So far they have it under control reasonably well, especially with features like QuickSync to sweeten the deal. Video encoding/transcoding was long thought to be a key area for GPGPU acceleration, but Intel completely dominates there.
        For AMD the story is probably the other way around: AMD has not been able to keep track with Intel’s CPU developments, so AMD is using its superior GPU technology to try and compensate for that. So I think AMD’s focus will be more and more on the GPU portion of the APU.
        This causes a problem for Intel, as discrete GPUs also benefit from more GPU-accelerated applications. That is why they have Larrabee as the backup plan. If application focus shifts too much from CPU performance to GPU performance, Intel may start to offer their own discrete GPUs to compensate. And they will probably start to beef up the iGPUs as well, from that point on.

      • Scali says:

        Anandtech included some 3DMark texture fill tests in their HD 2500 review:

        It does not include Trinity, but it shows clearly that HD4000 has nowhere near the texturing power of Llano. So that supports what I said earlier: Trinity is not just held back by bandwidth. It looks like the CPU is the primary culprit.

  5. Hiram says:

    Scali, I’m impressed that you’re still able to keep your temper after having to deal with so many of these brain-dead pseudo-religious AMD lunatics for so long.

  6. NewImprovedjdwii says:

    “If you get CPU-limit, then scale up the graphics.”
    Uh, no. By scaling up the graphics you don’t lower the CPU usage, you just put more demand on the GPU. But I do agree that Trinity is a pretty good product for a casual gamer, just like Llano was. I own an A8-3520 laptop and I can play games on it just fine with decent settings. I don’t need to max out every game to be happy; that’s what my desktop is for.

    Also, Scali, you have a reputation, man. Whether it’s good, I don’t know? 🙂

    • Scali says:

      Thing is, playing games with ‘decent settings’ is something you can do on Intel’s IGPs as well now. They even run some games better than AMD. Which I think is not exactly impressive, given that AMD dedicates a much larger portion of the die to their GPU, and the fact that AMD is a GPU company. Trinity isn’t even that much faster than Llano.
      AMD should bring more with their APUs.

      In fact, did you see this?
      Ivy Bridge beating a HD5770 in TessMark at the highest settings.
      And as you can see here:
      Ivy Bridge would also beat the HD5870, because it is just as slow as the HD5770 here.

      • zlatan says:

        This is not true, Scali. I bought a Core i5-3570K processor with HD Graphics 4000, and my girlfriend’s PC (A8-3870K) is able to run games with higher graphics settings, and the frame rate is still higher. I admit that the HD Graphics 4000 is maybe faster at low or medium quality settings, but with higher resolution, higher graphics settings and AA … it sucks.
        We love Diablo 3, and we actually play the game a lot nowadays. She gets 30+ fps at Full HD with high settings, with only the shadows set to medium. With Ivy Bridge I only got 15-18 fps. For a playable frame rate I need to set a lower resolution (1440×900), which is really nasty on a 24-inch monitor, and I still get some stuttering. I have even noticed that the shadows are blocky with HD Graphics 4000. Maybe it’s an application issue, but I have not seen this problem with AMD.
        Minecraft is another example. This is my favorite game, and I am not able to play it at the highest settings with my new PC. But what is shocking is that the AMD A8-3870K is able to run the game at 70-80 fps. Four times faster than Ivy Bridge! This is not what I expected.
        Synthetic tests are useless; I’m talking about gaming. AMD is still superior from this perspective. Highlighting those benchmarks where Ivy is faster proves nothing. There are some tests at Anandtech where Ivy is only half as fast as Trinity.
        No problem with Ivy, I like it, but the HD Graphics 4000 sucks in games. I saw it with my own eyes. But it’s not a problem, because I’ve already ordered a Radeon HD 6670 GDDR5. 🙂

      • Scali says:

        What do you mean “not true”?
        I didn’t say it was as good or better than Llano/Trinity in all games/settings/etc.
        Merely that the HD4000 is good enough to play games on. Which it is, as long as you keep the graphics settings reasonable.

        Which means a lot, for an Intel IGP. Just a few years ago, you couldn’t even run most games at all, because they’d either not even start because of driver bugs or just lacking functionality… or if they did run, the game was too slow to play, even at the lowest settings.

        The fact that Intel beats AMD even in just one or two games/benchmarks is already a huge milestone in the history of Intel GPUs. They have solved most of their driver issues, they support full DX11 and OpenGL 4.0, and performance is slowly catching up as well.
        You have to admit, Intel made more of an improvement on the GPU from Sandy Bridge to Ivy Bridge than AMD did from Llano to Trinity.

  7. John Dorman says:

    Gee whiz, I wish AMD would stop producing their obviously completely impotent products and drop out of the CPU market, so Intel can have a monopoly that allows it to dramatically slow its rate of innovation and raise its prices on literally every chip. That sounds like a win for the consumer to me. So I will be an Intel shill and cherry-pick benchmarks from CPU-limited games where everyone already knows AMD mobile chips are weaker than Intel, and use them to argue that Ivy Bridge is total kingshit and everyone should buy it; totally ignoring the fact that battery life on Ivy Bridge vs Trinity for many tasks is nearly the same despite being a 22nm chip vs a 32nm chip, and that the i7-3720QM is a hyperthreaded quad-core with higher clocks on top of that and thus can never be CPU-limited in a game, but which costs three times what the A10-4600M does. Yes, your BMW M3 has a higher top speed than my Miata, but there aren’t many places for you to go over a hundred to show that off; we both corner well, and you paid three times what I did. A massively CPU-limited game like Skyrim is the Nürburgring of games, where the i7/M3 trashes the A10/Miata.

    “You have to admit, Intel made more of an improvement on the GPU from Sandy Bridge to Ivy Bridge than AMD did from Llano to Trinity.” – That is what happens when you have an 80% market share due to anti-competitive business practices that you paid your competitor $2B for in a settlement and which allows you to spend more on research and development than your competitor makes in total revenue.

    Sorry for ranting at you like an asshole. I just wish you would have put the comparison into scale price-wise.

    • Scali says:

      Well, excuse me… but this is not a comparison of Trinity vs Ivy Bridge. It is mostly Trinity vs Llano/Bulldozer (looking at where AMD did or didn’t improve over their previous generation of CPU or APU). Ivy Bridge is just included as a reference in one part, which discusses how the faster GPU of Trinity is getting CPU-limited.

      As for comparing on price… well, Trinity and Llano operate in the same market, so pricing will be roughly equal.
      Comparing to Ivy Bridge? Good luck on that. It depends very much on which application(s) you use to compare. CPU and GPU performance are very different, as are other factors (you’re the one cherry-picking here… comparing an i7-3720QM to an A10 chip, while the i7 isn’t remotely in the same market in any way… price, performance, battery life etc. The A10 operates in the market of the i3’s and perhaps some of the low-end i5’s at best).

      Plenty of websites try to make such comparisons already. I don’t see much point in it. I focus more on the technical backgrounds. I suppose you just aren’t the target audience for my blogs.

      By the way, you forgot the most important part in your AMD fanboy rant: For AMD to compete, they actually have to… compete (as opposed to merely existing).

      • John Dorman says:

        I did not cherry pick that i7 chip. It was in the Anandtech charts you showed, and you compared that monster i7 cpu feeding the HD 4000 graphics to the slower a10 cpu feeding the 7660g.

        “The ASUS N56VM scores about the same as Trinity, with Intel’s HD4000 GPU. Now it goes without saying that the HD4000 GPU is much slower than AMD’s Trinity GPU. But since the Core i7-3720QM CPU is much faster than Trinity’s, it makes up for it.”

        You literally say that the super fast i7 CPU makes up for having a slower GPU.

        And I’ve actually read several of your posts at this point and I think you are quite thorough and insightful, if a bit biased. Focusing strictly on the technical backgrounds means that the technically superior Intel for reasons I described earlier always wins. And If I’m not part of your target audience, then you seem off target, because I am not alone in my skepticism of your impartiality, based on a few of the comments.

        It’s not some kind of secret that AMD is relatively weaker than Intel in x86 performance. But using that to justify that the HD 4000 has decent gaming performance is sideways at best: because the resolution is low, AA and AF are turned down, and the presets are only medium, the certainly passable performance from the HD 4000 in these games is already maxing it out, and thus will not be sustained even at the absolute lowest settings in future games, while the 7660G has a little more room to stretch its legs in that respect.

      • Scali says:

        You are pulling that statement out of context. I’m not saying that a faster CPU will *always* make up for a slower GPU (obviously). I’m just pointing out that there are various games for which the Trinity is not that well-suited, because although it has a fast GPU, its CPU is not powerful enough in those cases. I demonstrate this by comparing against systems with faster CPUs (but that is the only comparison I make. I do not compare them on price, power consumption or anything, they just serve to demonstrate how certain games scale with CPU performance. I would have picked a faster AMD system if possible, but the tested APUs are already AMD’s fastest offerings, so Intel was the only way to see how things would scale with faster CPUs).
        In the context of which I said it, it is perfectly obvious that the i7’s CPU makes up for the slower GPU, the charts prove as much.

        Ah yes, and because a few posters (such as yourself) call me biased, then it must be true, right? (If anything, the common bias is to recommend AMD’s APUs as the ideal gaming solution in every case…)
        So far all the bias is coming from you. You mistakenly interpret this as a Trinity vs Ivy Bridge comparison, which it obviously isn’t.
        I am not justifying the HD4000 whatsoever. I literally say it’s a slower GPU, and I call both AMD’s and Intel’s APUs unbalanced, albeit at opposite sides of the CPU-GPU balance.
        In fact, I am not even trying to compare the two… In the price range of the Trinity, you cannot get Intel CPUs with an HD4000 anyway. I thought that was quite obvious. And where the HD4000 is already quite underpowered for many games, the lower models are just downright pathetic.
        I am merely pointing out that a lot of improvements on the GPU side of Trinity are lost because the CPU side has not improved much. The CPU is holding back the GPU’s potential.

        Aside from that, I’m not sure why you say Intel always wins on technical merits. AMD obviously has the lead in GPU technology, from their acquisition of ATi a few years ago, and their continued development of discrete GPUs and competition with nVidia. Sounds like a pretty simplistic view of the world: “The company with the most money always wins”.

  8. John Dorman says:

    “Sounds like a pretty simplistic view of the world [implied ad hominem]: “The company with the most money always wins”[reductio ad absurdum].”

    I said that Intel spends more on R&D than AMD makes in revenue. That is why they have faster CPUs. AMD simply cannot afford to keep up in lithography and cannot pay for the chip design talent, which is exactly why they started going into heterogeneous computing, hopefully sidegrading into more competitive products, which I think they’re doing well enough. I freely admit that I am pretty biased. But I didn’t become biased towards AMD because I illogically thought they offered me better products; I am biased because Intel used illegal business practices to muscle them out of the market, and that is bullshit I simply will not reward. AMD struck gold with the Athlon 64, which earned it enough market share and revenue to leverage its acquisition of ATi in 2006, while during this entire time Intel was essentially paying off OEMs not to use AMD (Dell got $6B over 5 years). So I can only imagine AMD’s market share, resultant revenue, resultant R&D, and resultant chip performance now if Intel hadn’t cheated. So when others compare them on a presumed equal footing and then praise Intel for making bigger gains in performance, I call people out on it as if they were a racist.

    • Scali says:

      “I said that Intel spends more on R&D than AMD makes in revenue. That is why they have faster CPUs. AMD simply cannot afford to keep up in lithography and cannot pay for the chip design talent, which is exactly why they started going into heterogeneous computing, hopefully sidegrading into more competitive products, which I think they’re doing well enough.”

      Still sounds like you’re saying that the company with the most money always wins. Which is obviously overly simplistic. Having more money and/or more resources will give you an advantage, but is never a guarantee. For example, Intel had pretty much the same advantages over AMD as they do today, when they launched the Pentium 4. But it wasn’t always the fastest CPU out there.
      Likewise, Intel has never made any class-leading GPUs, despite their advantages in budget, manufacturing etc (and in that light it is interesting that Intel finally seems to be making steps forward on the GPU-front, in case you missed that).

      On the other hand, smaller companies have often become quite successful by just having a more innovative product, or finding a hole in the market somehow (in fact, one of AMD’s first successes on the x86 market, the K6, was actually based on a design made by NexGen, a smaller company which AMD bought. It was much better than AMD’s first in-house design, the K5).

      As for doing well enough, well I clearly don’t agree. Trinity makes virtually no advances over Llano in the CPU department (and Llano was already yesteryear’s performance, CPU-wise, being comparable with a Phenom II X4). As a result, the GPU, which is much faster on paper, only gets moderate wins in performance compared to Llano. Perhaps AMD should have offered a 6-core version.

      “I am biased because Intel used illegal business practices to muscle them out of the market”

      I wonder if you are even aware of how AMD muscled themselves into the market in the first place…
      AMD reverse-engineered Intel’s 386 and 486 chips (which classifies as illegal business practices). Intel sued AMD, but eventually lost all claims except for the microcode (yes, AMD’s chips were pretty much 1:1 clones, and even ran Intel’s microcode verbatim), which fell under copyright law. Intel didn’t lose because they weren’t within their rights as a company. No, the judges ruled that x86 had become too important in the industry to be controlled by a single company. So it’s thanks to a form of antitrust regulation that AMD eventually got the x86 license as we know it today. Which is rather ironic… if Intel weren’t as big as they are, AMD would not be an x86 CPU manufacturer today.
      AMD plays dirty, just like all those other companies out there.

      “so I can only imagine AMD’s market share, resultant revenue, and resultant R&D, and resultant chip performance now if Intel hadn’t cheated.”

      I don’t think it would have made much of a difference. AMD was still selling out their entire production, and their prices were every bit as high as Intel’s (remember the original FX series?), so their profit margins would have been very healthy.
      AMD simply could not win more market share because they could not produce the extra chips. In those days, various OEMs simply HAD to sell Pentium 4 systems, because there wasn’t enough supply of AMD chips to keep up with customer demand.

      • Klimax says:

        A tiny correction: originally IBM wanted at least two sources of CPUs in their PCs, so AFAIK they forced Intel to allow clones (that’s why there were IBM-, AMD-, TI- and similarly branded chips). Only after IBM left the market did Intel gain independence with the x86 family of CPUs. Only then did the problem of licensing x86 arise.

      • Scali says:

        Well, let me correct your correction then:
        I specifically mentioned 386 and 486 clones.
        The earlier CPUs by AMD and others were just second-source (no different from how AMD used to outsource some of its production to Chartered, and how they now outsource to Global Foundries and TSMC). They were not clones, they were Intel chips, built under license (and obviously AMD and the others just got access to Intel’s designs, they did not have to reverse-engineer them).
        However, this contract was only for 8088 and 80286 CPUs. After this contract ran out, Intel never renewed it for the 386 and later CPUs, because IBM no longer played a dominating role in the market. As such, no second-source company ever had access to the 386 or newer designs, nor did they have any rights to produce chips based on that design.
        (Later AMD CPUs were no longer direct clones of Intel chips, but independently designed chips merely implementing the same x86 instruction set.)

      • John Dorman says:

        Thank you for making me do a bunch of research and read some of your other posts and comment threads in an attempt to one-up you on something, because I have now realized that you are in fact insanely knowledgeable, and that that would basically be futile. I am sorry for calling you an Intel shill and for my short stint as a vitriol-spewing AMD lover in your comment thread. And thank you for responding so bluntly, thoroughly, and amazingly politely to me, despite the fact that I was saying the same kind of crap you have already responded to dozens of times.

        That said, having done that research I mentioned, and though I clearly don’t know all you know, AMD “stealing” the 386 design and the real impact of the Intel OEM discounts are still pretty gray to me. I would love to see a hilariously dramatized “based on a true story” of the AMD-Intel history.

        Click to access AMD%20Complaint%20vs%20Intel%202005.pdf

        The original filing of the 2005 suit from AMD. Paragraphs 12 through 16 are AMD’s version of events for the 286 contract. Interesting.

        My take on AMD’s history at this point is more sad than anything. I think AMD is/was kind of in a catch-22. They don’t have high enough production capacity to meet all possible demand if and when they do have great chips, which means that they can’t really reap economies of scale to increase their margins. And without those higher margins from scale and greater volume from demand for a great chip, they don’t have the money to invest in developing great chips and to pay for greater production capacity. And on top of this there was Intel flexing with OEMs. At least they are significant enough to keep prices down and force some innovation.

        Oh well, maybe AMD will hit a tipping point with heterogeneous computing and strong-arm another standard onto Intel like they did with AMD64 (lol), but with the turbulence added by mobiles and ARM, and Apple to please (whose market cap is now more than the total wealth of most small countries), who knows. Your thoughts?


      • Scali says:

        An interesting quote there: “AMD’s sponsorship helped propel Intel from the chorus line of semiconductor companies into instant stardom”
        In case you didn’t know, Intel is the company that literally invented the microprocessor. They were the first to offer a complete working CPU on a single chip. This was the 4004, years before the 8086. You can’t get much more stardom than that.
        How was it possible that Intel was the first with a microprocessor? They had the most advanced manufacturing facilities, and were the first to be able to put such complex logic onto a single chip. Sound familiar?

        It’s also funny that the AMD people see a monopoly on an ISA as something unique, while in fact it is quite the opposite: most companies designed and manufactured (or outsourced) their own CPUs, and never opened up the ISA to any third party. Licensing an ISA was the exception, not the rule. It became slightly more common in the 90s, with e.g. ARM and MIPS becoming open architectures (although they had been around as closed architectures for many years already).
        Nearly every CPU company had a de-facto monopoly on their own architecture. Which is no different from any other company having a monopoly on the products they design and build. Coca-Cola has the monopoly on Coca-Cola (but not on all colas).

        As for the 386… At the very least it is a fact that Intel never gave AMD the 386 designs (AMD admits as much), and it is a fact that AMD made the Am386DX and Am386SX chips.
        In fact, AMD had started reverse-engineering and cloning Intel’s chips at a much earlier stage.
        AMD has always tried to feed on Intel’s success, pretty much.

        Anyway, I’ve commented on the ARM vs x86 issue in other blogs. Initially I was skeptical about x86 in embedded/mobile devices, but since they came up with Medfield I think they have a good chance in that market. It’s now up to ARM to see if they can match Intel’s tick-tock pace.

        The heterogeneous computing market is more or less cornered by nVidia+Intel at this point. An important part of heterogeneous computing is software. AMD does not have a strong enough in-house development team to deliver the right tools and frameworks.
        nVidia is years ahead with Cuda, and as a result, most heterogeneous computing software is either aimed at Cuda exclusively, or at least runs best on nVidia GPUs.
        So far we haven’t quite seen a big shift from Cuda to OpenCL. Until such a shift happens, AMD GPUs and APUs aren’t all that useful in most software.
        If and when such a shift happens, nobody seems to know. People said that an open standard like OpenCL would take the world by storm. But that was years ago. Since Ivy Bridge, Intel now supports OpenCL as well. Perhaps that makes a difference. The problem is that Ivy Bridge has an extremely powerful CPU paired with an extremely poor GPU, so most things will run faster on CPU anyway. And when targeting CPU, it’s generally better to just use conventional programming languages rather than OpenCL.

  9. Pingback: AMD Steamroller | Scali's OpenBlog™

  10. HawkHybrid says:

    Points for comment:
    Trinity has 2 MB L2 versus Llano’s 1 MB L2 in single-threaded tests.
    Notebookcheck: Lenovo E525 (Llano) versus E535 (Trinity).
    Power consumption results are similar except in 3DMark06: Llano 32.8 versus Trinity 40.6.
    3DMark06 scores: Llano 3815, Trinity 4208.


  11. Pingback: Haswell Hasarrived | Scali's OpenBlog™

  12. Pingback: AMD Richland | Scali's OpenBlog™
