In a recent blogpost, after dealing with the nasty antics of a deluded AMD fanboy, I already discussed what we should and should not expect from AMD’s upcoming Radeon RX480.
Today, the NDA was lifted, and reviews are appearing everywhere on the internet. Cards are also becoming available in shops, and street prices are becoming known. I will make this blogpost very short, because I really can’t be bothered:
I told you so. I told you:
- If AMD rates the cards at 150W TDP, they are not magically going to be significantly below that. They will be in the same range of power as the GTX970 and GTX1070.
- If AMD makes a comparison against the GTX970 and GTX980 in some slides, then that is apparently what they think they will be targeting.
- If AMD does not mention anything about DX12_1 or other fancy new features, it won’t have any such things.
- You only go for an aggressive pricing strategy if you don’t have anything else in the way of a unique selling point.
And indeed, all this rings true. Well, with point 3 there is a tiny little surprise: AMD does actually make some vague claims about a ‘foveated rendering’ feature. But at this point it is not entirely clear what it does, how developers should use it, let alone how it performs.
So, all this shows just how good nVidia’s Maxwell really is. As I said, AMD is one step behind, because they missed the refresh-cycle that nVidia did with Maxwell. And this becomes painfully clear now: even though AMD moved to 14nm FinFET, their architecture is so much worse in efficiency that they can only now match the performance-per-watt that Maxwell achieved at 28 nm. Pascal is on a completely different level. Aside from that, Maxwell already has the DX12_1 featureset.
All this adds up to Polaris being too-little-too-late, which has become a time-honoured AMD tradition by now. At first it was only the CPU department, but lately the GPU department appears to have been reduced to the same state.
So what do you do? You undercut the prices of the competition. Another time-honoured AMD tradition. This is all well and good for the short term. But nVidia is going to launch those GTX1050/1060 cards eventually (and rumour has it that it will be sooner rather than later), and then nVidia will have the full Pascal efficiency at its disposal to compete with AMD on price. This is a similar situation to the CPU department again, where Intel’s CPUs are considerably more efficient, so Intel can reach the same performance/price levels with much smaller CPUs, which are cheaper to produce. So AMD is always on the losing end of a price war.
Sadly, the street prices are currently considerably higher than what AMD promised us a few weeks ago. So even that is not really working out for them.
Right, I think that’s enough for today. We’ll probably pick this up again soon when the GTX1060 surfaces.
lol, it’s good that we got all the gloating out of the way :p. Just messing with you mate. Anyhow, in all honesty, the benchmarks turned out better than what I expected after reading a particular HardOCP article. 🙂
Incidentally, I’m a little surprised that none of the big tech sites bothered to hook up an HMD to their review samples, considering how Polaris is supposed to bring VR to the masses.
Subtlety doesn’t really work with AMD fanboys. I figured we just have to smash the facts into their faces with a big sledgehammer. AMD is *not* ahead of nVidia technologically (lol console wins), not by a long shot. For some reason, I could easily tell that last generation, but many were in denial. Polaris should make it pretty obvious to anyone, I would say. Those two-year-old 28 nm GTX970 cards are still a hard act to follow for AMD with their latest tech. And we’re talking every single area here… performance-per-watt, performance-per-dollar, feature parity…
I wasn’t the one who was being biased or had an agenda. I just told things as they are, and it’s not that pretty for AMD. As I said, the world is not fair, AMD and nVidia are not equals.
It’s the people who desperately try to make AMD look like nVidia’s equal who are biased. But they don’t seem to understand the difference, or don’t want to admit to it, because they have an agenda.
http://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-9.html
All that just so they can market it as only having a single 6-pin connector.
Ouch… someone is trying a bit TOO hard to stay in the game…
…and “the catch” emerges. They obviously did that to make it appear more efficient than it actually is. Good heavens, an overclocked Tri-SLI system would probably melt the soldering on the mobo. lol
*Tri-CF, I meant. 😦
Well, let’s think of theories why this is…
Let’s assume they have always planned the cards to have the 6-pin connector. Changing the PCB to fit an 8-pin connector at the last moment was not an option.
That would imply that the new 14 nm FinFET dies didn’t perform as well as AMD had hoped, and they had to push the clockspeeds further than originally planned, in order to actually get that 970-level performance. I guess pushing for the ultimate bang-for-the-buck was more important to AMD than you know… staying within PCI-e power specs.
Sadly that means that as usual, you get what you pay for… Yes, it’s cheap, and yes it performs… But it’s also going to push your motherboard/PSU beyond their intended design parameters.
If you also go for a bang-for-the-buck motherboard and PSU, they are not likely to be very tolerant of this.
http://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-9.html
“We skipped long-term overclocking and overvolting tests, since the Radeon RX 480’s power consumption through the PCIe slot jumped to an average of 100W, peaking at 200W. We just didn’t want to do that to our test platform.”
http://www.hardocp.com/article/2016/05/27/from_ati_to_amd_back_journey_in_futility/
Kyle was right after all.
Of course he was. He’s not an idiot. Nor am I. I guess Kyle and I are alike in some ways. If we post something, we stand behind that 100%, because we know it to be true.
Heh, AMD fanboi damage control is running rampant on forums now… Claiming that games such as Tomb Raider aren’t ‘true’ DX12 games because they don’t use async compute and whatever…
Yea, I get it… in the AMD universe, async compute is the *only* DX12 feature there is. nVidia and Intel, however, also support other stuff, such as conservative rasterization and rasterizer ordered views. Tomb Raider actually makes use of such features. Well, it would, if the RX480 supported them… And people linking to that AdoredTV trash on YouTube, lol! Also claiming that nVidia is holding up DX12. Oh really? As some of you know, I have posted hard evidence of NV being far more involved with the development of DX12 than AMD was.
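For anyone who’d rather verify than argue: below is a minimal sketch (plain D3D12; essentially what DXCapsViewer shows you in GUI form) that queries exactly those caps on whatever card is installed. Maxwell and Pascal report conservative rasterization Tier 1+ and ROV support; the RX480, going by the reviews, does not.

#include <windows.h>
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main()
{
    // Create a device on the default adapter; FL11_0 is enough just to query caps.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device), (void**)&device)))
        return 1;

    // A single options struct covers the features under discussion.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    std::printf("Conservative rasterization tier: %d\n", (int)opts.ConservativeRasterizationTier);
    std::printf("Rasterizer ordered views:        %s\n", opts.ROVsSupported ? "yes" : "no");
    std::printf("Tiled resources tier:            %d\n", (int)opts.TiledResourcesTier);

    device->Release();
    return 0;
}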
I never understood your hate against AMD. Aren’t we talking about a very small company (in 2016), which accounts for only about 20% of GPU sales?
What hate? I’m just stating facts, aren’t I?
If anything, the release of Polaris proves that my earlier statements were not “hate against AMD”, but merely an accurate reflection of reality.
ASYNC COMPUTE!!!
Point 1 was not completely unreasonable, as there have been some AMD cards that consumed less than their rated power in games. Unfortunately, this now makes one of AMD’s claims about Polaris questionable. Namely, they claimed 2.8x energy efficiency when comparing the RX 470 to the R9 270X. It seems that they just used the cards’ rated power consumption figures in their calculations. However, the R9 270X consumes around 120-130 Watts in most games instead of the rated 180 W. If the RX 470 behaves like the RX 480, i.e. power consumption in games is close to the rated power, the difference in energy efficiency is much less than what was promised.
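Back-of-the-envelope, taking ~125 W as the 270X’s actual draw (the middle of that 120-130 W range): 2.8 × (125 / 180) ≈ 1.9. So if the RX 470 really does draw close to its rated power in games, the real-world efficiency gain would be closer to 1.9x than the advertised 2.8x.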
I think it was unreasonable though… I mean, there were people claiming that it would be more like 100-120W in practice. That certainly is not the case. If that were true, AMD would surely have advertised a figure well below 150W TDP, given that both the 970 and the 1070 operate in that area.
I guess AMD made those slides before they knew about the 1070. It was probably a shock to them that it was only 150W TDP. They probably thought “Well, the 970 is 145W TDP, we can advertise ours as 150W TDP, and throw in a bit more performance and the 8 GB, then we’ll look good”.
But after the 1070, everyone expects a 150W TDP card to perform WAY better than a 970. You’d expect something like 120W max for a 970-ish card at 14/16nm. We’ll see what the 1050/1060 do soon.
And in fact, as we’ve seen, AMD doesn’t even manage to stay within their power envelope… breaking PCI-e spec (the slot itself is only rated for 75W). nVidia just put 8-pin or 2×6-pin connectors on the 970 and 1070 so they don’t have that problem in the first place, even if they do overshoot the TDP from time to time.
Given what we had been told about 14 and/or 16 nm, one might have expected even a simple Hawaii die-shrink to consume well below 150 W.
“I guess pushing for the ultimate bang-for-the-buck was more important to AMD than you know… staying within PCI-e power specs.”
Thanks to their R&D cutbacks they’re reduced to being a one-trick pony: their Fury-X overshot the thermal envelope, with similarly low OC headroom. GCN is turning into AMD’s NetBurst. The inevitable AIB cards will alleviate most of these engineering faults, but even they can’t add new features or significantly improve power usage.
Funny… Redneckerz was ‘educating’ us on GCN only a few weeks ago, saying it was such a ‘forward looking’ architecture. Where is Redneckerz anyway? He hasn’t posted here in a while… Wonder if he regrets the stupid things he posted earlier 🙂
🙂 You’ll probably see him again, spouting the same nonsense under a different ID. Right now he must be very busy with his cheerleading obligations, in maximum damage control mode, “give me an ‘A’, give me an ‘M’…” lol.
Yea, I like how what they put on the internet stays on the internet. Just like with Randy Allen and John Fruehe for example. AMD fanboys could argue all they wanted, up to the product launch and reviews. But once those were out, boy did it become painfully obvious how moronic they are 🙂 It only helps to strengthen my position. This is another one that Redneckerz can add to his list of “Everybody said A and Scali said B, but Scali was right anyway”.
Not really. This comment of yours clearly has a provocative intent, as does the first paragraph of this blog and your blog post “The damage that AMD marketing does”. (And it worked, considering I made a response, congratulations.) I am all for taking up arms, but in a 1v1 scenario. Not like this, where people go and harass others who have nothing to do with it, and then draw conclusions from there (as you did). It just confirms the image that you always want to “win” an argument, no matter how silly it is. That is where I draw the line.
Conclusion on the card remains the same: good mid-range performance, which is what the majority of gamers out there are after. The 8 GB is a nice extra.
@ Qwerty: You are right on one thing: he would see me again, with this comment. I don’t use alternative names or IDs though, unlike the person you seem to agree with 🙂
We tried that before, remember? A discussion over private messages is even more mind-numbing than your public posts.
You didn’t understand the first thing I tried to explain, about how games with ‘GI’ solutions without conservative raster use a lot of static, pre-baked voxelmaps to get acceptable performance, as opposed to a solution like VXGI where all geometry can be fully dynamic in realtime.
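To make that concrete, in D3D12 conservative rasterization is literally a one-line switch on the rasterizer state. A minimal sketch (assuming the d3dx12.h helper header from the D3D12 samples; the function name is just for illustration):

#include <d3d12.h>
#include "d3dx12.h" // helper header shipped with the D3D12 samples

// Sketch: switch a graphics pipeline description to conservative rasterization.
// Every triangle that touches any part of a pixel now produces a fragment, so
// thin or tiny geometry can no longer slip between sample points. That is what
// lets a VXGI-style voxelizer handle fully dynamic scenes in a single pass,
// instead of leaning on static, pre-baked voxel maps.
void EnableConservativeRaster(D3D12_GRAPHICS_PIPELINE_STATE_DESC& desc)
{
    desc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);
    desc.RasterizerState.ConservativeRaster =
        D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON; // needs Tier 1+ hardware, or PSO creation fails
}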
I don’t know why you’d even want to try talking to an experienced graphics dev 1:1, you clearly lack the knowledge to have any meaningful conversation on the topic. I do know that no experienced graphics dev wants to talk to you 1:1, because it’s just them trying to explain things that fall on deaf ears, while you cry AMD fanboyisms and personal insults.
Says the guy who specifically comes to my own blog to post dozens and dozens of walls-of-text, addressed at me personally.
No, I don’t need to “win” arguments. What I do want is to humiliate you and destroy your online persona forever (or well, you’re doing the destruction yourself, with all the walls-of-text full of retarded/paranoid statements, I’m just enabling you). You’ve earned it, with your behaviour. You need to be taught a lesson.
AMD squad needs to apologize to Kyle, he was right on everything.
I also like how AMD fanboys try to argue about price… “There’s no way they can make the 1060 as cheap as the RX480, because the next step up, the 1070, costs …”
Well, let’s point out the obvious here:
1) The price of the 1070 is not a fixed number. nVidia determined those prices. They can easily lower the 1070 to make the gap with 1060 smaller, if they so choose (they probably had that planned all along, and launched the 1070 and 1080 relatively high-priced simply because they could, and will gradually reduce prices once production matures. That is generally the pattern… 970/980 were a bit of an exception since they were launched on a mature production process, and there was no reason to lower prices anyway).
2) They could also introduce extra GPUs between a 970/RX480-level card and the 1070, because even today we have the 980 sitting in between there, performance-wise. E.g., they could make a 1060Ti that sits between the two.
3) They could also just have the 1060 at competitive prices with RX480, the 1070 where it is now, and nothing in between. Why the heck not?
At any rate, you can be sure that nVidia will launch a card that will compete directly against the RX480 in terms of price and performance. My guess is that they will go for the same price, but better performance.
I’d wager it will be a bit more expensive than the RX 480, due to current circumstances they can IMHO get away with higher prices.
Depends on whether nVidia wants to just make a lot of money, or whether they want to take the prestige of the bang-for-the-buck crown away from AMD.
The latter would hurt AMD more. Bang-for-the-buck is everything AMD stands for. Intel did exactly that with the Core 2 Duo E6600. They went straight for AMD’s jugular, and priced it at about half of what AMD charged for a similarly performing Athlon64. AMD is still feeling that one.
“At any rate, you can be sure that nVidia will launch a card that will compete directly against the RX480 in terms of price and performance. My guess is that they will go for the same price, but better performance.”
Knowing Nvidia’s MO, the 1060 would be Y% better than the 480 and priced X·Y% higher, where X ≥ 1.2 :p. Well, because they can. I wonder where we would be today in terms of graphics, if Nvidia had to compete against a worthier adversary, like saaay Samsung owning ATI.
It seems nVidia is still going all-out in terms of R&D (unless AMD and other GPU devs are really that weak). So I guess the main thing would be that nVidia would find itself in a price war, and would have to lower prices.
Mind you, this implies that:
1) nVidia would have less marketshare, so lower volumes.
2) nVidia would have less profit margin per sold unit.
Which means that they’d have a lot less money coming in for R&D. In which case, if I’m right about nVidia actually trying their best to develop the most awesome GPUs possible, we may actually have gotten less advanced GPUs by now. Because you’d have nVidia on a lower budget, and their competitor basically trying to re-invent the same wheel as nVidia, so progress would be less effective than with a single company having the total budget of both.
“1) The price of the 1070 is not a fixed number. nVidia determined those prices. They can easily lower the 1070 to make the gap with 1060 smaller, if they so choose [snip]”
Very true. Also, comparing prices at launch is actually pointless, because you’re basically guaranteed *not* to get a freshly-released card at MSRP. I’ve browsed a few online outlets, and in all of them the RX 480 is way more expensive than most 970 models. If you want 970 levels of performance, why not get a cheaper 970 instead? Maybe some people are simply impulse buyers, or just fanboys.
And if they concede that it’s more expensive, they’ll point to VR. As if that’s a deciding factor in the eyes of most people.
With that said, I believe the 480 will eventually become a nice card to upgrade an aging rig, but nothing more.
Let’s hope that Zen CPUs are at least passable, but seeing how it’s going so far, I doubt they will be.
Seems like you were right about the DX12 feature level; the wiki only mentions DX 12.0:
https://en.wikipedia.org/wiki/AMD_Radeon_RX_400_series
Someone put 12.1 there, and [citation needed] was added 🙂
Hope springs eternal I guess.
Heise is the first review where I see them mention it explicitly:
http://www.heise.de/newsticker/meldung/Polaris-ist-da-AMD-Grafikkarte-Radeon-RX-480-mit-hoher-Spieleleistung-und-einem-Makel-3251042.html
I think these guys actually bothered to test it, unlike the 12_1 rumours I find on some other sites.
It’s also not in AMD’s slide-deck. Note how they put FL11_1 in GCN gen 1, and FL12_0 in gen 2, then no mention of FL at all for gen 3 and 4 (presumably because they’re all still on FL12_0).
@Redneckerz
Well, well, speak of the devil… 😀 No pearls of wisdom about the power and overclocking concerns of that good mid-range performance, or the fact that it (Polaris) is still limited to DX12.0? Or do facts not go well with your perception of reality?
I quote: “GTX 980 levels and beyond”…
Also, the value-for-money angle seems to fall down a bit at the moment, with retail prices being much higher than AMD’s projected prices.
Sure, it’s still a slightly better deal than some two-year-old 970 OC cards (if you don’t mind missing out on DX12_1 of course)… but only slightly. Which is rather depressing for an all-new 14 nm FinFET architecture.
Especially since Pascal was pretty much a landslide in the market segment it launched in. Both prices and power consumption dropped immensely. Had the RX480 done that in the 970/980 bracket, then we would have had something to be excited about. Now we’re just waiting for nVidia to finish the job with the 1050/1060.
Here’s your power/overclocking drama: https://m.reddit.com/r/Amd/comments/4qfwd4/rx480_fails_pcie_specification/ 🙂
Even one of AMD’s sockpuppets is in there for damage control… heh. Dudes, you should have done that during QA at the factory.
“…with retail prices being much higher than AMD’s projected prices.”
That’s probably a consequence of demand surpassing supply (in a bad way), which isn’t unusual for AMD products.
By the way, I read this beautiful exchange somewhere… Some AMD fanboy was ragging on a review, one of those walls-of-text that went something like: “Your benchmarks are unfair! This game is pro-nVidia because it uses GameWorks, and in that game you used this-and-that setting, which is unfair to AMD… and blahblah, and you didn’t use game X which is more fair for AMD etc etc”.
Some guy responded with a beautiful one-liner:
“Well, reading your post I’m thinking, why buy AMD at all, since it doesn’t seem to handle any games properly” 🙂
I guess there’s truth in that 🙂
There are a bunch of iffy online sources that just feed the kind of half-baked misinformation that the fanboys thrive on. Case in point: “PC Gamer” (the site that works with AMD for E3) gave the reference 480 a score of 94%. For the average consumer, these sources are a fairly reliable “authority”.
http://videocardz.com/61753/nvidia-geforce-gtx-1060-specifications-leaked-faster-than-rx-480
No surprise there.
Will be exciting to see if they can get that in the same price bracket as the RX480. That might be quite a challenge for Nvidia, as a lot of “advised” prices end up considerably higher. Another problem would be supply and demand. As awesome as the new GTX 1070/1080s are, they aren’t available in big numbers, and you can debate the price of the FE.
And we’re back to price again…
Did AMD really make the reference design/cooler that bad? They say they did it because they wanted to make custom board makers look better. What does that say about AMD? Is this how you market your cards to your fans?
Graphics programmers, if you want to do really interesting next-gen work with Conservative Rasterization, Rasterizer Ordered Views & Tiled Resources Tier 3, forget about it on inferior hardware from AMD, because it simply can’t support it.
Ah, that’s the DXCapsViewer money shot we were looking for 🙂
I guess that came from this review: http://www.bitsandchips.it/recensioni/9-hardware/7184-amd-radeon-rx-480-8gb?start=5
Note also that some AMD fanboi actually put 12_1 support in the Wiki article, until it finally got corrected. Spreading lies everywhere.
https://community.amd.com/thread/202410
bwahaha AMD finally admits, in their typically obfuscated style:
http://www.anandtech.com/show/10465/amd-releases-statement-on-radeon-rx-480-power-consumption
“As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU’s tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016).”
I love how the blame is subtly shifted onto the VRAM, and its speed is called “unprecedented”.
Pretty sure the 1070 set the precedent for 8Gbps GDDR5 🙂
How long before AMD goes bankrupt?
No idea, I thought the same thing when Bulldozer launched… somehow they bleed cash, but never die. I wonder though… one of the things Kyle Bennett said in his editorial on AMD-to-ATi was that the RTG wanted to split off from AMD and be bought by another company, and rumour had it that Intel was the prime candidate. If that is true (which I doubt, since Intel’s GPU department is very much up-to-date with the latest technology), then after Polaris they would probably go… “Erm, no thanks, we can do that better ourselves” 🙂
Scali, it seems you’re smart and deep into these things (that this blog is about), so I have one question for you: is it worth buying a GTX 1080 over a GTX 1070? I mean in terms of price/performance ratio, is it worth it? I sold my previous card (a GTX 780 Ti) [it was already too weak for modern games] and am thinking of buying one of those two. Here are my specs; I use my PC mainly for gaming and I want to maintain a constant 60 fps in modern games:
CPU: Intel Core I7 3770k
MOTHERBOARD: MSI Z77A-G45 Gaming
GPU: Intel HD Graphics (temporary)
RAM: 16GB G.Skill Trident X
STORAGE: Crucial M500 SSD 120GB + 2TB HDD + 4TB HDD
PSU: XFX PRO1050W Black Edition (80+ Gold)
COOLER: Thermaltake water 2.0 (liquid cooling)
CASE: Cooler Master HAF 912 Plus
MONITOR: Philips 298P4 29″ UltraWide
KEYBOARD: Corsair Vengeance K70
MOUSE: A4Tech Bloody V7
OS: Windows 7 64bit
Thx in advance.
That is a very personal question. Strictly speaking the 1070 has quite a bit better performance-per-dollar: https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1070/26.html
However, if you can afford the 1080, perhaps you think the extra money is worth the extra performance, because you get a smoother gaming experience, and your video card may last just that bit longer. But everyone has to decide for themselves how much that is worth.
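To put some rough numbers on it (launch MSRPs, which may not match current street prices): the 1080 listed at $599 versus $379 for the 1070, so roughly 58% more money for somewhere around 20-25% more performance in typical games, which works out to about 20% worse performance-per-dollar for the 1080. Whether that premium is worth it for locked 60 fps on an ultrawide is exactly the personal part.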