nVidia gives an answer to AMD

A week after AMD introduced its DirectX 11 hardware, nVidia gave us a briefing on what they have planned for DirectX 11. The answer is ‘nothing’. By that I don’t mean that they have no upcoming DirectX 11 hardware, but rather that what they’re planning goes way beyond the scope of DirectX 11. It looks almost like a pre-emptive strike on Larrabee: they’re going for a complete C++ programming model, a unified memory address space, and integration of debugging and profiling tools into Visual Studio. The lines between GPU and CPU are blurring, especially from a development point of view (no more archaic, simplified languages and tools). If you thought GPGPU was the name of the game with the 8800 series a few years ago, it seems they were only just warming up. They’re going to take it WAY further this time.
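
To give an idea of what a ‘complete C++ programming model’ on the GPU could look like, here is a minimal Cuda sketch. The kernel name and the specific buffer sizes are just made up for illustration; the point is that device code can use C++ features such as templates, which is exactly the kind of thing nVidia is promising:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A templated device kernel: C++ templates compiled for the GPU are
// representative of the 'full C++' programming model being promised.
template <typename T>
__global__ void scale(T* data, T factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1024;
    float host[1024];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float* dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    scale<float><<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    // cudaMemcpy implicitly waits for the kernel to finish.
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("host[0] = %f\n", host[0]);

    cudaFree(dev);
    return 0;
}
```

With a unified address space on top of this, host and device code could eventually share pointers directly instead of copying buffers back and forth, which is where the CPU/GPU line really starts to blur.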

Another really cool feature is that the GPU will now be able to run multiple kernels concurrently. I’ve wondered about this in the past, particularly about how unified shaders handled vertex and pixel shading tasks. Unified shaders can do both, and they can be scheduled as required, but it was never quite clear whether they actually ran concurrently, or whether it was all time-division multiplexing, so to speak. We did know that Cuda worked that way: you couldn’t run Cuda (or PhysX, for that matter) kernels at the same time as graphics; it was strictly one after the other. But not anymore. Multiple kernels can be run concurrently, and the overhead of switching between kernels is reduced as well. All this makes GPGPU ever more useful.
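
On the API side, the natural way to express this is Cuda streams: kernels launched into different streams are allowed to overlap, and it is up to the hardware whether they actually do. A sketch, assuming the stream API is how the new concurrency gets exposed (the two kernels are trivial placeholders standing in for, say, a fluid solver and a rigid-body solver):

```cuda
#include <cuda_runtime.h>

// Two placeholder kernels. On older hardware these would execute
// strictly one after the other, even in separate streams; hardware
// with concurrent kernel execution can overlap them.
__global__ void kernelA(float* x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
}

__global__ void kernelB(float* y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] *= 2.0f;
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    // Launches in different streams have no ordering dependency
    // between them, so the hardware is free to run them concurrently.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    kernelA<<<(n + 255) / 256, 256, 0, s1>>>(x, n);
    kernelB<<<(n + 255) / 256, 256, 0, s2>>>(y, n);

    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```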

So of course they also wanted to show off some GPGPU work. They had some raytracing, which wasn’t actually done on the new GPU yet (rumour has it that they only have 7 working chips at the moment, and even fewer working cards), but this physics demo DID run on the new chip:

I thought this was just amazing. Fluid simulations were something you could only do on supercomputers a mere 10 years ago, and now you can do them on a single PC, in real time. That’s the power of GPGPU right there. People who were always whining that physics could just be done on the idle cores of their quad-core CPU, take a good look at this. This is something that CPUs won’t be able to do for a long time. Not until parts of a GPU get integrated into the CPU, at which point it’s not really a CPU anymore, is it?

So what about graphics then, one might ask. Well, nVidia didn’t really say much about graphics. We do know from the raw specifications that it’s going to get a 384-bit bus and plenty of bandwidth through GDDR5 memory, and that the chip will be around 3 billion transistors, so they’re going to try to fit as much horsepower on it as they can, just like they did with the G80 and GT200 before it. So it should do just fine in graphics as well.

The main question at this point is: can nVidia pull it off? It sounds like a great chip from a developer’s point of view, with lots of programmability and elegant, powerful tools. But will it also perform, and will it be affordable enough? At this point there is no way to tell. There are plenty of horror stories going around, based on the trouble TSMC has had with its 40 nm process, claiming that nVidia’s chip is so large and complex that it is doomed to fail. We also don’t know exactly when they will be able to put this product on the shelves. But if they pull this one off, they’re going to have a killer chip on their hands, one that both AMD and Intel need to seriously worry about. Anandtech said they expect to have review samples in about two months. I hope they are right, because I can’t wait to see how this chip turns out. nVidia sure knows how to get developers excited about their products.



2 Responses to nVidia gives an answer to AMD

  1. CarstenS says:

    Hey Scali,

    One thing’s wrong at least ;-). Yes, you can run multiple kernels on Fermi, but they have to belong to the same context. So you cannot mix, for example, PhysX and graphics, but you can run a fluid solver and a rigid-body solver, which wasn’t possible before.

    BTW – I could not answer your PM over at B3D. Is your inbox full or did you turn off PMs?

  2. Scali says:

    Depends on what a context is, doesn’t it?

    What about D3D, for example? Would Compute Shaders belong to the same context as other shaders? From the side of D3D it looks that way, at any rate. Which brings us to the next question: you can tie a Cuda context to a D3D9/10 or OpenCL context, for sharing buffers etc. Wouldn’t that also be handled the same way?

    I think in the end the hardware doesn’t care. It just runs multiple kernels at the same time. It’s up to the software and how it feeds those kernels.
