Intel tries its hand at a discrete GPU again?

As you may have heard a few months ago, Intel hired Raja Koduri, the former head of AMD’s Radeon Technologies Group. The announcement at the time already read:

In this position, Koduri will expand Intel’s leading position in integrated graphics for the PC market with high-end discrete graphics solutions for a broad range of computing segments.

But at the time it was unclear what exactly was meant by ‘discrete graphics solutions’, or what timeframe we were talking about.
Now comes the news that Intel has also hired Chris Hook, former Senior Director of Global Product Marketing at AMD.
And again, the statement says:

I’ll be assuming a new role in which I’ll be driving the marketing strategy for visual technologies and upcoming discrete graphics products.

So, there really is something discrete coming out of Intel, and probably sooner rather than later, given that they are already thinking about how to market the technology.

See also this article at Tom’s Hardware.

I am quite excited about this for various reasons. It would be great if NVIDIA faced new competition on the (GP)GPU front. Also, Intel was and still is Chipzilla: it has the biggest and most advanced chip production facilities in the world. I’ve always wondered what an Intel GPU would be like. Even if their GPU design isn’t quite as good as NVIDIA’s, their manufacturing lead could tip the balance in their favour. I’ve also argued that although Intel GPUs aren’t that impressive in terms of performance, you have to look at what these chips are: Intel has always optimized its GPUs for minimum power consumption and minimum transistor count, so they only have a handful of processing units, compared to the thousands found on high-end discrete GPUs. The real question for me has always been: what would happen if you took Intel’s GPU design and scaled it up to a high-end discrete GPU’s transistor count?

Perhaps we will see the answer in the coming years. Another thing I pointed out some years ago is that Intel appears to have changed course in terms of drivers and feature support. In the late 90s and early 2000s, Intel’s GPUs really were minimal in every sense of the word. When DirectX 10 came around, however, Intel was reasonably quick to introduce GPUs supporting the new featureset. Sadly it still took months for the first DX10 drivers to arrive, but arrive they did. It would appear that Intel had ramped up its driver department: DX11 was a much smoother transition, and when DX12 came around, Intel was involved in the development of the API and had development drivers publicly available quite soon (well before AMD). Intel also gave early demonstrations of DX12 on its hardware, which was actually the most feature-complete at the time (DX12_1, with higher tier support than NVIDIA in some areas).

Let’s wait and see what they will come up with.


4 Responses to Intel tries its hand at a discrete GPU again?

  1. Thomas says:

    I don’t understand the feature-completeness of Intel chips. It’s not like you can use an integrated GPU for the latest games with the most advanced graphics.

    • Scali says:

      Intel may argue that that’s no reason not to support those features.
      Mind you, if their plan was to create a discrete GPU, then the feature-completeness makes a lot of sense.

  2. Roxor128 says:

    I am reminded of the old Project Larrabee that never saw the light of day as the graphics card it was originally intended to be, but instead became the Xeon Phi line of accelerator cards.

    The idea of a card with a bunch of Pentium-like x86 cores implementing nearly everything graphics-related in software (I think texture filtering was one of the few exceptions) was pretty interesting.

    I wonder if Intel is taking a second shot at that idea this time, or just scaling up their existing GPU technology to fill a card.

    • Scali says:

      Indeed… Back in the days of Larrabee, they wanted to replace their iGPU line with Larrabee derivatives at some point.
      In practice, however, Larrabee performed too poorly, while the iGPUs made a huge step in performance and functionality.

      Intel has always had quite a ‘software’ approach to GPUs anyway. They would perform various operations, such as polygon setup and clipping, via custom shader programs, where other GPUs had hardwired logic.

      I would say there’s a point in the near future where both will meet. GPUs get ever more generic and programmable, and x86 becomes more parallel and SIMD-oriented.
      But I have no idea whether that point is being reached now. I would personally expect Intel to take their current GPUs and scale them up to discrete units, and perhaps migrate the technology to an x86-like instructionset over time.
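
      To sketch what that ‘software approach’ looks like (a minimal illustration of my own, not actual Intel or Larrabee code): a software renderer maps neighbouring pixels onto SIMD lanes, so one operation shades several pixels at once. Plain C here for readability; a real Larrabee-style implementation would use x86 SIMD intrinsics:

```c
#include <assert.h>

/* Hypothetical sketch: a software "pixel shader" that modulates four
 * pixel intensities at once, the way an x86-based renderer would map
 * pixels to SIMD lanes. One operation is applied across all lanes. */
#define LANES 4

/* Multiply each pixel intensity (0..255) by a fixed-point light
 * factor (256 == 1.0), clamping to 255 -- the clamp mirrors the
 * saturating behaviour of SIMD pack instructions. */
static void shade_pixels(int dst[LANES], const int src[LANES], int light)
{
    for (int lane = 0; lane < LANES; ++lane) {
        int v = (src[lane] * light) >> 8;  /* fixed-point multiply */
        dst[lane] = v > 255 ? 255 : v;     /* saturate to 8 bits */
    }
}
```

      Written this way, the inner loop is trivially data-parallel: a compiler (or a hand-written intrinsics version) can execute all four lanes in a single SIMD instruction, which is exactly the sense in which x86 and GPUs are converging.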
