Intel tries its hand at a discrete GPU again?

As you may have heard, a few months ago Intel hired Raja Koduri, former GPU designer at AMD’s Radeon division. The statement at the time already read:

In this position, Koduri will expand Intel’s leading position in integrated graphics for the PC market with high-end discrete graphics solutions for a broad range of computing segments.

But at the time it was unclear what exactly they meant by ‘discrete graphics solutions’, or what timeframe we were talking about.
But now there is news that Intel has also hired Chris Hook, former Senior Director of Global Product Marketing at AMD.
And again, the statement says:

I’ll be assuming a new role in which I’ll be driving the marketing strategy for visual technologies and upcoming discrete graphics products.

So, there really is something discrete coming out of Intel, and probably sooner rather than later, if they are thinking about how to market this technology.

See also this article at Tom’s Hardware.

I am quite excited about this for various reasons. It would be great if NVIDIA would face new competition on the (GP)GPU-front. Also, Intel was and still is Chipzilla. They have the biggest and most advanced chip production facilities in the world. I’ve always wondered what an Intel GPU would be like. Even if their GPU design isn’t quite as good as NV’s, their manufacturing advantage could tilt things in their favor. I’ve also said that although Intel GPUs aren’t that great in terms of performance, you have to look at what these chips are. Intel always optimized their GPUs for minimum power consumption and minimum transistor count. So they only had a handful of processing units, compared to the thousands of units found on high-end discrete GPUs. The real question for me has always been: what would happen if you were to take Intel’s GPU design, and scale it up to high-end discrete GPU transistor count?

Perhaps we will be seeing the answer to this in the coming years. One other thing I pointed out some years ago was that Intel appeared to have changed course in terms of drivers and feature support. In the late 90s and early 2000s, Intel really had very minimal GPUs in every sense of the word. However, when DirectX 10 came around, Intel was reasonably quick to introduce GPUs with support for the new featureset. Sadly it still took months to get the first DX10 drivers, but they did eventually arrive. It would appear that Intel had ramped up their driver department. DX11 was a much smoother transition. And when DX12 came around, Intel was involved with the development of the API, and had development drivers publicly available quite soon (way sooner than AMD). Intel also gave early demonstrations of DX12 on their hardware. And their hardware actually was the most feature-complete at the time (feature level 12_1, with higher tier support than NV for some features).
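As an aside, ‘feature levels’ and ‘tiers’ are not just marketing terms; an application can query them directly from the driver. The following is a minimal sketch of my own (assuming a Windows SDK with d3d12.h, not code from Intel or anyone else) that creates a device on the default adapter and prints a few of the capability fields that together make up feature level 12_1:

    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <cstdio>

    #pragma comment(lib, "d3d12.lib")

    using Microsoft::WRL::ComPtr;

    int main()
    {
        // Create a device on the default adapter; 11_0 is the minimum
        // feature level that D3D12CreateDevice accepts.
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
        {
            printf("No D3D12-capable device found.\n");
            return 1;
        }

        // Feature level 12_1 is largely defined by conservative rasterization
        // and rasterizer-ordered views (ROVs); resource binding and tiled
        // resources have their own tier numbers on top of that.
        D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
        if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                                  &options, sizeof(options))))
        {
            printf("Resource binding tier:           %d\n", (int)options.ResourceBindingTier);
            printf("Tiled resources tier:            %d\n", (int)options.TiledResourcesTier);
            printf("Conservative rasterization tier: %d\n", (int)options.ConservativeRasterizationTier);
            printf("Rasterizer-ordered views:        %s\n", options.ROVsSupported ? "yes" : "no");
        }
        return 0;
    }

A 12_1 part has to report conservative rasterization (tier 1 or higher) and ROV support; the resource binding and tiled resources tiers are reported separately on top of that, which is where the per-vendor tier differences tend to show up.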

Let’s wait and see what they will come up with.


7 Responses to Intel tries its hand at a discrete GPU again?

  1. Thomas says:

    I don’t understand the feature-completeness of Intel chips. It’s not like you can use an integrated GPU for the latest games with the most advanced graphics.

    • Scali says:

      Intel may argue that that’s no reason not to support those features.
      Mind you, if their plan was to create a discrete GPU, then the feature-completeness makes a lot of sense.

  2. Roxor128 says:

    I am reminded of the old Project Larrabee that never saw the light of day as the graphics card it was originally intended to be, but instead became the Xeon Phi line of accelerator cards.

    The idea of a card with a bunch of Pentium-like x86 cores implementing nearly everything graphics-related in software (I think texture filtering was one of the few exceptions) was pretty interesting.

    I wonder: is Intel taking a second shot at the idea this time, or are they just scaling up their existing GPU technology to fill a card?

    • Scali says:

      Indeed… Back in the days of Larrabee, they wanted to replace their iGPU line with Larrabee derivatives at some point.
      In practice, however, Larrabee performed too poorly, and the iGPUs made a huge step in performance and functionality.

      Intel has always had quite a ‘software’ approach to GPUs anyway. They would perform various operations, such as polygon setup and clipping, via custom shader programs, where other GPUs had hardwired logic.

      I would say there’s a point in the near future where both will meet. GPUs get ever more generic and programmable, and x86 becomes more parallel and SIMD-oriented.
      But I have no idea if that point will be reached now. I would personally expect Intel to take their current GPUs and scale them up to discrete units, and perhaps migrate the technology to an x86-like instruction set over time.

  3. Marv says:

    “I am quite excited about this for various reasons. It would be great if NVIDIA would face new competition on the (GP)GPU-front. Also, Intel was and still is Chipzilla. They have the biggest and most advanced chip production facilities in the world.”

    Yeah, I know this is several months late for a response but …

    There’s currently no reason to pin our hopes on Intel becoming competitive with Nvidia in professional compute, or much reason to believe they even want to. Having the biggest and most advanced chip production facilities won’t mean much if they don’t have a sane compute API platform to pair it with. There are several applications that don’t work with their OpenCL implementation. They should at the very least bring out a decent OpenCL implementation, or bring out their own proprietary compute API platform like CUDA, to have a better chance of realizing that vision. Their progress in that area will quickly stall if they don’t improve their compute API situation very soon and start offering more developer support for GPU compute acceleration, so that more applications actually run with hardware acceleration …

    That being said, I won’t look down on Intel if they don’t pursue this path down the line, because professional compute isn’t limited to GPU compute. Given their sub-par track record with the single GPU compute API platform they’ve handled thus far, they already have a good enough solution for professional high-throughput compute anyway: extending their Xeon server processor lines with AVX-512 …

    • Scali says:

      I fully agree that building the hardware is not enough. I have always said that a big part of nVidia’s success is the quality of their software and drivers. Intel certainly has to match or exceed nVidia in that area as well.

      • Marv says:

        The GPU compute space is a very hostile place as it is, so never mind the thought of Intel matching, much less exceeding, Nvidia’s software stack in the short term, and possibly even in the long term. If they actually have a new compute vision in the making, they badly need to rethink how they’re going to design it in order to deliver a good enough alternative …

        Designing GPUs to be like x86 cores won’t work out this time either, now that Intel has sunsetted Xeon Phi, which was Larrabee’s only legacy. I’m also not certain that continuing to invest in their current OpenCL compute stack is the correct path going forward, when just about every important player has abandoned the API for their own proprietary solution. Apple, the original author of OpenCL, now rolls with Metal; AMD brought up HIP not too long ago; and everybody knows Nvidia pioneered GPU compute with CUDA, which is still going strong today. I’m not even sure Intel can push for OpenCL to be reconsidered among the deep learning crowd, or other segments who won’t touch OpenCL with a ten-foot pole. They could just offer technical support to make applications work only on their specific OpenCL implementation, but what would be the point of that kind of vendor lock-in when they could instead design a superior compute stack with a new API that fully serves their own interests?

        Again, if their potential GPU compute ambitions don’t pan out, they always have x86 with AVX-512 as their fallback. In fact, that alone should make up for most of the shortcomings Intel had in high-throughput compute that could otherwise have been solved with GPU compute. I believe this is their path of least resistance in that specific segment, and I genuinely think they are aiming their discrete GPU designs mainly at the area where they have underserved so far, which is higher graphics performance, since they already have a relatively decent graphics stack.
