RISC vs CISC, round #213898234

With the news of Windows 8 supporting ARM architectures, the RISC vs CISC discussions are back on the internet. Funny, as the newer generation doesn’t appear to have much of a clue about anything. As I said before, it’s all about knowing the history. It’s funny to see that x86 actually appears to have ‘fanboys’, who will try to defend ‘their’ x86 architecture against the new competition from ARM.

For example, they try to name advantages of CISC. That is pretty ironic. For starters, CISC was not a conscious design philosophy. The acronym ‘CISC’ did not even exist until ‘RISC’ was coined (it is in fact a retronym). RISC was a conscious design philosophy, which tried to reduce the complexity of the instruction set in order to make the architecture simpler and more efficient. This was a response to developments in both software (programming languages and compilers) and hardware (ever higher transistor density and propagation speeds). As a result of this new philosophy, everything that went before it was referred to as CISC from then on. The family of CISC architectures is therefore far less coherent than the family of RISC architectures. After all, RISC follows a specific design philosophy, whereas CISC does not.

What people appear to be missing is that any advantages a certain architecture may have are very much tied to the era in which that architecture was developed. The RISC philosophy responded to the demands of its era, just as the philosophies behind various CISC architectures responded to the demands of their era (and in some cases also to their intended purpose). While the x86 architecture may have had some advantages at the time, that does not mean these advantages are still valid today.

For example, many early CISC architectures, such as the x86, were designed with (semi-)variable instruction lengths. The advantage was that the most common instructions could be encoded with the shortest machine code sequences, which led to smaller code (a variation of entropy encoding/compression, if you will). This was a very valid consideration at the time, since memory was still a very limited resource. We are talking about computer systems that may have had somewhere in the range of 1KB to 256KB of memory. So every byte you could save mattered.
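
To make the entropy-coding analogy concrete, here is a minimal C sketch of a hypothetical encoding (not actual x86 machine code; all opcodes and names are made up): the common operations get one-byte opcodes, while a rarer instruction carrying a 32-bit immediate takes five bytes. A fixed 32-bit format would spend four bytes on every instruction regardless.

    /* A toy illustration of the entropy-coding idea behind variable-length
     * encodings: frequent operations get 1-byte opcodes, a rarer operation
     * with a 32-bit immediate takes 5 bytes. This is a made-up encoding,
     * not actual x86 machine code. */
    #include <stdint.h>
    #include <stdio.h>

    enum op { OP_LOAD, OP_STORE, OP_ADD, OP_MUL_IMM };

    /* Returns the number of bytes written to 'out'. */
    static size_t encode(enum op op, uint32_t imm, uint8_t *out)
    {
        switch (op) {
        case OP_LOAD:  out[0] = 0x01; return 1;
        case OP_STORE: out[0] = 0x02; return 1;
        case OP_ADD:   out[0] = 0x03; return 1;
        case OP_MUL_IMM:              /* rare: opcode byte + 4 immediate bytes */
            out[0] = 0x04;
            for (int i = 0; i < 4; i++)
                out[1 + i] = (uint8_t)(imm >> (8 * i));
            return 5;
        }
        return 0;
    }

    int main(void)
    {
        uint8_t buf[16];
        size_t total = 0;
        /* A typical mix: mostly common, cheap-to-encode operations. */
        total += encode(OP_LOAD,    0,    buf + total);
        total += encode(OP_ADD,     0,    buf + total);
        total += encode(OP_STORE,   0,    buf + total);
        total += encode(OP_MUL_IMM, 1000, buf + total);
        printf("variable-length: %zu bytes, fixed 32-bit format: %d bytes\n",
               total, 4 * 4);   /* 8 bytes vs 16 bytes */
        return 0;
    }

For the short instruction mix in main, the variable-length form takes 8 bytes where a fixed four-byte format would take 16; on a machine with a few kilobytes of memory, that kind of saving adds up quickly.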

However, fast-forward to the RISC era, and memory was not such a limitation anymore. Besides, with developments in software, such as the GUI, memory demands changed. As programs became ever more graphical, memory usage became less dependent on code and more dependent on data (bitmaps, widgets, controls, that sort of thing). The instruction set of a CPU has little or no effect on this data, so the savings offered by a denser instruction encoding mattered less and less.

At the same time, memory buses became wider. A side effect of these wider buses was that they could generally only access words at word-aligned addresses (and I mean words in the proper sense, the maximum addressable unit of the architecture… not the x86 definition of a word, which is frozen in time at 16 bits because that was the word size of early x86 processors, since superseded by architectures with 32-bit and 64-bit words). 32-bit words were common in those days… so as an example: if you have a 32-bit word in memory, it has to be aligned on a 32-bit boundary. If you want to read a 32-bit word that is not aligned, the memory controller has to read two aligned 32-bit words and re-assemble the requested word from those two. A lot less efficient, obviously. Suddenly, having variable-length instructions seemed a lot less attractive, as you would constantly run into unaligned data. So what used to be an advantage of a CISC architecture has turned into a disadvantage, because time (and technology) has caught up with the original idea behind it. RISC addressed this by forcing word-sized instructions and forcing every instruction to be aligned. This made for a much simpler and more efficient instruction fetcher and decoder.
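
Here is a minimal C sketch of that re-assembly step, assuming a little-endian machine whose bus can only return 4-byte-aligned words (the function names are mine, purely illustrative):

    /* A sketch of what a 32-bit-wide, alignment-restricted bus has to do for
     * an unaligned read: fetch the two aligned words that straddle the address
     * and stitch the result together. Little-endian byte order assumed. */
    #include <stdint.h>

    /* The bus can only return words at addresses that are a multiple of 4. */
    static uint32_t read_aligned_word(const uint8_t *mem, uint32_t addr)
    {
        addr &= ~3u;
        return  (uint32_t)mem[addr]
              | (uint32_t)mem[addr + 1] << 8
              | (uint32_t)mem[addr + 2] << 16
              | (uint32_t)mem[addr + 3] << 24;
    }

    uint32_t read_word(const uint8_t *mem, uint32_t addr)
    {
        uint32_t shift = (addr & 3) * 8;
        if (shift == 0)
            return read_aligned_word(mem, addr);        /* aligned: one access  */

        uint32_t lo = read_aligned_word(mem, addr);     /* unaligned: two       */
        uint32_t hi = read_aligned_word(mem, addr + 4); /* accesses plus fixup  */
        return (lo >> shift) | (hi << (32 - shift));
    }

The aligned case costs one bus access; the unaligned case costs two, plus the shifting and merging. Fixed-size, aligned instructions avoid this entirely on the fetch path.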

Some more irony lies in the fact that x86 processors have effectively been RISC processors as well since the Pentium Pro/AMD K6 era (and the last iteration of that other legendary CISC architecture, the 68060, also used a RISC backend). Namely, as x86 evolved generation after generation, it became too complex to implement every instruction directly in hardware. Instead, the decoder would first decode complex x86 instructions into a series of simpler instructions, and then execute those one at a time (it could also reorder the instructions for even more efficiency and instruction-level parallelism, a technique known as out-of-order execution). Intel named these internal instructions micro-ops. They were effectively an internal RISC instruction set. Since this internal instruction set was simple and efficient, it allowed the Pentium Pro architecture to reach much higher clock speeds than before, while also reaching higher instruction throughput than ever before.
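
As an illustration of the idea (the actual micro-op formats are internal to the CPU and not publicly documented, so everything below is a made-up stand-in): a read-modify-write x86 instruction such as add [mem], reg can be cracked into a load, an ALU operation and a store, each of which is simple enough to schedule independently.

    /* A made-up stand-in for micro-op cracking (the real micro-op format is
     * internal to the CPU and undocumented): a read-modify-write instruction
     * such as "add [addr], reg" becomes a load, an ALU op and a store. */
    #include <stdio.h>

    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

    typedef struct {
        uop_kind kind;
        int dst, src;            /* internal register numbers (-1 if unused) */
        unsigned long addr;      /* memory address, if any */
    } uop;

    /* Crack "add [addr], reg" into three simple micro-ops:
     *   tmp    = load [addr]
     *   tmp    = tmp + reg
     *   [addr] = tmp
     * Each one is simple enough to execute (and reorder) independently. */
    static int crack_add_mem_reg(unsigned long addr, int reg, uop *out)
    {
        const int tmp = 100;     /* a scratch/rename register */
        out[0] = (uop){ UOP_LOAD,  tmp, -1,  addr };
        out[1] = (uop){ UOP_ADD,   tmp, reg, 0    };
        out[2] = (uop){ UOP_STORE, -1,  tmp, addr };
        return 3;
    }

    int main(void)
    {
        uop uops[3];
        int n = crack_add_mem_reg(0x1000, 3, uops);
        for (int i = 0; i < n; i++)
            printf("uop %d: kind=%d dst=%d src=%d addr=%#lx\n",
                   i, (int)uops[i].kind, uops[i].dst, uops[i].src, uops[i].addr);
        return 0;
    }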

The trick was in the old 90-10 rule: 90% of the time is spent in 10% of the code. Or in other words: it’s mostly about loops in the code. While the Pentium Pro still had to fetch and decode the complex, unaligned x86 code first, and translate it, most of the time you would be running the same code. Since technology had now developed far enough to allow reasonably large caches in the CPU itself, there was some opportunity here to make the x86 decoding less of a problem. The CPU would decode the instructions in two stages:

  1. Determine the instruction boundaries (in other words, determine the start and the length of each instruction).
  2. Decode the instructions into micro-ops and store them in an internal buffer.

The instruction boundaries could easily be stored in the code cache on a per-page basis. This meant that this step would only be required the first time a memory page was fetched into the code cache. Because 90% of the time you are running a loop, you can skip this step most of the time.

Since the decoded instructions could be buffered internally, the decoder could work asynchronously. It could just keep decoding as fast as it could until the buffer was full, without having to wait for the instructions to actually be executed.
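
Putting the two stages together, here is a simplified sketch in C, with made-up names, sizes and a toy length rule standing in for the real x86 length decoder: boundaries are marked once per code page and cached, and decoded micro-ops go into a buffer that the execution units drain at their own pace.

    /* A simplified sketch of the two-stage decode scheme described above. The
     * sizes, names and the toy length rule are all illustrative; a real x86
     * length decoder and micro-op format are far more involved. */
    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096
    #define UOP_BUF   64

    typedef struct { uint8_t opcode; } micro_op;

    typedef struct {
        uint32_t page_base;
        bool     valid;
        uint8_t  length[PAGE_SIZE];  /* stage 1 result: length of the
                                        instruction starting at each offset */
    } boundary_entry;

    typedef struct {
        micro_op buf[UOP_BUF];       /* stage 2 result: micro-ops waiting to execute */
        int      count;
    } uop_buffer;

    /* Toy rule standing in for the real (much hairier) x86 length decoding. */
    static uint8_t insn_length(uint8_t first_byte)
    {
        return (first_byte & 0x80) ? 3 : 1;
    }

    /* Stage 1: mark instruction boundaries. Only runs the first time a page
     * enters the code cache; the result is kept alongside the cached code. */
    static void mark_boundaries(const uint8_t *code, uint32_t page_base,
                                boundary_entry *e)
    {
        e->page_base = page_base;
        for (uint32_t off = 0; off < PAGE_SIZE; ) {
            e->length[off] = insn_length(code[off]);
            off += e->length[off];
        }
        e->valid = true;
    }

    /* Stage 2: walk the pre-marked boundaries and fill the micro-op buffer.
     * The decoder keeps going until the buffer is full, independently of how
     * fast the execution units drain it. */
    void decode_into_buffer(const uint8_t *code, uint32_t page_base,
                            boundary_entry *e, uop_buffer *q)
    {
        if (!e->valid || e->page_base != page_base)
            mark_boundaries(code, page_base, e);   /* cold page: pay stage 1 once */

        for (uint32_t off = 0; off < PAGE_SIZE && q->count < UOP_BUF; ) {
            q->buf[q->count++].opcode = code[off]; /* "translate" into a micro-op */
            off += e->length[off];
        }
    }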

As a result, x86 had now achieved near-RISC performance, without having to sacrifice compatibility with the original instruction set. A side effect was that it became more of a RISC CPU to program for. Optimizing for x86 architectures now mostly entailed tweaking the code so that it could be decoded as quickly as possible and used as few micro-ops as possible internally. A large set of archaic complex instructions was still supported by the latest x86 processors, but came with a considerable performance penalty, because they took a long time to decode and/or generated a large sequence of micro-ops, which took a while to execute.

Which brings me to another common misconception about CISC vs RISC: people seem to think that complexity equals the number of instructions, and that CISC CPUs having more instructions makes them more flexible and more powerful to program for. The complexity is not about the number of instructions, but about how they are encoded. RISC instruction sets tend to have a single layout for all instructions, which makes them easy to decode with just a simple table lookup. RISC processors don’t necessarily need to have fewer instructions than CISC processors, and as such they are not necessarily less flexible and less powerful. A good example was the PowerPC G4 processor. It introduced a very powerful SIMD instruction set, better than Intel’s MMX/SSE and AMD’s 3DNow! attempts at SIMD. The G4 was a good example of how RISC was superior to CISC at the time. The G4 could outperform Pentium III processors that were clocked several hundred MHz higher.
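
To show what a single instruction layout buys you, here is a minimal C sketch that decodes a 32-bit instruction with fixed field positions (the layout follows the classic MIPS R-type format, purely as an example): a handful of shifts and masks, after which the opcode and function fields can index straight into a lookup table.

    /* A minimal sketch of fixed-format decoding: with a single 32-bit layout,
     * every field sits at a known bit position, so decoding is a handful of
     * shifts and masks plus a table lookup on the opcode. The field layout
     * here follows the classic MIPS R-type format, purely as an example. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        unsigned op, rs, rt, rd, shamt, funct;
    } decoded;

    static decoded decode(uint32_t insn)
    {
        return (decoded){
            .op    = (insn >> 26) & 0x3f,
            .rs    = (insn >> 21) & 0x1f,
            .rt    = (insn >> 16) & 0x1f,
            .rd    = (insn >> 11) & 0x1f,
            .shamt = (insn >>  6) & 0x1f,
            .funct =  insn        & 0x3f,
        };
    }

    int main(void)
    {
        /* MIPS "add $3, $1, $2": op=0, rs=1, rt=2, rd=3, shamt=0, funct=0x20 */
        uint32_t insn = (1u << 21) | (2u << 16) | (3u << 11) | 0x20;
        decoded d = decode(insn);
        printf("op=%u rs=%u rt=%u rd=%u funct=%#x\n",
               d.op, d.rs, d.rt, d.rd, d.funct);
        return 0;
    }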

But, as this was all still in the mid-to-late 90s, you may have already guessed that history has repeated itself since: time and technology have also caught up with the RISC philosophy to a certain extent. As a result, we see RISC going through a similar evolution as x86 (the only surviving CISC architecture). Modern RISC architectures have had more instructions and complexity added to them, and they do not necessarily decode and execute instructions directly anymore either. They may also break certain instructions up into smaller pieces and buffer (and reorder) them first, or sometimes even resort to software emulation of certain legacy portions of the instruction set. This is often referred to as post-RISC. And some more irony in my example from the previous paragraph: Apple abandoned the PowerPC architecture a few years ago in favour of Intel’s x86, since Intel by then offered more performance at lower power consumption.

So if CISC isn’t truly CISC, and if RISC isn’t truly RISC, the whole debate is rather silly anyway, isn’t it? Well, in a way it is… However, there is still some difference between them. Namely, the instruction set still dictates what a processor can do, and although the instruction set is translated to an internal RISC-like instruction set anyway, not all translation is equal. So let’s get back to the original ARM vs x86 debate that kicked this off in the first place. Namely, x86 has evolved into an architecture that is aimed at PCs, workstations and servers. ARM, on the other hand, has been adopted by the embedded market, and evolved mainly into a compact and energy-efficient architecture for mobile devices.

As a result, ARM CPUs tend to be small and low-power. Intel’s attempts at entering the mobile and embedded market with their x86-based Atom show the gap quite well. ARM processors deliver much better battery life, and can be used in small devices such as phones and mp3 players. Atom is mainly interesting for netbooks, but smaller devices are a bridge too far. Building a competitor to the iPad on an Atom basis is also going to prove difficult, as there is no way you can match the iPad’s battery life in the same form factor. An important factor here is that the decoding logic for an x86 instruction set is considerably larger than for an ARM instruction set.

On the other hand, x86 processors deliver a lot more performance than these ARM processors. So for high-end PCs, workstations and servers, we will likely continue to use x86 for a while. With larger, more powerful CPU designs, the x86 decoding logic becomes relatively less of a factor. More execution units, larger caches, more powerful memory controllers and such will also take up a lot of die space on a high-performance processor, and these components are independent of the instruction set used.

However, there may be some exceptions… Namely, the rise of two technologies in recent years may put ARM into a more competitive position in terms of performance. Firstly, we have multicore processing. Since ARM CPUs tend to be very small, you can fit more cores into the same die area than with an x86-based architecture. Only recently have we seen the first dualcore ARM processors (after all, size and power consumption are more important for the mobile devices that ARM is mainly aimed at), but nVidia has already demonstrated a quadcore ARM processor for their upcoming Tegra 3. So the performance gap is rapidly being closed here.

And speaking of nVidia, we also arrive at the second upcoming technology: GPGPU processing. The mobile-oriented ARM architectures we know today may not be all that powerful when it comes to floating-point and SIMD processing… But that is exactly the area in which GPGPU excels! So the GPU can compensate for this weakness in the CPU architecture.

And this is just short-term… Both technologies are an example of how existing ARM cores can be used in more powerful configurations. Just as x86 proves over and over again that you can make a 70s CPU instruction set evolve into pretty much anything, future ARM architectures might also be aimed more at desktop/high-performance computing, and have more powerful floating-point/SIMD processing units, just like their x86 cousins (even for x86, the FPU was an optional co-processor until Intel integrated it in the 486DX, back in 1989. And even then the x87 FPU wasn’t such a great performer compared to other architectures. SIMD was first added with the Pentium MMX in 1997, and later refined for the Pentium III with SSE, which is still being extended with every new generation). There is nothing that would prevent CPU designers from making a high-performance ARM variation with a feature set that rivals, or even exceeds, the x86.

The last bit of irony comes from the fact that Intel had an ARM division itself (bought from Digital). In the early days of pocket PCs and Windows CE (Compaq’s iPaq and such), a lot of units were powered by Intel’s StrongARM processor line, and later by its successor, the XScale. Funnily enough, Intel sold this division to Marvell in 2006, only shortly before mobile devices really started taking off. And then we find Intel trying to re-enter that market with the x86-based Atom. Intel actually still owns an ARM license too.

At any rate, it would be interesting to see ARM and x86 go head-to-head on the Windows platform. I don’t see it as a CISC vs RISC battle myself, as I said. I mostly see it as two instruction sets battling it out. Which could be interesting. A lot more interesting than AMD and Intel, both using the same x86. It may bring new insights to the table, new technologies, more pronounced strong and weak points for each competitor. And also: more choice. I find it strange that linux/opensource advocates always talk about openness and freedom, while in reality you are mostly limited to the same x86-based hardware. Personally, I find it far more interesting to be able to run the same OS on a variety of different CPUs than to run a different OS on the same hardware.


23 Responses to RISC vs CISC, round #213898234

  1. Mark Davis says:

    “And speaking of nVidia, we also arrive at the second upcoming technology: GPGPU processing. The mobile-oriented ARM architectures we know today may not be all that powerful when it comes to floating-point and SIMD processing… But that is exactly the area in which GPGPU excels! So the GPU can compensate for this weakness in the CPU architecture.”

    If you’re still talking about the GPUs that are in Tegra 2 and 3, I’ve read that they are not stream processors but pixel/vertex pipelines.
    Tegra 2: http://www.anandtech.com/show/4098/nvidias-tegra-2-take-two-more-architectural-details-and-design-wins/3

    • Scali says:

      I was talking about the upcoming quadcore chip code-named Kal-El (assumed to be Tegra 3). Rumour has it, it will have a GeForce 8/9-class GPU, with Cuda (and nVidia claims to get Core2 Duo-like performance from it, so its CPU would close the gap with low-end x86 PCs).
      Since the claims are that the GPU is 5x faster than Tegra 2, it’s probably going to be a significantly improved architecture.
      But even if it doesn’t support Cuda, you could still offload some tasks to the GPU. People have been experimenting with GPGPU since the first SM1.x programmable hardware. And then there’s things like HD video playback, which the GPU can also accelerate.
      Besides, even if Kal-El won’t give us Cuda yet, Cuda on nVidia’s ARM devices is pretty much inevitable. If it’s not Tegra 3, then it will be some future Tegra architecture. But it’s going to happen eventually.

      And if ARM + Windows 8 make it into the desktop/notebook market, then perhaps we will just see ARM CPUs bundled with discrete GPUs, like in the x86 world.

  2. Pingback: Yes, AMD fanboys *are* idiots | Scali's blog

  3. Pingback: Intel Medfield vs ARM | Scali's blog

  4. malih says:

    wow, that’s one eye opener, thanks for a good read

  5. Lionel Alva says:

    Hypothetically – as an ISA, which one do you think is better suited for HPC? ARM, or perhaps something like MIPS?

  6. Fabien Lusseau says:

    One of the last things you said was: “I find it strange that linux/opensource advocates always talk about openness and freedom, while in reality you are mostly limited to the same x86-based hardware.”

    That’s so not the case! A lot of Linux users are not even using it on a PC anymore… but on their phone (ARM) with Android!

    And I, as a Linux user, am really interested in choice and competition in the hardware department. I use Linux on a MIPS router, on an ARM-based NAS, and on my x86-based desktop. And I would be very happy if there was more competition, like a very strong ARM (ouch, a pun…) desktop machine, or the arrival of workstations based on the Loongson/Godson Chinese multicore MIPS CPU.

    • Scali says:

      You think I didn’t know that linux is used on phones, routers etc?
      I guess you don’t understand what I mean by ‘linux/opensource advocates’.

      • Marvin says:

        Ouch! Scali, I love your article though not a techy, I am doing a paper now on this CISC vs RISC ‘debate’; ugh!

  7. davramal says:

    RISC is awesome for asm programming; I loved programming for MIPS compared to x86 asm. Everything was so much easier to implement and understand, perfect for an introduction course on computer architecture at my university. Too bad MIPS CPUs couldn’t keep up with Intel and AMD in the ’90s.

    So do you really think ARM could replace x86 and become truly ubiquitous?

    • Scali says:

      So do you really think ARM could replace x86 and become truly ubiquitous?

      Yes, the architecture itself is not that important. x86 has taught us that much. It’s more about how much you can invest into development, and how good your manufacturing is.
      When I originally wrote this, I thought ARM may have had a chance to get into the notebook/desktop market via tablets and Chromebooks and such.
      However, Intel has since produced quite competitive Atom SoCs, and major manufacturers such as TSMC have had trouble scaling down their process to 20 nm and smaller, while Intel is already mass-producing on 14 nm.
      So it is equally possible that x86 will start replacing SoCs in traditional ARM-markets.

      Things could go either way.

      • davramal says:

        That’s just the idealist in you speaking. I’m sure if you had looked at it from all perspectives, you would’ve concluded just like me that ARM simply has no chance. Since I’m really pragmatic, it’s pretty obvious to see why that is.

        First, like you said yourself, ARM vendors depend on TSMC and Samsung fabs, which sadly are just far behind Intel, even though they market their process node as 16/14nm.

        From Intel’s past history, it’s pretty obvious that Intel can greatly influence OEM companies into cutting off ARM’s attempts in that market. That’s what almost 50 years of success can bring to the table.

        But those are not what makes it an impossible task, the problem of emulating x86 applications is. Sure, open-source software and non-native code will easily be ported to ARM with some effort. But closed-source applications that are no longer supported won’t manage that, so there’s no choice but to emulate them. I don’t see ARM SoCs capable of running hypervisors and other very complex applications even remotely efficient. Even most x86 CPUs have trouble doing that; that’s why cloud platforms are usually running on Xeons. How do you imagine ARM running 2 or more VMs?

        Just like Itanium, ARM stands no chance of running something else than ChromeOS systems. But since I believe in the philosophy “never say never…”, there’s still a one in a million chance of that happening. The critical point is when CMOS-based manufacturing exhausts all options and something revolutionary will have to replace it. That, or maybe if quantum computing turns out to be more than vaporware.

      • Scali says:

        That’s just the idealist in you speaking

        That would imply that I see ARM as the ‘ideal’, which I don’t.

        I’m sure if you had looked at it from all perspectives, you would’ve concluded just like me that ARM simply has no chance. Since I’m really pragmatic, it’s pretty obvious to see why that is.

        I think you miss one obvious perspective: Intel could make ARM processors (again).

        But those are not what makes it an impossible task, the problem of emulating x86 applications is.

        Not necessarily. Intel is facing the opposite problem on Android with their Atoms, and they have developed an ARM-to-x86 recompiler, which is part of the runtime.
        The difference in performance is not very large once recompiled (and compiled results are cached, so you only get the performance hit the first time you run the application). Nothing that can’t be overcome by just having a slightly faster chip than the competition.

        I don’t see ARM SoCs capable of running hypervisors and other very complex applications even remotely efficient.

        Who says ARMs have to be SoCs? That’s a rather arbitrary restriction, is it not? Especially since x86 generally are not SoCs.

        Just like Itanium, ARM stands no chance of running something else than ChromeOS systems.

        Itanium didn’t fail because of technical reasons. It failed because there were cheaper x86 CPUs available. This meant that Intel had to continue to invest most of their resources in competing x86 CPUs, rather than switching over to Itanium completely. For this reason, Itanium was always a ‘second-class citizen’, didn’t always make use of Intel’s latest process technology, and could only be aimed at high-margin markets (workstations, servers, HPC), not the consumer market.
        If it wasn’t for AMD’s x86-64, we’d probably all be running Itaniums now.

  8. davramal says:

    That would imply that I see ARM as the ‘ideal’, which I don’t.

    So you don’t think ARM is the best architecture yet? What’s your ideal computer architecture then?

    I think you miss one obvious perspective: Intel could make ARM processors (again).

    Well, they do possess an ARM arch license, so yeah, ARM+Intel fabs could result in a great combo. But I doubt Intel will choose to make an ARM CPU, since that, at least from a marketing perspective, would be a pretty bad move. It would make Intel seem like a loser and probably make people choose someone like Samsung, who are at the top of the ARM world atm.

    Who says ARMs have to be SoCs? That’s a rather arbitrary restriction, is it not? Especially since x86 generally are not SoCs.

    Jumping from SoCs to CPUs would be pretty hard, at least not viable economically due to R&D costs. Otherwise, I’d imagine, Samsung or Qualcomm would’ve jumped at the occasion.

    Itanium didn’t fail because of technical reasons. It failed because there were cheaper x86 CPUs available. This meant that Intel had to continue to invest most of their resources in competing x86 CPUs, rather than switching over to Itanium completely. For this reason, Itanium was always a ‘second-class citizen’, didn’t always make use of Intel’s latest process technology, and could only be aimed at high-margin markets (workstations, servers, HPC), not the consumer market.
    If it wasn’t for AMD’s x86-64, we’d probably all be running Itaniums now.

    Any evidence that was the reason? Afaik, Itanium failed because it was too late to the market and the architecture was not modern enough compared to what the K6 or PII had. But the killing blow was, if Wikipedia is right, the downright awful x86 emulation that was comparable to a 486. So I don’t think price mattered too much.

    • Scali says:

      So you don’t think ARM is the best architecture yet? What’s your ideal computer architecture then?

      An ‘ideal’ implies something somewhat unobtainable.

      It would make Intel seem like a loser and probably make people choose someone like Samsung, who are at the top of the ARM world atm.

      I don’t think the market works that way. They will just go for the best value for money.

      Jumping from SoCs to CPUs would be pretty hard, at least not viable economically due to R&D costs. Otherwise, I’d imagine, Samsung or Qualcomm would’ve jumped at the occasion.

      I was talking about the long-term, not an instant jump. x86 was just a low-end microprocessor when it started out as well, but over time it grew into workstation, server and HPC markets as well, and currently it’s also making inroads in the embedded world.

      Likewise, I’m not sure if you know the history of ARM, but it started out as a regular microprocessor (see Acorn Archimedes), and only moved to embedded markets and SoCs over time to find its ‘niche’.

      Any evidence that was the reason? Afaik, Itanium failed because it was too late to the market and the architecture was not modern enough compared to what the K6 or PII had. But the killing blow was, if Wikipedia is right, the downright awful x86 emulation that was comparable to a 486. So I don’t think price mattered too much.

      This just shows how little you know about Itanium. It was not meant to compete with the K6 or PII. It was a server/workstation solution, mainly offering 64-bit and far superior floating-point performance. It was meant to compete with the likes of POWER, SPARC and PA-RISC (actually, it would be the successor of PA-RISC, since PA-RISC is HP’s architecture, and Itanium was an Intel/HP joint venture). And it actually did that quite well.
      x86 emulation was not relevant in that market. Besides, the x86 emulation was fixed in Windows XP sp2 anyway. For some reason everyone seems to ignore that.
      The only problem was cost, really. Which could have been solved by larger volumes, but x86 was in the way. If it wasn’t for AMD, Intel could simply stop making x86 CPUs altogether, and move all their resources to Itanium. Very similar to how Motorola did the transition from 68k to PPC. Moving to a different CPU architecture isn’t that difficult. Apple already did it twice.

      Note also that the trend in the last 15-20 years has been to move away from native code. We have Java, .NET, JavaScript, PHP, Perl, Ruby, Lua, and tons of other languages where you don’t need to modify any code to run it on a different architecture.

  9. davramal says:

    An ‘ideal’ implies something somewhat unobtainable.

    Ok, so now that you’ve accepted the demise of the 6800, what would you say is the optimal architecture today, the one that provides the best advantages? What’s the downside of x86 other than legacy stuff?

    I don’t think the market works that way. They will just go for the best value for money.

    We both know how much public image can decide sales. The Halo Effect is huge in the IT market.

    I was talking about the long-term, not an instant jump. x86 was just a low-end microprocessor when it started out as well, but over time it grew into workstation, server and HPC markets as well, and currently it’s also making inroads in the embedded world.

    Again, I ask: if ARM has such huge potential, why have none of the ARM players at least made an attempt? After all, CPU markets offer much bigger margins than SoCs.

    Note also that the trend in the last 15-20 years has been to move away from native code. We have Java, .NET, JavaScript, PHP, Perl, Ruby, Lua, and tons of other languages where you don’t need to modify any code to run it on a different architecture.

    I’m well aware of that, since the languages I use most of the time are C# and Python.

    Regarding your blog, did you consider a better comments system with some editing options? It’s a bit cumbersome having to manually add HTML tags whenever you need to quote something. You wouldn’t even need a plugin for such basic functionality.

    Anyways, I’m enjoying your blog a lot. Most of your articles are rich with extensive knowledge. There’s a lot to digest, I’ve barely scratched the surface.

    • Scali says:

      We both know how much public image can decide sales. The Halo Effect is huge in the IT market.

      Problem is that the CPU choice is not decided by the customer, but by the OEM. Just like people don’t care and probably don’t even know who made the CPU in their phone. So no Halo Effect there, just the OEM wanting the best value for money, so they can build the most competitive products.

      Again, I ask: if ARM has such huge potential, why have none of the ARM players at least made an attempt? After all, CPU markets offer much bigger margins than SoCs.

      Because these ARM players are nowhere near as large as Intel or AMD are. They can’t just take that step. Just like you don’t see any new players coming into the GPU-market. It will have to build up gradually (as I said, ARM could build its way up from tablets/netbooks to more powerful laptops and small desktops). But you see that AMD started making ARM chips for servers now.

      • davramal says:

        Problem is that the CPU choice is not decided by the customer, but by the OEM

        What nonsense. Let’s ignore the system builders, which obviously choose their CPUs, I’m sure you’re aware of that. Even though consumers can’t choose any CPU they want in their laptops, they can still choose a laptop based on Intel’s i3, i5, i7 or AMD Carrizo CPUs. Similarly, in an ARM world you’ll probably also be able to choose between Intel, Samsung, Huawei or Apple based laptops. So the choice will always be there. Hence, the Halo Effect is still present and would affect the sales of Intel based devices if they were to give up on their trademark x86 platform.

        Because these ARM players are nowhere near as large as Intel or AMD are.

        What? Did you just ignore the existence of Samsung, Apple, Qualcomm and even Huawei? Wow.

      • Scali says:

        What nonsense. Let’s ignore the system builders, which obviously choose their CPUs, I’m sure you’re aware of that. Even though consumers can’t choose any CPU they want in their laptops, they can still choose a laptop based on Intel’s i3, i5, i7 or AMD Carrizo CPUs. Similarly, in an ARM world you’ll probably also be able to choose between Intel, Samsung, Huawei or Apple based laptops. So the choice will always be there. Hence, the Halo Effect is still present and would affect the sales of Intel based devices if they were to give up on their trademark x86 platform.

        Think about it some more: consumers can only buy what system builders build for them. Which is what AMD has been struggling with for years: It is difficult for them to find OEMs who are willing to integrate AMD solutions. A big part in that is that AMD’s solutions are not very competitive in terms of performance/watt. Battery life is a very practical consideration for most laptop buyers. And also things related to that: if laptop A and B have the same battery life, but laptop B gets that from a smaller battery, and thus is a lighter, more compact unit, most people will be more interested in laptop B than in laptop A.

        The people who actually know anything about CPU brands and types are a minority, so that is not something that would drive OEM’s choices for CPUs.

        What? Did you just ignore the existence of Samsung, Apple, Qualcomm and even Huawei? Wow.

        I didn’t ignore anything. Just look at what you just said: you mention a ton of different companies. The ARM-market is fragmented. Even worse: most of these companies work fabless, and as such depend on others to manufacture their chips.
        That is completely different from Intel, which is a single company that by itself can take on all ARM players together, and doesn’t depend on any other company for manufacturing or anything else. They don’t call Intel Chipzilla for nothing, you know.
        Not to mention that chips are Intel’s core business, while for Samsung, Apple and Huawei it is just a small part of what they do.

  10. davramal says:

    Seems your blog has some issues with posting comments. Even though it reported that my comment was submitted, it didn’t show up. To verify this, I tried re-posting the same content and it flagged it as a duplicate.

    That’s why I really dislike WordPress and PHP in general.
