Are all linux users idiots?

You get that impression sometimes… They may just be a vocal minority, but it’s always the same clueless rhetoric… Today I read this blog:

http://www.itworld.com/security/75601/why-windows-security-awful

It just makes me feel like all these linux people just started running linux because it’s the ‘cool’ thing to do these days. They never actually used linux in the early days, let alone any of its actual unix predecessors. How else could anyone spout the kind of nonsense this guy is talking in his blog?

I’m talking mainly about this passage here:

“Again, it all comes down to all of Windows security improvements amounting to being just layer over another of security over its fatal single-user, non-networked genetics.

That’s why Linux and Mac OS X, which is based on BSD Unix at its heart, are fundamentally safer. Their design forefathers were multi-user, networked systems. From their very beginning, they were built to deal with a potentially hostile world. Windows wasn’t. It’s really that simple.”

“Fundamentally safer”? “Built to deal with a potentially hostile world”? Oh please… The unix world was just as naive about security as the Windows world, and as a result, they learnt the hard way (and I don’t even want to get into the incredible display of ignorance with his “single-user, non-networked genetics”. I suppose he just doesn’t know what Windows NT is all about). Now, this may come as a shock, linux people, but I’m actually going to post some FACTS to prove you wrong.

From http://en.wikipedia.org/wiki/Firewall:

“Firewall technology emerged in the late 1980s when the Internet was a fairly new technology in terms of its global use and connectivity.”

“The first paper published on firewall technology was in 1988, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what would become a highly evolved and technical internet security feature.”

Now, in case you didn’t quite manage to put 2 + 2 together yourself, let me point out the obvious: Unix was originally developed in 1969, with the original networking technology that eventually led to the internet developed not much later.

Okay, so it took them about 19 years to come up with the concept of a firewall/packet filter, which is now one of the most basic measures of network security… In fact, the firewall actually BREAKS many rules of the official Internet Protocol in order to improve security. In other words, the Internet Protocol has some security problems *by design*.
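
To make it concrete what such a first-generation packet filter actually does, here’s a toy sketch in C (all names and rules here are invented for illustration; this is not any real firewall’s code): stateless rules are matched against each packet’s header fields, the first match wins, and unmatched traffic is dropped, even though plain IP would happily forward it.

    /* Toy first-generation (stateless) packet filter. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    enum action { ALLOW, DENY };

    struct packet { uint32_t src_ip; uint16_t dst_port; uint8_t proto; };

    struct rule {
        uint32_t src_ip, src_mask; /* match if (pkt.src_ip & src_mask) == src_ip */
        uint16_t dst_port;         /* 0 = any port */
        uint8_t  proto;            /* 0 = any protocol */
        enum action verdict;
    };

    static enum action filter(const struct packet *p,
                              const struct rule *rules, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            const struct rule *r = &rules[i];
            if ((p->src_ip & r->src_mask) != r->src_ip) continue;
            if (r->dst_port && r->dst_port != p->dst_port) continue;
            if (r->proto && r->proto != p->proto) continue;
            return r->verdict;     /* first matching rule wins */
        }
        return DENY;               /* default-deny: plain IP would just forward it */
    }

    int main(void)
    {
        struct rule rules[] = {
            { 0, 0, 23, 6, DENY  }, /* drop all TCP (proto 6) to port 23, telnet */
            { 0, 0,  0, 0, ALLOW }, /* allow everything else */
        };
        struct packet telnet = { 0x0a000001, 23, 6 };
        printf("telnet packet: %s\n",
               filter(&telnet, rules, 2) == ALLOW ? "allowed" : "dropped");
        return 0;
    }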

That’s not all, however… When I started using unix systems in the mid-90s at university, they were pretty much still wide open. Lots of daemons running by default, exposed to the world, including nasty ones like fingerd, talkd and telnetd (not just linux, but also commercial variants like HP-UX). A standard linux distribution or FreeBSD would also install with pretty much all common daemons running by default. Wide open. Even the rpc daemon was running… all security holes waiting to happen. In fact, Windows systems were arguably more secure back in those days, because networking didn’t come standard, let alone network daemons/services.

I guess today’s linux advocates aren’t quite familiar with this history of linux, or unix in general. Doesn’t matter; their target audience likely won’t know anything about it either, and will just accept their uninformed babble. It just makes them look like idiots to people who DO know how things really evolved. It’s bad enough that they don’t have a clue about Windows, their favourite target… but when they don’t even know about linux or unix itself, it gets pathetic.

Luckily, I also spotted this article:

http://www.itworld.com/open-source/77409/why-users-dumped-your-open-source-app-proprietary-software

Now that’s refreshing… I’ve been saying it as well: bashing your competitors will only get you so far. You should also be looking at what it is that makes them successful, and try to beat them at their own game. Many open-source advocates seem to think that it’s a given that people would prefer free/open software to proprietary/commercial solutions. They think that if you use something else, you must be an idiot. But obviously that’s not the case. People are perfectly capable of making their own decisions, and price or freedom aren’t the only criteria. It’s good to see people from within the community saying it as well, because they will never accept it from ‘outsiders’.

To conclude, a nice blog of someone who also discovered the idiocy of linux users:

http://www.commandlineidiot.com/blog/2007/linux-users-are-all-crazy-fundamentalists/


33 Responses to Are all linux users idiots?

  1. Unknown says:

    First off… "Now, in case you didn’t quite manage to put 2 + 2 together yourself, let me point out the obvious: Unix was originally developed in 1969, with the original networking technology that eventually led to the internet developed not much later."

    You fail to mention that the original windows designs and thoughts were from MAC, derived from Unix, thus windows first designs were from…. What? Was that technically unix? Because unix started ALL the standard system specs. Where do you think we got delete (del) remove (rem) copy (cp) move (mv) in dos? Oh my, that’s very interesting….

    Next… (I know this is out of order, but I’m first proving that windows had roots in unix, and then going to prove why unix’s roots are still fundamentally better.)

    “Fundamentally safer”? “Built to deal with a potentially hostile world”? Oh please… The unix world was just as naive about security as the Windows world, and as a result, they learnt the hard way. Now, this may come as a shock, linux people, but I’m actually going to post some FACTS to prove you wrong. From http://en.wikipedia.org/wiki/Firewall: “Firewall technology emerged in the late 1980s when the Internet was a fairly new technology in terms of its global use and connectivity.” “The first paper published on firewall technology was in 1988, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what would become a highly evolved and technical internet security feature.”
    Really now? Really, you’re going to be that retarded? No, I guess you are. http://www.cisco.com/web/about/ac123/ac147/ac174/ac200/about_cisco_ipj_archive_article09186a00800c85ae.html Yes, that’s Cisco… What does Cisco run? No, really, what does it run? UNI UNI X UNIX! Remember that for this next quote.

    "The first network firewalls appeared in the late 1980s and were routers used to separate a network into smaller LANs"
    What does that mean? The first firewalls were in essence routers, just like the WRT54G (though older). Thus the first experience with packets was probably still under Unix, the oldest of old systems.

    "The first security firewalls were used in the early 1990s. They were IP routers with filtering rules. "

    Again, routers… if you can find a router that uses any DOS/NT-based kernel, I’d be very happy to hear from you. Now for general security. You didn’t even bother to put some of the nice quotes in this…

    "Most of these problems come down to Windows has IPCs (interprocess communications), procedures that move information from one program to another, that were never designed with security in mind. Windows and Windows applications rely on these procedures to get work done. Over the years they’ve included DLLs (dynamic link libraries), OCXs (Object Linking and Embedding (OLE) Control Extension), and ActiveX. No matter what they’re called, they do the same kind of work and they do it without any regard to security. Making matters worse is that they can be activated by user-level scripts, such as Word macros, or by programs simply viewing data, such Outlook’s view window. These IPCs can then run programs or make fundamental changes to Windows. "
    Now this is the real fun…
    "It also doesn’t help any that Microsoft’s data formats can be used to hold active programming code. Microsoft Office formats are commonly used to transmit malware. Microsoft’s latest Office 2010 tries to deal with this by blocking all but read access to documents or ‘sandboxing’ them.. Since you can’t edit a sandboxed document, I’m sure that’s going to go over really well. Of course, what will actually happen is that users won’t use the sandbox utility, and they’ll just spread malware instead. "

    You don’t even BOTHER to mention that. In fact, you only talk about the networking part of security. You don’t even cover the Admin/User flaws they have. When a user can run code at the same level as or higher than an admin, you start to have severe problems. In fact, a completely secure system does not allow any malware code to run. All I can see that the new UAC does is "ask" the user to run X code. In fact, a true system wouldn’t run the code at all. Now ironically that is a flaw in Unix/Linux… code has to be more specific and has to run much more smoothly than in windows.
    "Besides that, Windows, again harking back to its single-user, stand-alone ancestry all too often defaults to requiring the user to run as the all-powerful PC administrator. Microsoft has tried to rid Windows of this, with such attempts as UAC (user account control) in Vista. They’ve failed. Even in Windows 7, it’s still easy to bypass all of UAC’s security. Microsoft has claimed they fixed some of those bugs."
    I like how he talks about UAC; as I said earlier, you shouldn’t need a program to say "oh hey, do you want to run this?". Well yes, I want to run it, why else would I click on it? I mean, really? I’m constantly installing, uninstalling, checking out software or doing something that requires "elevated privileges", and that’s retarded. Heck, the only real security they have added recently is the new 64-bit kernels, where they finally decided that "signed" code should be the only code allowed in the kernel. Although, in all honesty, any modules (drivers) that run at kernel level should ALWAYS be run through the admin first. Unix doesn’t allow users to install modules; it HAS to be done by an admin. So I’m not even going to finish dissecting any of this; come up with a better article and I’ll read more, and maybe come up with a better response.

  2. Scali says:

    "You fail to mention that the original windows designs and thoughts were from MAC, derived from Unix, thus windows first designs were from…. What? Was that technically unix? Because unix started ALL the standard system specs. Where do you think we got delete (del) remove (rem) copy (cp) move (mv) in dos? Oh my, that’s very interesting…."

    No shit, Sherlock. To be exact, DOS was based on CP/M. For Windows NT, Microsoft hired a lot of OS veterans, mostly from DEC’s VMS division. It’s not exactly a secret that Windows, like most OSes, uses a lot of technology rooted in UNIX. That however doesn’t have much to do with what my article is about, which is to debunk the claim that UNIX is ‘fundamentally safer’ than Windows.

    "Yes, that’s cisco… What does Cisco run? No, really what does it run? UNI UNI X UNIX!"
    Again, not exactly a secret.

    "Again, routers… which if you find a router that uses any DOS/NT based kernel I’d be very happy to hear from you. "
    I fail to see the relevance. Are you claiming that an OS can only be secure if it is used by routers?

    "You don’t even BOTHER to mention that. Infact you only talk about the networking part of security. You don’t even cover the Admin/User flaws they have. When a user can run code at the same level or higher than an admin you start to have severe problems."
    That is not the topic I was discussing. But, since you seem to be one of those linux idiots, let me try to educate you in this matter… A user can run code at the same level as admin only if the system is set up this way. This goes for both Windows and most *nix variants. Let a linux user log in as root, and all code can be just as dangerous as on Windows. Windows is obviously perfectly capable of allowing only restricted user logins. In fact, where the security in *nix is generally not much more than the flags on the filesystem, Windows goes much further. In *nix, generally if something is ‘dangerous’, the library or executable will be made available only to a specific user or group. That’s all the security you have. In the past, usually this meant it was assigned to root. These days, they are a tad more careful, and things like Apache or mysql will create their own user/group, and run under that. Most of the time, they are still started from a root process, and are only set to limited user accounts later though, because of a design restriction in *nix: ports below 1024 can only be bound by root. So the socket has to be opened first, as root, and only then can the process drop down to a limited user.
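    To make that last restriction concrete, here’s a minimal sketch of the bind-then-drop dance (POSIX C, error handling trimmed; the uid/gid of 33 is just a stand-in for whatever dedicated service account you’d use):

        /* Bind a privileged port as root, then shed root before serving. */
        #include <netinet/in.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int s = socket(AF_INET, SOCK_STREAM, 0);

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_port = htons(80);               /* privileged port: needs root */
            addr.sin_addr.s_addr = htonl(INADDR_ANY);

            if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
                perror("bind");                      /* fails unless started as root */
                return EXIT_FAILURE;
            }
            listen(s, 16);

            /* The socket is open; now drop privileges. Order matters: setgid
             * before setuid, since a non-root process can no longer change
             * its group. */
            if (setgid(33) < 0 || setuid(33) < 0) {
                perror("drop privileges");
                return EXIT_FAILURE;
            }

            puts("serving on port 80 as an unprivileged user");
            return EXIT_SUCCESS;
        }
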
    Windows on the other hand has the concept of Access Control Lists. This is far more flexible than the owner/group/world access bits on *nix, obviously… You can add any user or group to an allow or deny list. What’s more, it doesn’t stop at the filesystem. All ‘objects’ in Windows (files, pipes, sockets, processes, shared memory blocks etc) support ACLs. So you can fine-tune your security measures very well.

    "All I can see that the new UAC does is "ask" the user to run X code. Infact, a true system wouldn’t run the code at all."
    Obviously UAC is not a security feature. UAC is there to allow a user to easily *override* the security features, as UAC pops up whenever an application tries to do something that, according to its current ACLs, it is not allowed to do. So it asks for elevated privileges. This is a concession that Microsoft has made in order to allow more user-friendliness. If you want it more secure, you should turn UAC off. And I mean turning off the popup (not setting yourself to elevated privileges permanently), so that it cannot elevate privileges, but instead the operation will fail. If you want, you can also have it ask for the admin password instead of just a yes/no. It’s all perfectly possible in Windows. Of course people like you just don’t get it. But that’s what makes you idiots.

  3. Unknown says:

    No, UAC really doesn’t stop malicious code. In fact, I’ll embed sub7 into something, send it to you, and then we’ll see if it still works. I’m betting it does, and I’m betting it does it well. The highest security you can have is the user/admin split. You can in fact run mysql, apache, and any other program as the current user. You can also allow that user read/write permissions to any file you want, essentially shutting everyone else out. That’s the same thing windows does. If you want to set up multiple groups for users on nix you can, but most distros don’t by default because it’s not needed. Whoever is setting up apache/sql or whatever services should know what they are doing.

    For the UAC option… Obviously I do like linux, but I run Win7 as well. I run as Admin all the time, with all the security off. I don’t run a firewall, I don’t run AV; in fact I hate all of that. You know how many problems I’ve had? None, yeah, not a single problem. I’m actually very happy with how they have been handling 64-bit code and the kernel interface, since it’s a lot harder now to exploit the 64-bit kernel they use. And the best part is all the 32-bit code is emulated. Well, I’m not sure how they did the syswow, whether it’s an API or emulation I don’t know. I only know how they do it on linux. (Oh, and I haven’t run AV or a firewall since win98, because I don’t buy into the media or hype of needing one. It’s BS, it’s BS, and what? It’s BS.)

    Oh, and the whole Cisco debate is relevant. You shouldn’t quote about firewalls and such without actually talking about the roots and how firewalls came to be. You can’t brag about windows greatness, how it’s amazing, and has learned so much more than linux/unix, without really touching on who found out these fundamental flaws. Most unix systems for networking are based widely on security and a huge user base. Though you also seem to like to deny that at the base windows seems to do the same thing. What is the System user? How are all the system services run? Oh my, is that a separate user? Not admin or user? In fact, I would think that all windows does is manage secret users and groups to keep its security together. Now where would they get that idea? And those policies you talk about? Access control lists? Just another fancy way to say group privileges. You can do everything in unix as you can do in windows, it’s just that all those layers cause things to be a little slower. And I’m pretty sure if they removed a lot of the overhead on windows it’d be a better system for it. In all honesty, if a user can’t handle safely using something (such as running certain programs, browsing, installing programs) then they shouldn’t do it, period.

    And to finally cover what you keep running around and not giving unix/linux credit for: the amount of malware you can get by browsing or running mislabeled filetypes. In fact, most of the security issues we have today are due to that. Why MS believes that letting outsiders run code on your computer is a good idea is beyond me. If I wanted something run I’d run an exe, and if I wanted to have my mouse change I’d install a theme. If I wanted any visual modifications on my desktop I’d do it myself; I don’t think it’s right or fair to let somebody else create a "theme" for my entertainment. ActiveX was a joke, and now you guys defend it by saying that MS has better security than Unix/Linux? And yes, I understand where you’re coming from, but I do a lot of fixing people’s computers. You know the biggest mistake I’ve seen? In fact it’s the most common… User error. Oh my, yes, user error. Everything from malware, viruses, spyware, improper drivers, etc… user error. So what does that tell you about most of the people who complain? Since I’m the one dealing with them, I’ll answer: they are stupid. They don’t read, they don’t want to read, they don’t even want to know the gist of what is to be read. They want it to work and they want it working now. And I re-state this: if you can’t handle doing operation X, then don’t do it, period.

  4. Unknown says:

    Oh sorry, I forgot all UAC does is tell you you need to be admin. I understand that, and don’t care. It should have been removed before Vista was ever published.

  5. Scali says:

    "Oh sorry, I forgot all UAC does is tell you you need to be admin. I understand that, and don’t care. It should have been removed before Vista was ever published."
    I don’t think you quite get what is going on here… Windows NT has had user/group and ACL security features since its first version in 1993. In many office situations these were also used, so that only the system administrator had admin access, and regular users had only limited access, and generally needed to ask the administrator to deploy software.

    The problems arose when the Windows NT branch was deployed on the consumer market as well, in the form of Windows XP. The average Windows user had absolutely no concept of security or user accounts… Windows XP just installed with an admin user by default, but end-users didn’t know that they should make their own user account, with restricted privileges. After all, they were not exactly experts in system administration. This was only exacerbated by the fact that a lot of software developed for Windows 9x was also very liberal in terms of access, writing to system directories, accessing system areas of the registry etc… so even for experienced users it was often a hassle to run under a limited account.

    That’s where UAC comes in. It makes it a lot easier for end-users to install software and do other things that require elevated privileges. Without UAC, Microsoft could never have gotten away with giving users restricted accounts by default, like they do in Vista/Windows 7, because like in Windows XP, people would just try to run software, and it wouldn’t work, for reasons they don’t understand. It’s not perfect, but it’s a lot better than everyone running as admin, and it’s a good first step in educating the userbase in terms of privileges. So the problem isn’t in Windows NT itself… the security features have always been there… they were just too difficult to use for laymen. Since *nix is generally not run by laymen, they simply don’t have that problem.

    "Oh and the whole cisco debate is relavent. You shouldn’t quote about firewalls and such without actually talking about the roots and how firewalls came to be. You can’t brag about windows greatness, how it’s amazing, and has learned so much more than linux/unix without really touching on who found out these fundamental flaws."
    I didn’t brag about Windows at all, I said nothing about its greatness. My article focuses solely on the *nix-side of the debate. I also don’t see how this is relevant. I simply pointed out that the TCP/IP standard had some security flaws which weren’t addressed until many years later, so the claims of UNIX being ‘fundamentally safer’ and ‘Built to deal with a potentially hostile world’ are pretty hard to sustain, given this evidence.

    "Access control lists? Just another fancy way to say group privlidges. You can do everything in unix as you can do in windows, it’s just that all those layers cause things to be a little slower."
    Ouch, if you don’t know the difference between ACLs and owner/group/world bits, you shouldn’t speak… They are NOT the same at all. It’s pretty obvious why the owner/group/world scheme is flawed… namely, if I want to give someone access to a certain operation, I add him to that particular group. The problem here is that he can now access EVERYTHING belonging to that group. There is no granularity.

    It is actually a common wisdom in the unix world that you should be careful about the nobody user and the nogroup group. If you try to ‘secure’ your system by making everything belong to that user/group, you are effectively giving that user/group a lot of power. So when someone then manages to get access as nobody/nogroup, he has access to a large part of the system. With ACLs you don’t actually have to add a user to a group in order to give him access to a certain directory, program or such. You can just add that user explicitly. He still doesn’t get the privileges of the group, just access to that particular object. Likewise, you can also DENY access to a certain object even if a person is member of a privileged group.
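    To illustrate how coarse the traditional model really is: the entire *nix access decision fits in a few lines. This is a simplified sketch (real kernels also check supplementary groups and root’s override), but it shows the point: exactly one of three bit triplets applies, chosen by uid/gid alone, and there is no way to name a second specific user, let alone add a deny entry.

        /* Simplified sketch of the classic *nix check for read access. */
        #include <stdbool.h>
        #include <sys/stat.h>
        #include <sys/types.h>

        static bool may_read(const struct stat *st, uid_t uid, gid_t gid)
        {
            if (uid == st->st_uid)          /* owner class takes precedence... */
                return st->st_mode & S_IRUSR;
            if (gid == st->st_gid)          /* ...then the single owning group... */
                return st->st_mode & S_IRGRP;
            return st->st_mode & S_IROTH;   /* ...then everybody else. */
        }

    An ACL, by contrast, is a per-object list of (user or group, allow or deny) entries that is walked for every access, which is exactly what makes the scenarios below possible.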

    I’ve seen situations in practice where the unix way just doesn’t cut it… E.g. people in administration had a certain subtree in the filesystem that contained information about salaries and such. This subtree should ONLY be accessible by those people, and not by other employees, not even when they technically had more rights (management), as this was sensitive information. With ACLs it was very easy to deny everyone else access; without ACLs it was impossible.

    And yes, I know there are ACL solutions available for various *nix implementations… the problem is that these don’t come standard in most linux or *BSD distributions, and as such they are not a fundamental part of the OS design like in Windows, but rather an ‘afterthought’. So if we look at ACLs, Windows is ‘fundamentally safer’ than *nix.

  6. Unknown says:

    No, ACLs aren’t just group policy. But most of what windows uses for security can be done with groups and permissions. Though the need for them seems redundant, because the whole point of the laxness of linux security is the ability to modify anything you want. If we had the same security as we do in windows, what would be the point? Might as well buy windows if I don’t want control. In all honesty, the idea of windows is creating dumber and dumber users. My case still being: if you can’t handle doing it without somebody holding your hand, then maybe it’s just not your thing. And yeah, I understand that an easier OS allows people to focus on other things, like photo editing or something of the sort.

    Though since the early 90s we’ve had distros like debian, suse, redhat, mandrake etc. etc… that have always been much more secure than windows. About the same time as windows 3.0, and well… The main problem with linux hasn’t really been security, it’s been how friendly the OS is. (Though I didn’t get heavily into nix until, I’d say, ’98, about 7 years after its release; IMO xfree86 had many issues.)

    In all honesty, if you have physical access to any computer the system can always be compromised. Other than that, I’m pretty sure that unix/linux has had net security far greater than windows. And no, I don’t care about ACLs; really, I could just set up multiple groups to interact with the system. And a user could belong to various groups, thus giving the illusion that certain objects give him certain abilities, when really it’s certain group privileges that give him control of object X. Like trusted installers… Or why not set up scripts to monitor the system?

    Essentially what linux lacks is the multitude of file permissions windows has, as files there are given various attributes that linux doesn’t allow; it’s just read/write/execute. And I don’t see why that needs to be more. Given the fact that run32 can spawn new threads or run new programs at admin level, thus giving a user admin rights, it doesn’t really seem much like a secure system. The same goes if you give a user in linux sudo control of cp or mv. There are flaws in every system, but you’re wrong to call all linux users stupid. Maybe windows users are too retarded to set up a proper secure linux system. Why can’t that be the case?

  7. Scali says:

    "No acl’s aren’t just group policy. But most of what windows uses for security can be done with groups and permissions."
    So basically you’ve now conceded that there are some things that Windows can do, that most *nix can’t… which means you agree with point of my article: *nix is not ‘fundamentally safer/more secure’ than Windows.
    I find your phrasing a tad strange though… Windows doesn’t ‘use security’… Windows *offers* security features, and it is up to third-party developers and system administrators to employ these properly. Although Vista/Windows 7 install a user with limited privileges by default, a user can still choose to run as administrator all the time, and won’t be warned by UAC anymore when potentially dangerous operations are being performed… But is that Windows’ fault? You can do the same in *nix.

    "if we had the same security as we do in windows what would be the point, might as well buy windows if I don’t want control."
    ACLs in Windows actually give you MORE control over security, as I already explained in my previous reply.

    "In all honesty the idea of windows is creating dumber and dumber users."
    The users were always ‘dumb’ to begin with. You have to realize that Windows is used by 90% of all computer users. *nix is used by about 4-5% perhaps… Now I have no doubt that among the 90% of Windows users, there are also at least 4-5% with a decent grasp of computers and security. But a large part of these people just use a computer ‘because they have to’. Even grandma and grandpa need a computer these days, to email and Skype with their grandchildren, and do their banking online etc. Clearly these people aren’t security experts… And they shouldn’t need to be.

    "Though since the early 90’s we’ve had distro’s like debian, susy, redhat, mandrake ect ect… that have always been much more secure than windows. About the same time as windows 3.0, and well… The main problem with linux hasn’t really been security it’s been about how friendly the os is."
    Don’t get me started on Windows 3.x and 9x. Obviously those were crap… but that’s ancient history now. Windows NT is about as old as linux and 386BSD, and has been the only Windows flavour on the market since 2002 (XP). However, as I said in the article, in the early days, most linux and *BSD installations would just set things wide open by default (I actually used this back in the uni days. You could use telnet, finger, talk and such on every lab PC, and get in touch with your fellow students from home). Back then it was pretty dangerous for a layman to install *nix on their PC. Then again, laymen had never heard of these OSes anyway, and if they did, they probably wouldn’t even get through the installation procedure… And since these OSes were so new, there weren’t many viruses, worms or other malware.

    These days however, *nix is under constant attack. I get brute-force dictionary attacks on my FTP/SSH/IMAP about 10 times a day, usually originating from compromised linux systems. Don’t think that a system cannot be exploited when the hacker doesn’t have root. In this case, they are just zombie bots that don’t need to do much more than scan popular services. Since generally any user is allowed to use mail, FTP and SSH, that is enough for a zombie network. I get more email from Fail2Ban than spam these days. All it takes for a successful attack is one user who has a password that happens to be in the dictionary. The irony is that Windows is not affected by these attacks, because Windows doesn’t come with SSH, FTP, email and other services by default.

    "And no, I don’t care about ACL, really I could just set up multiple groups to interact with the system. And a user could belong to various groups, thus giving the illusion that certain objects give him certain abilities when really it’s certain group privlidges give him control of object x."
    The problem is that it scales poorly. On a single system it is manageable… if you only have a handful of users, it’s not a big deal… However, *nix isn’t used in larger office networks often… but Windows is… so picture a Windows domain where you have hundreds of users on the domain, who all have access to various services such as email, network storage, printers and such. If you have to solve all combinations of privileges with groups, it becomes a management nightmare. *nix wasn’t designed for such environments, Windows NT was.

  8. Scali says:

    Now, it’s easy to call me biased, and write off whatever I say because of that… But the people at SuSE basically say the exact same thing: http://www.suse.de/~agruen/acl/linux-acls/online/ So perhaps there is some truth to what I said, and who knows, I might not even be all that biased… (which obviously I am not, because I am a FreeBSD user myself, and FreeBSD also has rather limited support for ACLs…).

  9. Facade says:

    The answer to your question: the vast majority of them are. Nearly everything they say in support of their “OS” really becomes “Open Source PWNS”… and it’s impossible to explain why something they say is impossible, because they typically don’t know how it works in the first place.


    It’s insane, there are people claiming windows starts too slowly, but admit to having a pointless virus scan on boot (this naturally freezes up the system; and no virus scanners for linux, no… linux is perfect and virus free!). I have XP booting in ~20–30 seconds, which is more than fast enough, yet they’ll claim that my “optimizing” the system is something you shouldn’t have to do and thus is discounted (these are people who are ‘smarter’ than the average computer user? Who compile their own OS?).


    I mean, even the argument that ext2/3 cannot suffer from fragmentation is so absurd that they started saying that fragmentation speeds up your system… or make loopholes saying “well, people all have 1 TB drives QED no frag!!! M$ SUCKS!!!one” They’re willing to admit that the argument they’ve made over and over is preposterous, but unwilling to concede that the argument is pointless.


    It’s absurd; linux fanatics tell lie after lie after lie, such that you wonder if anything they say is true… and then you have people hacking systems, such as the recent PS3 scandal, all in the name of “putting linux back where it belongs” (never mind that all you’ve done is open the door for mass piracy, and that you’re obviously just insulting Sony over some imaginary wrong… you released a video telling people how to put linux (pirate games) on a PS3… HOORAA).


    Are all linux users idiots? When you decide to ignore all their arguments of superiority all you find are people who HATE people who SELL software and are utterly puerile in their actions (Rather like anarchists, I might say).


    There are “intelligent” linux users… but those people are very few and very far apart.

    • Scali says:

      Ah yes, good of you to bring up the fragmentation issue. That one has bothered me for a long time as well. It’s the same broken logic as with malware:
      – Linux doesn’t have anti-malware software, ergo there isn’t any malware for linux.
      – Linux doesn’t have defragmentation software, ergo there is no fragmentation.
      Obviously neither are true. I can easily prove the former, since I recently intercepted an attack on my FreeBSD system. A worm was attempting to connect to FTP and SSH and infect the system by uploading itself, and starting it. I have captured and saved the file that it uploaded, if anyone wants to analyze it.
      The latter, does it even require proof? If we just look at the cause of fragmentation, it should be obvious that no system can ever be free of fragmentation. Deleting files leaves gaps, which will eventually lead to files being fragmented over multiple gaps (or pre-emptively ‘defragmenting’ by moving files around to create a gap large enough for a new file). Appending to an existing file, when newer files have already been placed directly behind it, will lead to the file being fragmented (or again pre-emptively ‘defragmenting’ by moving these files around to create a gap for the existing file to be appended to).
      It’s a classic ordering problem. No matter what filesystem you use, no matter what allocation scheme you think of, at some point fragmentation will occur, either in the files themselves, or in the free space. It is just a fact of life.
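      If anyone wants to see it happen, here’s a toy simulation (C; a 14-block ‘disk’ with first-fit allocation, obviously a huge simplification, but the ordering problem is the same on any real filesystem): write three files, delete the middle one, and the next large file necessarily ends up in pieces, as does the file you append to afterwards.

          /* Toy demonstration of fragmentation: first-fit allocation on a
           * tiny 'disk'. Each block holds the name of its file, '.' = free. */
          #include <stdio.h>

          #define BLOCKS 14

          static char disk[BLOCKS];

          static void show(const char *msg)
          {
              printf("%-30s |", msg);
              for (int i = 0; i < BLOCKS; i++) putchar(disk[i] ? disk[i] : '.');
              puts("|");
          }

          /* first-fit: take the first n free blocks, contiguous or not */
          static void alloc_file(char name, int n)
          {
              for (int i = 0; i < BLOCKS && n > 0; i++)
                  if (!disk[i]) { disk[i] = name; n--; }
          }

          static void delete_file(char name)
          {
              for (int i = 0; i < BLOCKS; i++)
                  if (disk[i] == name) disk[i] = 0;
          }

          int main(void)
          {
              alloc_file('A', 4); alloc_file('B', 4); alloc_file('C', 3);
              show("three files written");
              delete_file('B');
              show("B deleted: gap in the middle");
              alloc_file('D', 6);          /* larger than any single gap */
              show("D written: now fragmented");
              alloc_file('A', 1);          /* 'append' one block to A */
              show("A appended: also fragmented");
              return 0;
          }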

  10. Mr. McAwesome says:

    Seriously, who cares?

    I’ve been on windows and on Linux; they both have their problems. So far, no one gives a rat’s ass, as long as they work.

    Get a life.

    • Scali says:

      I certainly don’t claim that other OSes are better than linux, let alone that they don’t have their own share of problems. I am merely addressing this misplaced feeling of superiority that a lot of linux users appear to have.

      Also, pretty funny the way you talk about Windows and linux as if they are the only two OSes in the world (although the original article does some direct comparisons, I don’t. I merely address the incorrect statements regarding linux and Unix in general. I don’t draw any new comparisons with Windows, I merely comment on the ones the original article makes). Heck, when I started with computers, neither OS even existed. I’ve used tons of different systems and OSes before x86-compatibles became the standard platform, and Windows became the standard OS of choice, and linux the standard OS of choice for Microsoft-haters. There’s a lot more out there than just Windows and linux.

  11. Matt says:

    It always amuses me; people who attempt to argue relying on second-hand information and assumed truths, merely because they have no understanding of the subject themselves. See: “Unknown” user.

    It’s especially amusing when you can just see the inferences and laboured conjectures that are almost certainly derived from Wikipedia.

    • Scali says:

      Well, that’s the fun part, isn’t it?
      People who have an understanding of the subject, will likely not bother to argue, since they will agree with what I said.

      So all you get is people arguing from an emotional standpoint: they just ‘feel’ that *nix is the best option. Then they try to find arguments to support that view. They’ve already lost the debate before they began. Just like Jed Smith.

  12. Penguin Eater says:

    I really do agree with you that Linux developers are smug assholes, and the religious zealotry of their fanboys gets grating in short order. The devs are mostly rich kids living on trust funds, if not on their parents’ estates (who else would have the time for so much unpaid work), and therefore they are driven by very fragile egos. They are quicker to berate you for not reading through reams of what doesn’t help than they are to help you when they can, and when they do the latter they are often wrong. They can be extremely arrogant when criticized on “innovations” to the basic system interface that nobody likes but them and their few friends (who dominate their Linux distro forum and gang up on anybody who says anything inconvenient). Their snooty taste in GUI graphics and colors does much more to confirm my theory on their background than to make the system usable (their spin on fashion almost always comes before the need to make highlighted text readable with high-contrast colors).

    All of the above notwithstanding, it’s a free system, and ugliness aside, it works. Not perfectly, but then neither does Windows. Windows can only read its own NTFS disk format, while Linux reads them all, making your Windows partition files accessible from Linux. Windows suffers horribly for its LAN stack flaws, but Linux never fails here. I only regurgitate what I’ve heard from independent experts, but I have confirmed it all through experience.

    On security, this may really come as a shock, but the Linux people don’t seem to believe much in firewalls.
    There is one available, but it seems to be a neglected afterthought which exists chiefly because new users complained. Their anti-malware projects are almost equally half-assed. How can they be so arrogant? There are a number of causes which they cite, which make more sense the more you look at them.

    Starting with the access protocol: nobody gets to log into the root of a Linux system, which is by mandate password-protected. No operations can be performed on your system or your protected files by anyone who does not know your username and password either.

    The windows systems all have a very centralized file system known as the registry, which had to be developed beginning with Windows 95 just to make that system function at tolerable speed, but somehow or other this never became an issue with Linux. The security benefits abound from this, among other factors. Installation of software has always been a bit more complicated in Linux, as anyone who installed programs on this system in the nineties or earlier would attest, and this doesn’t harm the security at all.

    Finally there is the diversity factor in Linux software development. MS has only one system to sell you, but nobody’s controlling the development of Linux, and this has allowed dozens of new Linux variations to hatch and compete in the last 20 years. In fact, there is variation on two basic levels – there is the kernel, and there are at least six of these (Debian and Slackware, to name a couple). Many of the hundred or so distros which make use of these popular kernels (some just write their own) add their own tweaks. Furthermore, one single distro may offer as many as three or more user interface systems, each designed for the interests of different user groups. So, if you write malware, it would be tough trying to get it past a system which acts much like a firewall without adding any software for that purpose. You could never develop anything which would be malware on all of the Linux systems, and then even if Ubuntu, with GNOME…

    • Scali says:

      Not sure if I agree with everything there.
      For starters, you say “Windows can only read its own NTFS format”. Well, no. That’s the problem with open source. Everything is free, so everyone can bundle anything they want. So for some reason, people are now delusional to the point where they think anything that isn’t bundled with a product can’t be done with that product.
      Microsoft sells a commercial product, so they can only bundle code that they have the copyright to (or which is free of copyright). If you bother to dig a little deeper however, you will find that Windows has a very modular design (far more modular than linux), and it is quite easy for third parties to develop a wide variety of system drivers, including file system drivers. Here is a Windows driver for ext2 support, for example: http://www.fs-driver.org/
      No, it may not be shipped with Windows out-of-the-box, but it does exist (and note how it integrates perfectly with the entire system, Explorer treating it just like the other filesystems that Windows supports out-of-the-box. Which is more than just NTFS, by the way. There’s also FAT and CDFS for example, so even out-of-the-box Windows already demonstrates how it supports multiple filesystems seamlessly). If the Windows community were anything like the linux community, people might have bundled such freeware tools in some kind of distribution and made them easily available online or through free CDs. Because there are lots of (free) utilities and add-ons for Windows, allowing you to do all sorts of things. But for some reason, this never really happened (instead, we got sites like TUCOWS). But saying that Windows only supports NTFS is clearly wrong.

      I don’t quite understand your complaints about the registry either. Windows doesn’t *need* it, but they added it because a centralized database gives fast access to your data. On linux systems you have a lot of common files as well, which actually ARE security hazards (which is why the passwd file has been superseded by a shadow system… which is still vulnerable). And many linux applications also install their own centralized database system for quick access (because yes, it WAS an issue). The difference is that Windows already comes with a standard database: the registry. With linux, you get a collection of many kinds of databases. I’d be more worried about all the security features that linux lacks.

      Also, I don’t quite see why competing linux distributions are a good thing. They all use the same kernel, where people mainly disagree on what kind of software to bundle (KDE vs Gnome, exclusively GPL-licensed software, that sort of thing). I wouldn’t even go as far as saying they are different OSes. For the advanced user, none of this matters. They can apply the small tweaks themselves, and install missing software, or install software under a license that the distro-maker does not support, etc. It’s one system, from a technical point of view. I think Ubuntu is a fine example that people don’t really WANT all this diversity and tweaking. Ubuntu quickly became the most popular linux distro, mainly because it is easy to use, and doesn’t have too many strange fundamental restrictions (some distros like to prevent you from even playing an MP3 because of licensing issues. People just want to play their music, they don’t care about that sort of nonsense). The trend is clear: Ubuntu’s popularity keeps rising, at the cost of all those other distributions. Ubuntu is fast becoming the Microsoft of the linux world.

      But anyway, there’s my point again: you don’t really know what you’re talking about. Trying to argue Windows vs linux without a basic understanding of the cultural or technical differences. Especially the filesystem thing should have been obvious.

  13. Pingback: OS X–Safer by design? | Scali's blog

  14. Xexy says:

    Hi,
    I love OS. That’s it.

    Bye.

  15. Pingback: The kernel.org hack | Scali's blog

  16. Nigel says:

    Linux is computing for dummies

    • Tim says:

      How so? I mainly like it because I can get things done quickly with the console, and can boot into a text mode. It’s also nice not getting viruses on my computer after somebody uses it to browse the web. However, most linux users are indeed immature freetards, and it has been off-putting recently.

  17. Qwertyuiop924 says:

    Windows, even NT, was fundamentally designed to be single user. That leads to security problems. They can try and fix them, but in many cases, they go far too deep to do so and still maintain backwards compat. This is what I know; I will not try and convince you. You will never listen.

    • Scali says:

      Windows, even NT, was fundamentally designed to be single user

      Windows NT was not designed to be single-user. Get a clue! NT was designed as a complete office solution consisting of a server connected to a number of workstations (hence the NT server and NT workstation products).
      As such it has always been multi-user, since the file-sharing was done on a per-user basis. Servers would also be able to handle network authentication and store user profiles (domain controllers).

      This should be common knowledge, but well, linux users are idiots. They don’t even bother to read up on basic Windows technology, even though all the information is widely available.

      This is what I know

      Apparently you don’t know anything.

      Windows NT also solved backward compatibility in a very nice way. Windows NT got an entirely new Win32 API, with security attributes built in for all relevant objects (something that most *NIX derivatives still don’t have even today… you create an object first, then you can change the security attributes later with some bolt-on API… but the core APIs are still oblivious to this rights management, and create objects with default rights).
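      A quick sketch of that difference (the exact Win32 flags here are just illustrative, and the POSIX ACL call named in the comment is only one example of such a bolt-on API): in Win32 the security descriptor is a parameter of the creation call itself, whereas the classic POSIX call only takes the coarse mode bits at creation, and anything finer has to be added after the object already exists.

          /* Win32: the security descriptor (carrying the ACL) is passed to
           * the creation call itself -- the object never exists without its
           * access rights. */
          #ifdef _WIN32
          #include <windows.h>

          HANDLE create_with_acl(SECURITY_ATTRIBUTES *sa)
          {
              return CreateFileA("C:\\temp\\report.txt", GENERIC_WRITE, 0,
                                 sa,              /* ACL applied at birth */
                                 CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);
          }
          #else
          /* Classic POSIX: creation only takes the owner/group/world mode
           * bits. Anything finer (a per-user ACL entry) has to be bolted on
           * afterwards, e.g. with acl_set_fd() from the POSIX-draft ACL
           * extension, once the file already exists. */
          #include <fcntl.h>

          int create_then_restrict(void)
          {
              return open("/tmp/report.txt", O_WRONLY | O_CREAT | O_EXCL, 0600);
          }
          #endif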

      The backward compatibility was handled by a separate 16-bit subsystem, essentially a virtual machine, known as the NTVDM. This subsystem would run DOS and Win16 tasks, safely isolated from the rest of the system.

  18. Craig says:

    Childish post complaining about another childish post. None of you cretins arguing have likely ever contributed a single idea or line of code to either codebase, so why take so much personal pride in other people’s work?

    Geek battles of pride are already quite cringeworthy. Doing it in the abstract over something the real authors would never let you touch is worse still.

  19. Jeff says:

    Yeah, I’ve run into this many times discussing Linux with the so-called “advocates” in IRC and such – one of the main reasons I no longer go to any Unix/Linux IRC channels. Seems like a lot of the script kiddies of the early 2000s grew up and never changed. Many don’t even work in the IT world but still claim to be some sort of whizzes.

    Like you mention, none of them seem to be aware of Unix’s history of swiss-cheese security, have never had to deal with horrendous legacy Unix systems and their never-ending security patch rollups, or seem to be aware that some of the first massively infectious internet worms targeted Unix systems.

    I remember how much time was wasted patching up a new Solaris deployment before a host could be put into service without worrying about being rooted.

    Yeah, Unix security has advanced by leaps and bounds in the last few decades. So has Windows, but don’t ask these guys. As you mentioned, most don’t know jack about Windows security or architecture.

    Windows NT, being a from-scratch architecture built on concepts brought in by VMS designer Dave Cutler (the Linux user will say “WTF is VMS?”), was without a doubt designed as a multi-user networked OS with excellent fine-grained security. It had all of those features out of the box from the very first version, NT 3.1.

    It just so happens that NT was ABI/API compatible and shared a common GUI with Windows 3.1/9x, but that’s about as far as the similarities between 3.1/9x and any NT-based OS go. Oh, but don’t bother asking Linux jerks about it. Many still think Windows 8 is based on DOS.

    I am not a Windows fanboy… I’ve been using Unix and NT since the mid 90s, and see the good and bad in both.

  20. Annoyed Person says:

    Yeah, I get the feeling that they are.

    The “biggest” feature of ext3 that they claim removes the need to defragment the drive is placing files randomly as far apart as possible. That’s it; get a huge drive and you won’t face fragmentation -__-;

    Just to put 2 + 2 together, the argument is “if you waste half of your drive on ‘buffer space’, ext* is a superior file system.” But what remains the most idiotic part of the argument is that by creating many small files you reduce the overall space for large files; drives will fragment over time, and the initial guesses by the file system can always be improved upon. (The reluctance to write low-level tools for this shows how little they actually understand their file system: compacting small, unchanging files to make more contiguous space for large files is “smart design”, yet the FS has no idea until after the files are written that they’re going to be unchanging.)

    But, perhaps, the worst of it is that I feel that I could write a virus EXTREMELY easily for *nix.

    Let’s face facts, UAC IS sudo; it may be weaker and have bugs, but that is all it really is… and doing sudo rm -rf / --no-preserve-root WILL delete your OS. So why not add that line in one of the make files on user repositories that people just let run while they build applications? Oops. You can talk about trust, speed of removal, blah blah, but people will be all too willing to TRUST the file and give it sudo privileges, JUST like in windows.

    Hell, the insecurity of Windows XP is that Microsoft trusted the end user to use a limited account and to “not download viruses”. I don’t care what an idiot says, you do not need a virus scanner if you are careful, and really the only time XP becomes a security risk is when you’re specifically targeted (i.e. corporation, university) [or use the default firewall, good firewall protection > antivirus].

    But yeah… they’re like idiots. Reading arguments supporting using GB over GiB (as in 1000 over 1024, I don’t care about the ‘i’ too much) only makes me yell, “why not just turn 1 byte into 10 bits! You don’t have 8 fingers, you have 10! IGNORE everything about computers why don’t you”. And only an idiot would think Apple switched because of “standards”. Apple produces everything in-house; they wanted to join in on the false advertisement and have an OS that lies to the typical person until they see what is really going on.

    (Never mind that SSDs (Non-volatile RAM) are similar to RAM and that by making them in units of 10 you are intentionally butchering the interface to save money)

    (And FYI, while the seek time isn’t nearly as bad as a HDD’s, it is not 0. Reading sequentially IS much faster than reading randomly; this is true for your on-board RAM as well. You can benefit from defragging an SSD, it’s just that you need to defrag smarter than you would a HDD.)

    Really, what it comes down to is a bunch of people who spend so much time spitting on commercialism that it gets really annoying.

    • Mário says:

      > and doing sudo rm -rf / –no-preserve-root WILL delete your OS. So why not add that line in one of the make files on user repositories that people just let run while they build applications?

      And it’s as simple to catch that as it is to plant it. What about a virus made by some Russian, that does really magical stuff, and nobody knows how to get rid of it, because you don’t know how your operating system works under the hood? A huge amount of money is spent on anti-virus software, and still they can’t catch all of those nasty viruses.

      If you make a virus for some linux distro, it will be much easier to prevent it from running than it would be on Windows.

      • Scali says:

        If you make a virus for some linux distro, it will be much easier to prevent it from running than it would be on Windows.

        Pray tell, why would it be much easier to prevent it from running on linux?

  21. MacOS9 says:

    Hi Scali, I had a few quick questions regarding PPAs, packages, the whole confusing topic of software upgrades in Linux, etc., so I decided to post them here in the Linux/Mac debates section.

    Thinking about Linux Mint a few days ago, I was struck by an idea that didn’t come to mind in my few years of tinkering with it in VirtualBox.

    Is it the case that there is no such thing as “program updates” in linux distros that can be installed separately from the OS…? What I mean is that on Windows or OS X, you can choose not to install a new operating system, but for many years (usually up to 5 years, even 10 years on Windows) you can choose to install newer versions of programs (Firefox, email readers, office suite programs, anything really). In Linux I realized that while there are many thousands of “packages” to install in the Synaptic Package Manager (on Ubuntu- and Debian-based distros) – those “packages” are largely locked to the version of the distro you are running. (Yes, I realize that you can go on the internet and find binary blobs that are not available in Synaptic – but this is not a good way to install things in Linux.)

    Does this mean that, without enabling “backports,” one has no (safe) choice in the Linux world but to roll the whole OS forward to a newer version – in order to have access to newer versions of programs?

    What about backports? Do they solve the problem, or are backports a kind of hack that doesn’t work the same way as self-contained program installers on Windows and Mac (exe files, etc.)?

    Thanks in advance for any info on this – I find the topic somewhat confusing, especially since something like Firefox seems to get regular updates even via the “packages” in Synaptic, while other things only get obscure updates to dependencies – obscure files with strange names – without the version of the program (or app) moving forward… and then there’s the whole backports thing. I’ve been doing some Google searches for a nice, categorical interpretation of all of this – but it’s hard to track down a clear, categorical explanation of these things.

    • k1net1cs says:

      Not really an expert on linux, but I’ll try to help explain.

      You can install and run any app just fine on any distro if you have the required dependencies installed.
      The ‘only’ problem is that each distro has different default components installed, and even within the same distro, differing versions may have different defaults installed.
      But these are usually quickly remedied by using the common repository for said app… or at least its dependencies.

      If Windows has an equivalent problem, it’d be the VC++ Runtime and/or DirectX Runtime, although these days it’s a thing of the past.
      There will be cases where a specific version of the VC++ Runtime needs to be installed, but they’re not that common anymore.

      The common way to update software in linux is through a repository system, using the distro’s package manager, which is basically a GUI for executing commands so you don’t have to manually type and run apt-get in a terminal window.
      There are also cases where you need to download the installer or update dependencies manually, but for the most part using the package manager is enough.

      But yeah… the problem with so many flavors of linux distro means that software projects on linux either:
      a. use dependencies that are common across (major) distros, or
      b. support only a limited number of distros.

      Not to mention the differing installer package formats between major distros… like .rpm, .deb, and .pkg.
      You can still count them on one hand, but if you decide to make 32- and 64-bit versions of your app then the number of versions you need to maintain doubles… and that still doesn’t count the backports.

      This is why Valve decided to roll their own linux distro, SteamOS.
      It’s easier than having to maintain Steam clients’ compatibility across different linux distros, all the while requiring game devs to also maintain their games’ compatibility.
      With SteamOS, game devs can just aim for compatibility with a single distro, while putting compatibility with the other distros on the back burner… at least until SteamOS matures enough.

      Anyway, a ‘backport’ in general means porting (new) features and/or security fixes from the newest version to a previous (major) version… usually it’s more about security fixes, though.
      Backporting is not really exclusive to linux, since Adobe also does it with their Windows Flash players… e.g. any security fixes applied to their newest 17.x version are still backported to the 13.x version, because 13.x is considered the stable one.

      Also, backporting is more about the source code rather than the whole installer package, which is a separate issue (like above, with .rpm, .deb, and whatnot).
      A backport still comes as the same kind of installer package as the newest version, or at least not in the form of separate files you need to copy manually… it’s just that it’s compiled from a different source code.
      It is a ‘hack’ per se, but it’s mainly there to keep older versions new.
      And no, I don’t have a better way to say that. =b

      Like the example I gave before: Flash player 13.x is the older version feature-wise compared to 17.x, but in terms of security updates it’s about as new as 17.x, unless there’s a specific condition, like when a vulnerability exists due to a new feature in 17.x.

      HTH, and CMIIW.

      • MacOS9 says:

        Thank you for the thorough reply. I will read it a few times to digest all of the info – but it clarifies many of the questions I had.

  22. Bo says:

    I am a Linux user. Most distros are too big for me; I prefer under 700 MB. Then I can add whatever I like. I have avoided the prompt because I am not interested. Is it faster? So what? I am a DE fucking Linux user that wants programs to work for me. I am an outdoor person, so I want 4G and a lightweight laptop with a light OS, and that works for me. Solaris 10 was all right to me. Windows is too much work all the time, and I find it too slow and resource-hungry for a small laptop.
