Random thoughts on programming, culture and such

I recently stumbled upon the blog of Mike Taylor, The Reinvigorated Programmer. He touches on various subjects that I can identify with (such as retro-computing and sushi), some of which I have wanted to write about for a while as well, but wasn’t quite sure how to go about it.

I still don’t quite know how to make a fully coherent story out of the things I would like to talk about. Perhaps it is not even possible. So instead I’ve decided to just refer to a few of the articles I’ve been reading on Mike Taylor’s blog and give my thoughts on them, covering at least some of the things I’ve been wanting to share.

Take one

The first article I would like to cover is this one: http://reprog.wordpress.com/2010/03/03/whatever-happened-to-programming/

Now I wonder how the younger generation of programmers feels about this. It seems that Mike and I are from the same generation of programmers. Our generation started out writing things mostly from scratch. We didn’t have very advanced OSes packed with libraries that could do anything you could ever wish for. We would write our own libraries to do the things we wanted, often even interfacing with the hardware directly. Not just because we had to, but also because we WANTED to. We liked to know exactly how things worked, and would often write our own version of some function or algorithm just to see how well we would do.

Writing software today is completely different, and indeed it generally consists of little more than gluing libraries together. Which is incredibly boring for people like us, who grew up doing things ourselves. When speaking to younger programmers, I notice a distinct difference. They grew up with OSes and libraries, and often don’t even know how to do these things themselves. They are a completely different type of programmer, it seems. They are probably happy to just use libraries, because that is what they’ve always done. While for us, the whole notion of what programming is has been turned around 180 degrees.

It’s a shame that the skills and experience of people like us are not generally being used anymore. Perhaps we could create great libraries and frameworks custom-made for the job at hand, but people just expect you to use off-the-shelf solutions instead. It also means that our jobs have become incredibly boring and unsatisfying. We can no longer use our skills and our creativity. The worst part is that we have to study useless details of new libraries, frameworks and technologies all the time, which basically do exactly the same things as what we’ve already built and/or used before. I have generally had some ‘side projects’ going in my spare time, if only because they would allow me to program things that I consider to be fun, while the daily work itself was boring.

That is also more or less why I started with my retro-programming a while ago. I mean, Direct3D and OpenGL can be fun and all… but isn’t it more fun to just do everything yourself? Especially on low-end machines that don’t even have an FPU? As for what I noticed about the younger generation: I spoke to an assembly programmer who did not even know how to do fixed-point arithmetic. Can you imagine that? (I’ll include a small sketch of what I mean below.)

By the way, for people who have mainly been following this blog for the oldskool Amiga/DOS stuff: I am still working on that code on and off, and I will probably do a new article in the near future with some more updates. As a small teaser: I have since moved from rendering a cube to rendering a donut (which presents some new rendering problems with polygon sorting). I have also added a polygon clipper, and I have optimized the code to the point where no FPU is required anymore, and the performance of a donut with 128 flatshaded polys is acceptable on a fast 286:
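For anyone who never had to do it, here is a minimal sketch of 16.16 fixed-point arithmetic, the kind of thing you need on an FPU-less machine. The format and the helper names are just my illustration here, not the actual code from my renderer:

    #include <cstdint>
    #include <cstdio>

    // 16.16 fixed-point: 16 integer bits, 16 fractional bits.
    typedef int32_t fixed;
    const int FIXED_SHIFT = 16;

    inline fixed  to_fixed(double v)  { return (fixed)(v * (1 << FIXED_SHIFT)); }
    inline double to_double(fixed v)  { return v / (double)(1 << FIXED_SHIFT); }

    // Add/subtract work as-is; multiply and divide need a wider
    // intermediate value and a correcting shift.
    inline fixed fmul(fixed a, fixed b) { return (fixed)(((int64_t)a * b) >> FIXED_SHIFT); }
    inline fixed fdiv(fixed a, fixed b) { return (fixed)(((int64_t)a << FIXED_SHIFT) / b); }

    int main() {
        fixed x = to_fixed(1.5);
        fixed y = to_fixed(2.25);
        printf("%f\n", to_double(fmul(x, y))); // prints 3.375, no FPU involved
        return 0;
    }

On a real 286 you would of course do this in assembly, typically with a narrower format and the DX:AX result of MUL instead of a 64-bit type, but the principle is exactly the same.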

Another, perhaps slightly less retro thing I have been doing recently was to dust off my trusty old VideoLogic Apocalypse 3Dx card. It is a very early 3D accelerator, using a PowerVR PCX2 chip, a close relative of the chip used in the Sega Dreamcast console. Yes, that’s right: the same PowerVR that is still around today, powering iPhones, iPads and various other mobile devices everywhere. They still use their unique tile-based deferred rendering technology as well. And yes, VideoLogic is actually the same company, currently known as Imagination Technologies.

It is quite a unique card, since it uses the PCI bus to transfer the image to the 2D host card (it is purely a 3D accelerator, like the early 3dfx Voodoo cards), rather than using some kind of piggyback VGA cable. It only has drivers for Windows 9x, supports Direct3D only up to version 7, and has a very limited MiniGL driver for GLQuake-based games.

So I decided to write a Direct3D 1.0 renderer for it, and play around with it somewhat (no, John Carmack, execute buffers don’t scare me… what does scare me is releasing GLQuake when only a very expensive Rendition Verite can run it):

(Oh, by the way: back then it was actually rather special that it could run in windowed mode. Piggyback-style cards such as the 3dfx Voodoo could not do this, but the PCX2’s PCI-based approach made it easy.)

I have also been able to track down the SDK for this card. Like most early cards, it has its own proprietary 3D API; in this case, that API is PowerSGL. I plan to make a native PowerSGL version of this rotating donut, and see how much faster it is than standard Direct3D. Since this card is a TBDR (tile-based deferred renderer), it has some ‘special needs’. The Sega Dreamcast showed that the TBDR approach can work very well when the code is optimized for it. I hope that PowerSGL will provide similar results under Windows.

At some point in the near future I want to move to the C64, and see if I can make a good polygon rendering routine on it as well. I have seen a few good ones, such as the one in Desert Dream:

Take two

Right, on to the second article I would like to discuss: http://reprog.wordpress.com/2010/03/02/learning-a-language-vs-learning-a-culture/

This culture thing is very true. Not just between different languages, but also between the same language on different platforms. A C++ programmer on Windows will generally use entirely different tools than a C++ programmer on linux or FreeBSD, for example. And then there is the coding style itself (such as Hungarian notation, which is quite popular in the Windows world, but rarely seen in *nix code), and the APIs used, and all that.
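To give a quick illustration of that style difference (the names here are made up, not taken from any real codebase), here is the same trivial helper written in a Win32-flavoured Hungarian style and in a more typical *nix style:

    #include <cstdio>

    // Win32-flavoured (Systems) Hungarian: type prefixes such as dw
    // (DWORD-like) and lpsz (long pointer to zero-terminated string),
    // plus PascalCase function names.
    unsigned long CountCharacters(const char* lpszText)
    {
        unsigned long dwCount = 0;
        while (lpszText[dwCount] != '\0')
            ++dwCount;
        return dwCount;
    }

    // More typical *nix style: lowercase names with underscores, no prefixes.
    unsigned long count_characters(const char* text)
    {
        unsigned long count = 0;
        while (text[count] != '\0')
            ++count;
        return count;
    }

    int main() {
        printf("%lu %lu\n", CountCharacters("Amiga"), count_characters("Amiga")); // 5 5
        return 0;
    }

Same language, same compiler, yet code written in these two styles immediately ‘looks’ like it comes from a different world.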

Culture goes beyond just what tools you use and what style you write your code in. For example, on home computers such as the C64 it became popular to crack software and add cracktros to it, which eventually evolved into the demoscene as we know it today. The C64 and Amiga especially were hugely popular machines for intros and demos. On other machines, such as the PC, there certainly was a lot of gaming/cracking/pirating going on, much like on the C64 and Amiga, but cracktros were extremely rare, let alone demos.

Now, the cost of the PC may have been a factor in its popularity somewhat, or the fact that the PC had relatively limited hardware compared to the Amiga and even the C64… But still, I don’t think that fully explains why there were so few intros and demos for the PC in the early days. After all, there were plenty of games with reasonably impressive graphics on the PC. And the PC market was a huge market, so there should have been plenty of talented young programmers on the PC, just as on other platforms. But no, there weren’t really any demos on the PC before 1990. It just wasn’t in their culture, I suppose. In fact, even when the demoscene finally started to take off on the PC in the early 90s, it was at least partly driven by people who had used the C64 and/or Amiga earlier, and had upgraded to the PC now that the other platforms had become mostly obsolete. So to a certain extent, it was the result of people taking their culture to the PC (for example, some Triton members were previously active on the Amiga, and some Future Crew members had a C64 past).

Some early attempts at PC demos were quite horrible, and the PC scene was not really taken seriously by Amiga sceners. But the more skilled coders would quickly show that there was plenty of hardware trickery you could pull off on a PC to create impressive demos. Besides, in a way, isn’t the demoscene spirit all about overcoming the limitations of a given platform?

We see the same even today. The demoscene culture on the PC is mainly limited to Windows. Apparently it is not in the linux culture to be creative with music and graphics and to build impressive pieces of code. Only very few demos are released for linux, and generally their quality is considerably lower than that of state-of-the-art Windows productions. And they run on the exact same hardware as Windows, so that can’t be it. They also have access to state-of-the-art OpenGL libraries, so that can’t be it either.

Speaking of linux… Another part of the culture there, or perhaps I should say folklore, is to hate Microsoft, Windows, closed-source software and all that. Windows users, on the other hand, generally don’t seem to be all that bothered about alternatives. I don’t recall a whole lot of rivalry back in the C64 days either. Perhaps it was because our C64 was so popular that there was little point in advocating the platform anyway, much like Windows today. In the Amiga days, however, it was common to hate the competing Atari ST platform. Then again, perhaps that was partly because the Atari ST actually posed somewhat of a threat to the Amiga.

And then there’s the political side to that story: Jay Miner, one of the main designers of the Amiga, worked for Atari. At the time, Atari was not interested in a new 16-bit machine, so Jay Miner and some co-workers left Atari and founded the Hi-Toro company, which developed the system now known as the Amiga. Initially, Jay Miner got Atari to invest in their company, but eventually it was Commodore that paid off the debt to Atari and bought the company. At about the same time, Commodore founder Jack Tramiel left Commodore (Tramiel wanted to develop a new high-end machine, while the shareholders did not agree at the time… then Commodore bought the Amiga, which was essentially more or less what Tramiel had been looking for). Tramiel bought up part of Atari not much later, and was heavily involved in the ST project. So it’s one big entangled mess between Commodore and Atari.

In the end, the Atari ST was too little, too late: a poor surrogate for the real Amiga. Nevertheless it was still quite a decent machine in its own right, and in the hands of skilled democoders, it could get remarkably close to what an Amiga could do:


But, to get back to linux folklore… A lot of linux people seem to think that using linux gives them some special ‘elite’ status (which I have discussed earlier). They also generally quote the same books and sources, and are expected to be interested in the same things. They all seem to like recursive acronyms, for example. Or take Monty Python: it even gave us the name ‘spam’ for unwanted email, and the scripting language Python was named after them as well. Now, I am certainly a fan of Monty Python myself… but it is 2012 now. Monty Python is from the 1970s. As is UNIX, which is probably not a coincidence. But you’d expect people who enjoy Monty Python to also enjoy newer comedians such as Eddie Izzard.

Funnily enough, some of the culture appears to have been lost on the newer linux generations. Namely the culture of Free and Open Source Software. The Free Software Foundation and its GNU project were originally a response to UNIX vendors making their systems closed-source. The ideal of the GNU project was to have a completely free and open source implementation of a UNIX-compatible system.

Effectively, GNU/linux has become that system. However, the linux kernel was not developed by the FSF, and linux distributions are not maintained by the FSF either. I wonder to what extent today’s linux users and developers share, or are even aware of, the FSF and its philosophies. As I already mentioned, a lot of people’s motivation to use linux these days is simply that it’s not Microsoft Windows. The FSF was not at all concerned with Microsoft Windows: firstly, the first version of Windows had yet to be released when the GNU project was started, and secondly, Windows is not UNIX. The GNU project was aimed at providing UNIX users with a free alternative. These days most linux users seem to come from the Windows world instead (which is an entirely different culture, and linux seems to have become more of a counter-culture than a continuation of the UNIX culture it stems from).

And then there are commercial applications of linux, in for example the TiVo, or Android devices, which don’t really seem to support the freedom ideal all that well (not that this stops linux zealots from citing them as another chapter in the linux success story; even the fact that Apple’s OS X isn’t based on linux doesn’t stop them from citing it as another linux success!).

Take three

And finally, the third article that caught my eye: http://reprog.wordpress.com/2010/03/21/the-hacker-the-architect-and-the-superhero-three-completely-different-ways-to-be-an-excellent-programmer/

I think the most valuable insight here is that different people have different strengths and weaknesses, and it is good to mix and match those people in a programming team, so that one person’s strengths can compensate for another person’s weaknesses.

When I read it, I also started thinking about which of the three I would most resemble. I could recognize a bit of myself in each of them. For example, ‘the hacker’ in me came out when I patched nVidia’s Endless City, or fixed Triton’s Crystal Dream.

‘The architect’ is the role I try to fill on a daily basis, being the lead programmer. I try to keep our code stable, maintainable and extensible. The Direct3D framework that is the basis for our renderer is a project that I originally started around 2001, I believe, with Direct3D8. It has since been updated to Direct3D9, and then extended to also include Direct3D10 and 11, and more than 10 years later it is still going strong.
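I obviously can’t post the actual framework here, but the general idea behind keeping it alive across that many API versions is simple enough: hide the version-specific code behind a small common interface, and decide at startup which implementation to use. A very rough sketch of that pattern (all names invented for illustration, with the bodies reduced to comments describing what real implementations would do):

    #include <memory>
    #include <stdexcept>
    #include <string>

    // Minimal common interface; each D3D version gets its own implementation
    // that wraps the version-specific device and state handling.
    class Renderer {
    public:
        virtual ~Renderer() {}
        virtual void BeginFrame() = 0;
        virtual void DrawMesh(const std::string& meshName) = 0;
        virtual void EndFrame() = 0;
    };

    class D3D9Renderer : public Renderer {
    public:
        void BeginFrame() { /* real version: IDirect3DDevice9::BeginScene(), clear targets */ }
        void DrawMesh(const std::string&) { /* SetStreamSource + DrawIndexedPrimitive */ }
        void EndFrame() { /* EndScene() + Present() */ }
    };

    class D3D11Renderer : public Renderer {
    public:
        void BeginFrame() { /* clear render target view, bind pipeline state */ }
        void DrawMesh(const std::string&) { /* IASetVertexBuffers + DrawIndexed */ }
        void EndFrame() { /* IDXGISwapChain::Present() */ }
    };

    // The rest of the application only ever talks to Renderer; the API
    // version is a one-time decision when the renderer is created.
    std::unique_ptr<Renderer> CreateRenderer(int apiVersion) {
        switch (apiVersion) {
            case 9:  return std::unique_ptr<Renderer>(new D3D9Renderer());
            case 11: return std::unique_ptr<Renderer>(new D3D11Renderer());
            default: throw std::runtime_error("Unsupported API version");
        }
    }

    int main() {
        std::unique_ptr<Renderer> renderer = CreateRenderer(11);
        renderer->BeginFrame();
        renderer->DrawMesh("donut");
        renderer->EndFrame();
        return 0;
    }

The real thing is of course a lot larger (resource management, shaders, render states and so on), but this kind of separation is what makes it possible to add a new D3D version without touching the code that uses the renderer.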

As for the ‘superhero’ part, well, I’m not sure about that. But perhaps the Java demos I’ve done would qualify, or the video processing software we’re currently developing. For the Java demos I wrote nearly everything myself, starting with a framework just to get a virtual framebuffer working in Java, then building a 3D software renderer on top of that, then scene management, exporting tools for 3dsmax (designing a custom file format in the process), a sound player and sync code, etc.

But it’s hard for me to judge which of these I’m really good at, let alone which would be my strongest skill. I generally seem to enjoy the ‘hacking’ part most. Well, not necessarily reverse-engineering other people’s code, but rather ‘hacking’ in the proper sense: tweaking and optimizing code to make it as fast and efficient as possible. I have also met the type of programmer who sees it as a sport to memorize as much trivia as possible about programming languages and such. I am definitely not that type myself. I have a decent working knowledge of the tools I use, but I leave some of the more esoteric details on a need-to-know basis: if I ever need to know something, I will look it up. I guess I am more into the esoteric details of hardware and the like. Which makes sense, seeing as I like to tweak and optimize code.

Well, I think that’s it for today’s random thoughts… that is, until I think of other things to add.

12 Responses to Random thoughts on programming, culture and such

  1. Matt says:

    Some years ago I made the same observation — the drift of the programmer archetype — joking that someday a programmer’s primary tool will be a dialog wherein their design is expressed by selecting items from some massively distributed store of software components.

    Personally I’ve refrained from using external libraries (at one point not even the standard C library) ever since I started programming at the age of 12. Although I’ve mellowed these days, I still prefer writing everything myself, as I find nothing more exciting than having a blank workspace to express my design and implement it from scratch. A good example of a haven for others who share this preference is operating system development; something I’m currently involved in, as it’s the epitome of freedom.

    • Scali says:

      Ah yes, thinking about it… When I wanted to use GIF images in my code, I wrote a GIF decompressor. When I wanted to use JPG images, I wrote a JPG decompressor. When I wanted to play mod music, I wrote a mod player… For my Java demo I even wrote an XML parser.
      Thing is, there usually was some level of pragmatism involved… I either wrote those things because there was no easy access to a library that did what I wanted (they either weren’t available at all on my platform, or they were too large, too slow, or didn’t specifically do what I needed… something like that), or because I wanted to learn how these things worked.

      I have never been all that interested in OS development. On older platforms you generally had to do everything yourself anyway, so in a way you developed your own ‘OS’ as part of your application.
      But being pragmatic, I don’t see OS development as a goal in itself. The biggest problem with developing an OS is that nobody uses your OS, and therefore nobody will use any applications you may develop for that OS. So I just stick to developing things for popular OSes.

  2. Bosstiger says:

    Reblogged this on Gigable – Tech Blog.

  3. Pingback: More thoughts… | Scali's OpenBlog™

  4. wfw311 says:

    How did you manage to find the PowerSGL SDK? I tried to find it, but no luck so far.
    I suppose you are talking of the SDK for Windows 9x?

    • Scali says:

      I just contacted Imagination Technologies (formerly VideoLogic) on their developer support address listed on the website (devtech@imgtec.com).
      As luck would have it, one of the guys who worked on that old SDK replied, and said he still had a copy from 2000. He put it on the company’s FTP for me.
      So I guess you can just contact them and ask for a copy. I’m not sure if I’m allowed to spread my copy.

      Oh and yes, the SDK I have is for Win9x only (no DOS libraries included, and as you probably know, there were no drivers for NT-versions of Windows at all).
      I have been using it under Win98SE with Visual Studio 6.0, and I could compile all but one example (missing a header for a resource file it seems, could be related to the D3D SDK version). I have also integrated some simple PowerSGL code in my own DirectDraw framework. So it works like a charm.

    • Scali says:

      Here’s a version of my textured donut running with the PowerSGL Direct API (as opposed to the Direct3D version I posted earlier)

  5. Pingback: Just keeping it real, part 6 | Scali's OpenBlog™
