I just spotted a number of hits from Ars Technica to my blog. It is a regular occurrence that one of my blog posts gets linked in some online discussion, causing a noticeable spike in my statistics. When it does, I usually check out that discussion. This was a rare occasion where I actually enjoyed the discussion. It also directly reminds me of a post I made only a few weeks ago: The Pessimist Pamphlet.
You can find this particular discussion here on Ars Technica. In short, it is about a news item on one of Microsoft’s recent patches, namely to the Equation Editor. The remarkable thing here is that they did a direct binary patch, rather than patching the source code and rebuilding the application.
The discussion that ensued seemed to split the crowd into two camps: one camp that was blown away by the fact that you can actually do that, and another camp that had done the same thing on a regular basis. My blog was linked because I have discussed patching binaries on various occasions as well. In this particular case, the Commander Keen 4 patch was brought up (which was done by VileR, not myself).
Anyway, the latter camp seemed to be the ‘old warrior’/oldskool type of software developer, which I could identify with. As such, I could also identify with various statements made in the thread. Some of them are closely related to what I said in the aforementioned Pessimist Pamphlet. I will pick out a few relevant quotes:
(In response to someone mentioning various currently popular processes/’best practices’ such as unit tests, removing any compiler warnings etc):
I know people who do all this and still produce shitty code, as in it doesn’t do what it’s supposed to do or there are some holes that users can exploit, etc. There’s no easy answer to it as long as it’s a human that is producing the code.
I have said virtually the same thing in another discussion the other day:
That has always been my objection against “unit-test everything”.
If you ask me, that idea is mainly propagated by people who aren’t mathematically inclined, so to say.
For very simple stuff, a unit-test may work. For complicated calculations, algorithms etc, the difficulty is in finding every single corner-case and making tests for those. Sometimes there are too many corner-cases for this to be a realistic option to begin with. So you may have written a few unit-tests, but how much of the problem do they really cover? And does it even cover relevant areas in the first place?

I think in practice unit-tests give you a false sense of security: the unit-tests that people write are generally the trivial ones that test things that people understand anyway, and will not generally go wrong (or are trivial to debug when they do). It’s often the unit-tests that people don’t write, where the real problems are.
(People who actually had an academic education in computer science should be familiar both with mathematics and with the research into formally proving the correctness of software. And it indeed is a science.)
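To make that concrete, here is a small, hypothetical C example (mine, not from the discussion): a naive leap-year check together with the kind of trivial unit-tests people tend to write. The tests all pass, yet the real corner cases (the century years) are exactly the ones nobody wrote a test for.

```c
#include <assert.h>
#include <stdio.h>

/* Naive implementation: looks reasonable and passes the obvious tests. */
static int is_leap_year(int year)
{
    return year % 4 == 0;   /* bug: ignores the century rules */
}

int main(void)
{
    /* The trivial unit-tests people tend to write: all green. */
    assert(is_leap_year(2004) == 1);
    assert(is_leap_year(2001) == 0);

    /* The corner cases nobody tested: 1900 is NOT a leap year, 2000 is.
       The naive implementation gets 1900 wrong. */
    printf("1900: %d (should be 0)\n", is_leap_year(1900));
    printf("2000: %d (should be 1)\n", is_leap_year(2000));
    return 0;
}
```

The test suite is green, but the cases that actually matter were never in it.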
On to the next:
What you consider “duh” practices are learned. Learned through the trials and efforts of our elders. 20 years from now, a whole generation of developers will wonder why we didn’t do baby-simple stuff like pointing hostile AIs at all our code for vulnerability testing. You know, a thing that doesn’t exist yet.
This touches on my Pessimist Pamphlet, and why something like Agile development came into existence in the first place. Knowing where something came from and why is very important.
The one process that I routinely use is coding standards. Yes, including testing for length required before allocating the memory and for verifying that the allocation worked.
The huge process heavy solutions suck. They block innovation, slow development and still provide plenty of solutions for the untrained to believe their work is perfect – because The Holiest of Processes proclaims it to be so.
Try getting somewhat solid requirements first. That and a coding standard solves nearly every bug I’ve ever seen. The others, quite honestly, were compiler issues or bad documentation.
Another very important point: ‘best practices’ often don’t really work out in reality, because they tend to be very resource-heavy, and the bean counters want you to cut corners. The only thing that REALLY gives you better code quality is having humans write better code. Which is not done with silly rules like ‘unit tests’ or ‘don’t allow compiler warnings’, but by having a proper understanding of what your code is supposed to do, and how you can achieve it. Again, as the Pessimist Pamphlet says: make sure that you know what you’re doing. Ask experienced people for their input and guidance, get trained.
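To illustrate the kind of coding standard the commenter describes, here is a minimal, hypothetical C sketch (my example, not his code): check the input, compute the required length, allocate, and verify that the allocation actually worked before using the buffer.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper along the lines of the coding standard quoted above:
   compute the required length first, then allocate, then verify. */
static char *greeting(const char *name)
{
    if (name == NULL)                                /* validate the input */
        return NULL;

    size_t len = strlen(name) + sizeof("Hello, !");  /* includes the '\0' */

    char *buf = malloc(len);                         /* allocate exactly what is needed */
    if (buf == NULL)                                 /* verify the allocation worked */
        return NULL;

    snprintf(buf, len, "Hello, %s!", name);
    return buf;
}

int main(void)
{
    char *msg = greeting("world");
    if (msg != NULL) {
        puts(msg);
        free(msg);
    }
    return 0;
}
```

Nothing fancy, but consistently doing this everywhere is exactly what a coding standard buys you.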
Another one that may be overlooked often:
There’s also the problem that dodgy hacks today are generally responses to the sins of the past.
“Be meticulous and do it right” isn’t fun advice; but it’s advice you can heed; and probably should.
“Make whoever was on the project five years ago be meticulous and do it right” is advice that people would generally desperately like to heed; but the flow of time simply doesn’t work that way; and unless you can afford to just burn down everything and rewrite, meticulous good practice takes years to either gradually refactor or simply age out the various sins of the past.
Even if you have implemented all sorts of modern processes today, you will inevitably run into older/legacy code, which wasn’t quite up to today’s standards, but which your system still relies on.
And this one:
You can write shit in any language, using any process.
Pair programming DOES tend to pull the weaker programmer up, at least at first, but a weird dynamic in a pair can trigger insane shit-fails (and associated management headaches).
There’s no silver bullet.
Exactly: no silver bullet.
The next one is something that I have also run into various times, sadly… poor management of the development process:
Unfortunately in the real world, project due dates are the first thing set, then the solution and design are hammered out.
I’m working on coding a new project that we kicked off this week that is already “red” because the requirements were two months behind schedule, but the due date hasn’t moved.
And the reply to that:
It’s sadly commonplace for a software project to allot zero time for actual code implementation. It’s at the bottom of the development stack, and every level above it looks at the predetermined deadline and assumes, “well, that’s how long I’VE got to get MY work done.” It’s not unusual for implementation to get the green light and all their design and requirements documents AFTER the original delivery deadline has passed. Meanwhile, all those layers – and I don’t exclude implementation in this regard – are often too busy building their own little walled-off fiefdoms rather than working together as an actual team.
Basically, these are the managers who think they’re all-important: once they have some requirements, they just shove them into a room with developers, and the system will magically come out on the other end. Both Agile development and the No Silver Bullet article try to teach management that software development is a team sport, and that management should work WITH the developers/architects, not against them. As someone once said: software development is not rocket science. If only it were that simple.
Another interesting one (responding to the notion that machine language and assembly are ‘outdated’ and not a required skill for a modern developer):
The huge difference is that we no longer use punchcards, so learning how punchcards work is mostly a historic curiosity.
On the other hand every single program you write today, be it Haskell, JavaScript, C#, Swift, C++, Python, etc, would all ultimately be compiled to or run on top of some code that still works in binary/assembly code. If you want to fully understand what your program is doing, it’s wise to at least be able to read assembly. (And if you can read and understand it, it’s not a big stretch to then be able to modify it.)
And really, most of the skill in reading assembly isn’t the assembly itself. It’s in understanding how computers and operating systems actually work, and due to Leaky Abstraction (https://en.wikipedia.org/wiki/Leaky_abstraction) it’s often the case that abstractions break down, and you need to look under the curtain. This type of skill is still pretty relevant if you do computer security related work (decompiling programs would be your second nature), or if you do performance-sensitive work like video games or VR or have sensitive real-time requirements (needing to understand the output of the compiler to see why your programs are not performing well).
Very true! We still use machine code and assembly language in today’s systems. And every now and then some abstraction WILL leak such details. I have argued that before in this blogpost.
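As a small illustration of what ‘looking under the curtain’ means in practice (my own hypothetical example, nothing from the discussion): even trivial C code leaks details of the underlying machine, and standard tools let you inspect what the compiler actually emits.

```c
#include <stdio.h>

/* Compile with 'gcc -S -O2' (or disassemble the binary with 'objdump -d')
   to see what the compiler really generates for this: on x86 it is
   typically a shift or an lea, not an actual multiply instruction. */
int scale(int x)
{
    return x * 8;
}

/* The abstraction also leaks at the source level: struct layout follows
   the machine's alignment rules, so on typical platforms this struct is
   8 bytes, not 5. */
struct padded {
    char c;   /* 1 byte                      */
    int  i;   /* 4 bytes, aligned to 4 bytes */
};

int main(void)
{
    printf("scale(5) = %d\n", scale(5));
    printf("sizeof(struct padded) = %zu\n", sizeof(struct padded));
    return 0;
}
```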
Which brings me to the next one:
We can celebrate the skill involved without celebrating the culture that makes those skills necessary. I’d rather not have to patch binaries either, but I can admire the people who can do it.
A common misunderstanding about the blogpost I mentioned above is that people mistook my list of skills for a list of ‘best practices’. No, I’m not saying you should base all your programming work around these skills. I’m saying that these are concepts you should master to truly understand all important aspects of developing and debugging software.
This is also a good point:
My point is: software engineering back in the days might not have all those fancy tools and “best practises” in place: but it was an art, and required real skills. Software engineering skills, endurance, precision and all that. You had your 8 KB worth of resources and your binary had to fit into that, period.
I am not saying that I want to switch my code syntax highlighter and auto-completion tools and everything, and sure I don’t want to write assembler
But I’m just saying: don’t underestimate the work done by “the previous generations”, as all the lessons learned and the tools that we have today are due to them.
If you learnt coding ‘the hard way’ in the past, you had to hone your skills to a very high level to even get working software out of the door. People should still strive for such high levels today, but sadly, most of them don’t seem to.
And again:
Just as frustrating is that quite a few developers have this mania with TDD, Clean Architecture, code review processes etc. without really understanding the why. They just repeat the mantras they’ve learnt from online and conference talks by celebrity developers. Then they just produce shitty code anyway.
And the response to that:
A thousand times this. Lately I have a contractor giving me grief (in the form of hours spent on code reviews) because his code mill taught him the One True Way Of Coding.. sigh.
As I said before: understand the ideas behind the processes. Understanding the processes and the thinking behind them makes you a much better developer, and allows you to apply the processes and ideas in the spirit their initiators intended, for best effect. And I cannot repeat it often enough: there is no silver bullet! No One True Way Of Coding!
Well, that’s it for now. I can just say that I’m happy to see I’m not quite alone in my thoughts on software development. On some forums you only see younger developers, and they generally all have the same, dare I say, naïve outlook on development. I tend to feel out of place there. I mostly discuss programming on vintage/retro-oriented forums these days, since they are generally populated with older people and/or people with a more ‘oldskool’ view on development, and years of hands-on experience. They’ve seen various processes and tools come and go, usually without yielding much result. The common factor in quality has always been skilled developers. It is nice to see so many ‘old warriors’ also hanging out on Ars Technica.
And again, I’d like to stress that I’m not saying that new tools or processes are bad. Rather that there’s no silver bullet, no One True Way of Coding. Even with the latest tools and processes, humans can and will find ways to make horrible mistakes (and conversely, even many moons ago, long before current languages, tools and processes had been developed, there were people who wrote some great software as well). Nothing will ever replace experience, skill and just common sense.
No, you’re definitely not alone.
Knowing how to code in assembly helped me immensely in understanding how pretty much every system/language I examined works.
It also makes me feel completely out of place when all of the new people (and “managers”) are taking well-proven practices and workflows, only to replace them with the latest trends. When the results are underwhelming, they make up all sorts of excuses. It’s never their fault, after all: they’re following exactly what they read on the internet. The internet is never wrong.
I may have to find a place where more of the aforementioned “old warriors” hang out, if only to preserve that little bit of sanity I have left.
Oh yea, that kind of naïveté is the worst. Just do things ‘by the book’ (or the net in this case), and everything will always turn out right? No, and that’s what experience, skill and common sense should teach you.
Or, what’s even worse… They’re doing Scrum, and it’s not working right. So they hire some Scrum-consultant, who comes in for an hour or so, collects a huge pay-check, and then just mumbles things like ‘Scrumbut, use more Scrum’. In this way, Scrum has become a great way to make money… Scrum-consultants, Scrum-certification etc. Nothing to do with actually writing good software, but lots of people are getting very rich off it.
There’s this commercial about a toilet cleaner, Toilet Duck. It says: “We, the people at Toilet Duck, recommend Toilet Duck”.
That’s what this Scrum-consultancy is all about.
Most of the time that happens because the guides and examples are very often situated in a controlled environment. I always find myself in conflict with the manager when he says things like: “In an ideal world we should do things this way”, “Ideally we should apply this technique” and so on…
There’s no such thing as the “ideal world”: every project is different, with new challenges and new problems that require a certain degree of adaptability. That’s why experience is important.
Yup, that’s what I also argue: the examples are usually trivial and highly unrealistic. It works *there*, but that doesn’t mean it works *everywhere*.
Another one is managers not understanding risk. Such as: “Okay, you want me to add this new functionality to this outdated codebase? I can either spend a week or two on refactoring the code, so we have a robust state-machine in one place, and we can add new functionality in a controlled and low-risk way. Or, I can just try to hack it in, with a huge risk of making the code unstable and/or never getting the new functionality to work quite right”.
They choose the latter, then after two weeks you signal: “Okay, I couldn’t hack it in properly, we need to refactor anyway to get things under control”. But they won’t accept that. What part of “huge risk” didn’t you understand?
Also, they seem to think you’re the enemy. I’m not, I *want* to make the code work, but I can’t do it the quick-and-dirty way in this case, because the people who worked on the code before me have already made too much of a mess (probably for the same reason: pushy managers trying to cut costs, without any regard for long-term effects).
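For what it’s worth, the refactor I had in mind is nothing exotic. A minimal, hypothetical sketch (the states and events are made up for illustration): keep all transitions in one place, so new functionality is added by extending a single function instead of sprinkling flags and special cases throughout the code.

```c
#include <stdio.h>

/* All state transitions live in one function, so adding a new state or
   event is a controlled, local change. States/events are hypothetical. */
typedef enum { ST_IDLE, ST_CONNECTING, ST_CONNECTED, ST_ERROR } state_t;
typedef enum { EV_OPEN, EV_OPENED, EV_FAIL, EV_CLOSE } event_t;

static state_t next_state(state_t s, event_t e)
{
    switch (s) {
    case ST_IDLE:       if (e == EV_OPEN)   return ST_CONNECTING; break;
    case ST_CONNECTING: if (e == EV_OPENED) return ST_CONNECTED;
                        if (e == EV_FAIL)   return ST_ERROR;      break;
    case ST_CONNECTED:  if (e == EV_CLOSE)  return ST_IDLE;       break;
    case ST_ERROR:      if (e == EV_CLOSE)  return ST_IDLE;       break;
    }
    return s;   /* unknown event in this state: stay where we are */
}

int main(void)
{
    state_t s = ST_IDLE;
    s = next_state(s, EV_OPEN);
    s = next_state(s, EV_OPENED);
    printf("final state: %d\n", s);   /* 2 == ST_CONNECTED */
    return 0;
}
```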
They start seeing you as the enemy, in my experience, almost as soon as they stop understanding the technical side of whatever is going on. The lack of understanding means lack of control and it probably triggers some kind of fear-based reaction.
Which is sad, since you should be operating as a team, based on mutual trust.
It’s a shame that even managers who think they apply Agile/Scrum act that way, since one of the 12 principles behind the Agile Manifesto is this:

Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
Motivation, support and trust. When you second-guess your developers, you’re not doing any of these things. Overruling developers by choosing a high-risk option doesn’t really help either.
I believe the technical term is “beyond economic repair”, although “technical debt” seems more common these days. Not that many people actually understand it though.
Some people will ask “So, give me a list of all the technical debt we have”… You then have to explain that it’s not as simple as something being “technical debt” or not. Usually you don’t know until it’s already too late. E.g., you have some code that works fine, but after some OS or compiler update, it breaks. There is no way to predict that, really. Working code is not bug-free code.
Many people fail to understand how much software development is like scientific research. Sure, you can set a deadline for building a cold fusion reactor, but it’s going to be a worthless random number. Software development is usually less extreme, but typically code is being written to do something that simply hasn’t been done before, otherwise you’d be using something that’s been written already. Sometimes (not often) it’s necessary to say, OK, this can’t be done. Very often it’s impossible to predict problems and hurdles simply because no one can see them until you get closer — that is the research part: you have to build the first 80% to identify the roadblocks in the next 20%.
I have the same experience with “technical debt”. Looking at a piece of code gives some sense of how problematic it is, but it’s generally not something that can be evaluated up front. Existing code breaks because something changed, and predicting such changes would require predicting the future. That is why writing maintainable code is so important.
Sometimes I think of a former coworker who had an incredible knack for writing code which worked, except it was completely mis-designed and written in a totally unmaintainable way. Clueless developers normally fail to produce anything functional, but he managed to write code which did work, yet had to be thrown away if anyone later tried to fix or change anything.
Yes, a related phenomenon I’ve run into has to do with Scrum and its sprint planning, I believe. That is, on several occasions I discovered shared components where something was ‘sorta’ implemented, but not quite. It distinctly looked like they implemented just enough of the functionality to cover their specific use-case, but not enough for anyone else who wanted to use it. There probably wasn’t enough time in the sprint to finish it off. And since the sprint was closed, nobody took a second look at the code until some other project tried to use the functionality, and had to spend unplanned time fixing it first.
In this case it was a set of field names, which were stored in a JSON configuration file. These field names would be added to the underlying SQLite database automatically. It worked the first time. However, when you added new fields to the configuration file after the database had already been created, everything would break.
When I studied the code, I noticed that someone had actually spent time trying to implement code that would support changes to the configuration file. However, that code made use of an internal list of fields, which was sorted alphabetically. This did not correspond with the field order of the SQLite queries (which was in database order, i.e. the order in which the fields were added). So I had to rewrite that. Until then, the ‘workaround’ had always been to delete and recreate the SQLite db whenever the configuration was changed. Of course you could never do that in production, because you’d lose all the data.
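For illustration, a minimal sketch of the fix using the SQLite C API (the table name records is hypothetical, and I assume the field names have already been parsed out of the JSON configuration into an array): enumerate the columns that actually exist via PRAGMA table_info, and add any configured field that is missing, instead of comparing against an alphabetically sorted internal list.

```c
#include <stdio.h>
#include <string.h>
#include <sqlite3.h>

/* Sketch only: 'records' is a hypothetical table name, and the field names
   are assumed to come from the (trusted) JSON configuration file. */
static int ensure_fields(sqlite3 *db, const char **fields, int nfields)
{
    for (int i = 0; i < nfields; i++) {
        sqlite3_stmt *stmt = NULL;
        int found = 0;

        /* Walk the columns that actually exist, in database order. */
        if (sqlite3_prepare_v2(db, "PRAGMA table_info(records)", -1, &stmt, NULL) != SQLITE_OK)
            return -1;
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            /* Column 1 of table_info is the column name. */
            const char *name = (const char *)sqlite3_column_text(stmt, 1);
            if (name != NULL && strcmp(name, fields[i]) == 0) {
                found = 1;
                break;
            }
        }
        sqlite3_finalize(stmt);

        /* Add the configured field if it is not there yet. */
        if (!found) {
            char sql[256];
            snprintf(sql, sizeof(sql), "ALTER TABLE records ADD COLUMN %s TEXT", fields[i]);
            if (sqlite3_exec(db, sql, NULL, NULL, NULL) != SQLITE_OK)
                return -1;
        }
    }
    return 0;
}

int main(void)
{
    sqlite3 *db;
    const char *fields[] = { "name", "address" };   /* hypothetical configured fields */

    if (sqlite3_open("data.db", &db) != SQLITE_OK)
        return 1;
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY)", NULL, NULL, NULL);
    ensure_fields(db, fields, 2);
    sqlite3_close(db);
    return 0;
}
```

This way the database catches up with the configuration file, without having to delete and recreate it.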