Am I a software architect?

In the previous article, I tried to describe how I see a software architect, and how he or she would operate within a team or organization. One topic I also wanted to discuss is the type of work a software architect would actually do. However, I decided to save that for a later article, so that is where we are today.

As you could read in the previous article, ‘software architecture’ is quite vague and ‘meta’. Software architecture happens at a high level of abstraction by nature, so any attempt to describe it will remain somewhat vague. And that’s not just me. I followed a Software Architecture course at university, and it was just as vague, although they did make you aware of this. The course concentrated entirely on a case study: requirements and constraints had to be extracted from the input of various parties, and had to be translated into an architecture document (various diagrams and explanations at various levels, some aimed at the customer, some at end-users, some at developers etc., plus various usage scenarios and a risk analysis/evaluation of the architecture). No actual programming was involved.

That is somewhat artificial of course. The architect’s work is rarely that cut-and-dried in practice. Firstly, not all software can just be designed ‘on paper’ like that. Or at least, the chances of getting it right the first time would be slim. I find that when I need to design new software, I am often introduced to technologies that I have not used before, so I do not know beforehand how they will behave in an architecture. To give a simple example: if you were to design a C# application, you might make heavy use of multithreading and Tasks (thread pooling). But in a browser-based JavaScript application, the use of threading is very limited, so you would choose a different route for your design.

Secondly, in practice, especially these days with Agile, DevOps, cross-functional teams and all that, the architect is generally also a team member, and will also participate in development, as I’ve already mentioned in the previous article.

So let’s make all this a little more practical. The guideline I use is that the architect “solves problems”. And I mean that quite literally. That is, the architect takes the high-level problem and translates it into a working solution. My criterion here is that the problem is “solved”, as in: once the code has been written, the problem is no longer a problem. There is now a library, framework or toolset that handles the problem in a straightforward way. (I’m sure most developers have encountered those ‘headache’ products that just keep generating new bugs, no matter how much you try to fix them, and that never performed satisfactorily to begin with. A real architect delivers problem-free products.)

That does not necessarily mean that the architect writes the actual code. It means that the architect reduces the high-level abstraction and complexity, and translates it to the level required for the development team to build the solution. What level that is exactly depends on the capabilities of the team. Worst-case, the architect will have to write some of the code entirely by himself.

But if you think about it that way, as a developer you are using such libraries, frameworks and toolsets all the time. You don’t write your own OS, compiler, database, web browser etc. Other people have solved these problems for you. Problems that may be too difficult for the average developer to solve by themselves anyway. Each of these topics requires highly skilled developers. And even then, these developers may be skilled in only one or a few of these topics at best. That is, it’s unlikely for an OS expert to also be an expert in database, compiler or browser technology. People tend to specialize in a few topics, because there’s just not enough time to master everything.

So that is how I see the role of architects in practice: these are the experts who create the ‘building blocks’ for the topics your company specializes in.

Perhaps it is interesting to take some topics that I have discussed earlier on this blog. If you look at my retro programming, you will find that quite a bit of it has the characteristics of research and development. That is, a lot of it is ‘off the beaten path’, and is about things that are not done often, or perhaps have not been done at all before.

The goal is to study these things, and get them under control. Beat them into submission, so to say. Take my music player for example. It started as the following problem:

“Play back VGM files for an SN76489 sound chip on a PC”

You can start with the low-hanging fruit, by just taking simple VGM files with 50 Hz or 60 Hz updates, and focusing on reasonably powerful PCs. Another developer had actually developed a VGM player like that, but ran into problems with VGM files taken from certain SEGA games, which used samples:

The problem with sampled data in VGM files is that each sample write is preceded by a delay value, so playback does not appear to run at a fixed rate. And in fact, the data may have been optimized so that the rate really is not fixed at all. That is, the SN76489 will keep playing the last value written indefinitely. So if your sample data contains two or more consecutive samples of the same value, it can be optimized by writing only the first one, and adding up the delays. Various VGM tools will optimize the data in this way.
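
To make that concrete, here is a minimal sketch in C of such a merge pass, assuming a hypothetical, simplified event record (a sample value plus a delay) rather than the actual VGM command format:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical, simplified sample stream: each event writes one
       value to the SN76489 and then waits 'delay' ticks (at 44100 Hz)
       before the next write. */
    typedef struct {
        uint8_t  value;  /* sample value written to the chip */
        uint32_t delay;  /* ticks to wait after the write    */
    } SampleEvent;

    /* The chip keeps playing the last value written, so runs of equal
       values can be collapsed into one write with the delays summed.
       Compacts the array in place and returns the new event count. */
    size_t merge_duplicate_samples(SampleEvent *ev, size_t count)
    {
        size_t out = 0;
        for (size_t i = 0; i < count; i++) {
            if (out > 0 && ev[out - 1].value == ev[i].value)
                ev[out - 1].delay += ev[i].delay;  /* extend previous event */
            else
                ev[out++] = ev[i];                 /* keep this event       */
        }
        return out;
    }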

The other guy had trouble getting the samples to play properly at all, even on a faster system. VGM allows for 44.1 kHz data at maximum, so the straightforward way would be to fire a timer interrupt at 44.1 kHz and perform some internal bookkeeping to handle the VGM data. If your machine is fast enough (a fast 386/486 at least), it can work, but it is very brute-force. Especially when compared to the Sega Master System the game came from, which only has a Z80 CPU at 4 MHz.
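
To give an idea of what ‘firing a timer interrupt at 44.1 kHz’ means on PC hardware: the 8253/8254 timer is fed a divisor of its 1,193,182 Hz input clock, so the brute-force player would do something like the sketch below (assuming a 16-bit DOS compiler that provides outp(); this is not the actual player code):

    #include <conio.h>  /* outp() in 16-bit DOS compilers (Borland/Watcom) */

    #define PIT_CLOCK 1193182L  /* input clock of the 8253/8254 timer, in Hz */

    /* Program PIT channel 0 so IRQ0 fires at roughly the given rate.
       At 44100 Hz the divisor is 1193182 / 44100 = 27, i.e. an interrupt
       every ~22.6 microseconds, which is why this approach overwhelms
       slow CPUs. */
    void set_timer_rate(long hz)
    {
        unsigned divisor = (unsigned)(PIT_CLOCK / hz);
        outp(0x43, 0x36);                  /* channel 0, lo/hi byte, mode 3 */
        outp(0x40, divisor & 0xFF);        /* low byte of divisor           */
        outp(0x40, (divisor >> 8) & 0xFF); /* high byte of divisor          */
    }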

So I thought there was a nice challenge: if a Z80 at 4 MHz can play this, then so should an 8088 at 4.77 MHz. And the machine shouldn’t just be able to play the data by hogging the CPU; it should be able to play the data in the ‘background’, that is, using interrupts.

That is the problem I tried to solve, and as you see, I also drew some diagrams to try and explain the concept at a higher level. Once I had the concept worked out, I could just write the preprocessor and player, and the problem was solved.

I later re-used the same idea for a MIDI player. Since the basic problem of timing and interrupts was already solved, it was relatively easy to just write an alternative preprocessor that interpreted MIDI data instead of VGM data. Likewise, modifying the player for MIDI data instead of SN76489 was also trivial.

And these players also made use of the solutions to two other problems I had tackled previously. The first is the auto-end-of-interrupt feature of the 8259 chip, which I discussed earlier. Because I solved that problem by creating easy-to-use library functions and macros, it was trivial to add them to any program, such as this player.
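
For reference, switching the PC/XT’s single 8259 to auto-EOI mode comes down to re-initializing it with the AEOI bit set in ICW4. A sketch of the idea (not my actual library code), to be run with interrupts disabled:

    #include <conio.h>  /* inp()/outp() in 16-bit DOS compilers */

    /* Re-initialize the single 8259 of a PC/XT with auto-EOI enabled,
       so interrupt handlers no longer have to send an EOI themselves.
       The ICW values follow the standard PC/XT setup, except for the
       AEOI bit in ICW4. */
    void pic_enable_auto_eoi(void)
    {
        unsigned char mask = inp(0x21); /* save current interrupt mask     */
        outp(0x20, 0x13); /* ICW1: edge-triggered, single PIC, ICW4 needed */
        outp(0x21, 0x08); /* ICW2: IRQ0..7 -> interrupt vectors 08h..0Fh   */
        outp(0x21, 0x03); /* ICW4: 8086 mode + auto-EOI                    */
        outp(0x21, mask); /* restore mask (OCW1)                           */
    }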

The second is the disk streaming, which I also discussed earlier. Again, since the main logic of using a 64k ring buffer and firing off 32k reads with DMA in the background was already solved, it was relatively easy to add it to a VGM or MIDI player. This in turn worked around the memory limitations (neither VGM nor MIDI data is very compact, and more complex files can easily go beyond 640k).
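
Conceptually, the consumer side of that ring buffer looks something like the sketch below. It is a simplification: start_background_read() is a placeholder for the real DMA-driven floppy transfer, and the details differ in the real player:

    #include <stdint.h>

    #define RING_SIZE 0x10000UL  /* 64k ring buffer (one real-mode segment) */
    #define HALF_SIZE 0x8000U    /* refilled in 32k halves                  */

    /* Placeholder for the real background transfer: on the PC this is a
       floppy read that the DMA controller completes while the CPU keeps
       running the player. */
    void start_background_read(uint8_t *dest, unsigned len);

    static uint8_t  ring[RING_SIZE];
    static uint32_t read_pos;  /* player's consume position, wraps at 64k */

    /* Fetch the next byte of song data. Whenever the player crosses from
       one 32k half into the other, the half it just finished consuming is
       handed back to the disk routine for refilling. */
    uint8_t next_byte(void)
    {
        uint8_t b = ring[read_pos];
        read_pos = (read_pos + 1) & (RING_SIZE - 1);

        if ((read_pos & (HALF_SIZE - 1)) == 0) {
            /* just left a half: refill it while consuming the other one */
            unsigned freed = (read_pos == HALF_SIZE) ? 0 : HALF_SIZE;
            start_background_read(ring + freed, HALF_SIZE);
        }
        return b;
    }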

So, to sum up, here are a few characteristics that apply to a software architect:

  • Able to extract requirements and constraints from the stakeholders’ wishes/descriptions
  • Able to determine risks and limitations
  • Able to translate the problem into a working solution
  • Able to find the best/fastest/most optimal/elegant solution
  • Able to explain the problem and solution to the various stakeholders at the various levels of abstraction/knowledge required
  • Able to explain the solution to other developers so that they can build it
  • Able to actually ‘solve’ the problem by creating a library/framework/toolset/etc that is easy to (re)use for other developers

Politics

Here’s another thing I’d like to add to the previous article. I talked about how a software architect would work together with project managers, developers and other departments within the organization. I would like to stress how important it is that you can actually work together. I have been in situations where I had the title of architect on paper, and was expected to be responsible for various things, but the company had a culture where managers decided everything, often without even informing me, let alone consulting me.

If you ever find yourself in such a situation, then RUN. The title of architect is completely meaningless if you do not actually have a say in anything. How can you be responsible for the development of software when you have no control over the terms under which that software is to be developed? And of course, managers never understand (or at least won’t admit) that it’s often their decisions that lead to projects not meeting their targets.

Take some of the things described above, for example, such as gathering requirements/constraints and doing a risk assessment. These things take time, and that time has to be allocated in the project’s life cycle. If a project manager can just decide that you do not get any time to prepare, and that you have to start developing right away because they have already set a deadline with the client, there’s not much you can do. Except RUN.


Who is a software architect? What is software architecture?

After the series of articles I did on software development a while ago, I figured that the term ‘Software Architect’ is worth some additional discussion. I have argued that Software Engineering may mean different things to different people, and the same goes for Software Architecture. There does not appear to be a single definition of what Software Architecture is, what a software architect does and does not do, or what kind of abilities, experience and responsibilities a software architect should have.

In my earlier article, I described it as follows:

Software Engineering seemed like a good idea at the time, and the analogy was further extended to Software Architecture around the 1990s, by first designing a high-level abstraction of the complex system, trying to reduce the complexity and risks, and improving quality by dealing with the most important design choices and requirements first.

I stand by that description, but it already shows that it’s quite difficult to determine where Software Engineering ends and Software Architecture starts, as I described architecture as an extension of engineering. So where exactly does engineering stop, and where does architecture start?

I think that in practice, it is impossible to tell. Also, I think it is completely irrelevant. I will try to explain why.

Two types of architects

If you take my above description literally, you might think that software architects and software engineers are different people, and their workload is mutually exclusive. That is not the way I meant it though. I would argue that the architecture is indeed an abstraction of the system, so it is not the actual code. Instead it is a set of documents, diagrams etc, describing the various aspects of the system, and the philosophies and choices behind it (and importantly, also an evaluation of possible alternatives and motivation why these were not chosen).

So I would say there is a clear separation between architecture and engineering: architecture stops where the implementation starts. In the ideal case at least. In practice you may need to re-think the architecture at a certain phase during implementation, or even afterwards (refactoring).

That however does not necessarily imply that there is a clear separation between personnel. That is, the people specifying the architecture are not necessarily mutually exclusive with the people implementing it.

The way I see it, there are two types of architects:

  1. The architect who designs the system upfront, documents it, and then passes on the designs to the team of engineers that build it.
  2. The architect who works as an active member of the team of engineers building the system.

Architect type 1

The first type of architect is a big risk in various ways, in my opinion. When you design a system upfront, can you really think of everything, and get it right the first time? I don’t think you can. Inevitably, you will find things during the implementation phase that may have to be done differently.

When the architect is not part of the actual team, it is more difficult to give feedback on the architecture. There is the risk of the architect appearing to be in an ‘ivory tower’.

Another thing is that when the architect only creates designs, and never actually writes code, how good can that architect actually be at writing code? It could well be that his or her knowledge of actual software engineering is quite superficial, and not based on a lot of hands-on experience. This might result in the architect reading about the latest-and-greatest technologies, but only having superficial understanding, and wanting to apply these technologies without fully understanding the implications. This is especially dangerous since usually new technologies are launched with a ‘marketing campaign’, mainly focusing on all the new possibilities, and not looking at any risks or drawbacks.

Therefore it is important for an architect to be critical of any technology, and to be able to take any information with a healthy helping of salt, cutting through all the overly positive and biased blurb, and understanding what this new technology really means in the real world, and more importantly, what it doesn’t.

The superficial knowledge in general might also lead to inferior designs, because the architect cannot think through problems down to the implementation detail. They may have superficial knowledge of architectural and design patterns, and they may be able to ‘connect the dots’ to create an abstraction of a complete system, but it may not necessarily be very good.

Architect type 2

The second type is the one I have always associated with the term ‘Software Architect’. I more or less see an architect as a level above ‘Senior Software Engineer’. So not just a very good, very experienced engineer, but one of those engineers with ‘guru’ status: the ones you always go to when you have difficult questions, and they always come up with the answers.

I don’t think just any experienced software engineer can be a good architect. You do need to be able to think at an abstract level, which is not something you can just develop by experience. I think it is more the other way around: if you have that capability of abstract thought, you will develop yourself as a software engineer to that ‘guru’ level automatically, because you see the abstractions, generalizations, connections, patterns and such, in the code you write, and the documentation you study.

I think it is important that the architect works with the team on implementing the system. This makes it easier for team members to approach him with questions or doubts about the design. It also makes it easier for the architect to see how the design is working in practice, and new insights might arise during development, to modify and improve the design.

Once an architect stops working hands-on with the team, he will get out of touch with the real world and eventually turn into architect type 1.

I suppose this type of architect brings us back to the earlier ’10x programmers’ as I mentioned in an earlier blog. I think good architects are reasonably rare, just like 10x programmers. I am not entirely sure to what extent 10x programmers are also architects, but I do think there’s somewhat of an overlap. I don’t think that overlap is 100% though. That is, as mentioned earlier, there is more to being a software architect than just being a good software engineer. There are also various other skills involved.

The role of an architect

If it is not even clear what an architect really is or isn’t, then it is often even less clear what his role is in an organization. I believe this is something that many organizations struggle with, especially ones that try to work according to the Scrum methodology. Allow me to give my view on how an architect can function best within an organization.

In my view, an architect should not work on a project-basis. He should have a more ‘holistic’ view. On the technical side, most organizations will have various projects that share similar technology. An architect can function as the linking pin between these projects, and make sure that knowledge is shared throughout the organization, and that code and designs can be re-used where possible.

At the management level, it is also beneficial to have a ‘linking pin’ from the technical side, someone who oversees the various projects from a technical level. Someone who knows what kind of technologies there are available in-house, and who has knowledge of or experience in certain fields.

Namely, one of the first things in a new project is (or should be) to assess the risks, cost and duration. The architect will be able to give a good idea of the technology that is already available for re-use, as well as the people best suited for the project. Building the right team is essential for the success of a project. There is also a big correlation between the team and the risks, cost and duration. A more experienced team will need less time to complete the same tasks, and may be able to do it better, and therefore with less risk, than a team of people who are inexperienced with the specific technology.

Since architects are scarce resources, it would not be very effective to have an architect work on one project full-time. Their unique skills and knowledge will not be required on a single project all the time, and meanwhile those skills cannot be applied to other projects where they may be needed. So it is a waste to make an architect work as a ‘regular’ software engineer full-time.

With project managers it seems more common that they only work on a project for part of their time, and that they run multiple projects at a time. They balance their time between projects on an as-needed basis. For example, at the start of a new project, there may be times where 100% of their time is spent on a single project, but once the project is underway, sometimes a weekly meeting is enough to remain up to date. I think an architect should be employed in much the same way. Their function inside a project is very similar, the main difference being that a project manager focuses on the business side of things, where the architect focuses on the technical side.

This is where the lead developer comes in. The lead developer will normally be the most experienced developer in a team. It is the task of the lead developer to make sure the team implements the design as specified by the architect. So in a way the lead developer is the right-hand man (or woman) of the architect in the project, taking care of day-to-day business, in the technical realm.

The project manager will need to work both with the lead developer and the architect. As long as the project is on track, it will be mainly the project manager being informed by the lead developer of day-to-day progress. But whenever problems arise, the current project planning may no longer be valid or viable. In that case, the architect should be an ‘inhouse consultant’, where the project manager and lead developer ask the architect to analyse the problems, and to determine the next course of action. Was the planning too optimistic, and should milestones be moved further into the future, at a more realistic pace? Did the team get stuck on a programming problem that the architect or perhaps some other expert can assist them with? Does the design not work as intended in practice, and does the architect need to work with the team to modify the design and rework the existing code?

The holistic position of the architect also allows him to look at other development-related areas, such as coding standards, build tools etc. And the architect can keep up with the state of technology from research or competing companies, and help plan the strategy of a product line. Likewise, the architect can always be consulted during the sales process. Firstly to answer technical questions about existing products. Secondly, when a potential customer wants certain functionality that is not yet available, the architect can give an expert opinion on how feasible/costly it will be to implement that functionality, and what kind of resources/personnel it would take. In the more complex/risky cases, the architect might spend a few weeks on a feasibility study, possibly developing a proof-of-concept/prototype in the process.

Finally, I would like to reference a few resources that are somewhat related. Firstly, here is an interesting article discussing strategy: https://www.mindtheproduct.com/2018/03/growing-up-lean/

As I argued in my earlier blogs on Agile/Scrum… these methods are often not used correctly in practice. One of the reasons is because people try to predict things too far into the future. This article describes a very nice approach to planning that is not focused on timelines/deadlines/milestones as much, but on current, near term and future time horizons, where the scope is different for each term. The further things are away, the less detail you have. Which is good, because you can’t predict in detail anyway.

The second one is the topic of Software Product Lines: https://en.wikipedia.org/wiki/Software_product_line

It is a great way to look at software development. It goes well beyond just ‘design for change’, and the analogy with product lines in (car) manufacturing may make the way of thinking more concrete than ‘design for change’ alone.


AMD Bulldozer: It’s time to settle

As you may remember, AMD’s Bulldozer has always been somewhat controversial, for various reasons. One thing in particular was that AMD claimed it was the ‘first native 8-core desktop processor’. This led to a class-action lawsuit, because consumers thought this was deceptive advertising.

I think they have a point there. Namely, if Bulldozer was just like any other 8-core CPU out there, why would AMD spend all this time talking about their CMT architecture, modules and such? Clearly these are not just regular 8-cores.

AMD argued that the majority of consumers would have the same understanding of ‘core’ as AMD does in their CMT marketing. The judge basically said: “Well, we’d have to see about that”. This led to AMD wanting to settle, probably because AMD figured that the majority of consumers would NOT have the same understanding, if anyone were actually to investigate and survey consumers.

Which makes sense, because AMD is still a minor player in all this. Intel is the market leader, and they always marketed their SMT/HyperThreading CPUs as having ‘logical cores’ vs ‘physical cores’. The first Pentiums with HT were marketed as having a single physical core, and two logical cores. That is the standard that was set in the x86 world, which consumers would be familiar with. Intel has always stuck by that. The first Core i7s were marketed as having 4 physical cores and 8 logical cores (or alternatively 4 cores/8 threads). And AMD shot themselves in the foot here… With their marketing of CMT they are clearly implying that their CMT should be seen as more or less the same thing as SMT/HyperThreading. In fact, AMD actually argued that the OS needs a CMT-aware scheduler. Apparently a regular scheduler for a regular 8-core CPU didn’t work as expected.

So, the bottom line is that Bulldozer does not perform as you would expect from a regular 8-core CPU. And there’s enough of AMD’s marketing material around that shows that AMD knows this is the case, and that they felt there is a need to explain this, and also find excuses why performance may not meet expectations.

But you already know my opinion on the matter. I’ve written a number of articles on AMD’s Bulldozer and CMT back in the day, and I’ve always argued that it’s like a “poor man’s HyperThreading”:

This shows that HyperThreading seems to be a better approach than Bulldozer’s modules. Instead of trying to cram 8 cores onto a die, and removing execution units, Intel concentrates on making only 4 fast cores. This gives them the big advantage in single-threaded performance, while still having a relatively small die. The HyperThreading logic is a relatively compact add-on, much smaller than 4 extra cores (although it is disabled on the 2500, the HT logic is already present, the chip is identical to the 2600). The performance gain from these 8 logical cores is good enough to still be ahead of Bulldozer in most multithreaded applications. So it’s the best of both worlds. It also means that Intel can actually put 6 cores into about the same space as AMD’s 8 cores.

So here the difference between CMT and SMT becomes quite clear: with single-threading, each thread has more ALUs with SMT than with CMT. With multithreading, each thread effectively has fewer ALUs than with CMT.

And that’s why SMT works, and CMT doesn’t: AMD’s previous CPUs also had 3 ALUs per thread. But in order to reduce the size of the modules, AMD chose to use only 2 ALUs per thread this time. It is a case of cutting off one’s nose to spite one’s face: CMT struggles in single-threaded scenarios, compared to both the previous-generation Opterons and the Xeons.

At the same time, CMT is not actually saving a lot of die space: there are 4 ALUs in a module in total. And yes, obviously, when you have more resources for two threads inside a module, and the single-threaded performance is poor anyway, one would expect it to scale better than SMT.

But what does CMT bring, effectively? Nothing. Their chips are much larger than the competition’s, or even their own previous generation. And since the Xeon is so much better at single-threaded performance, it can stay ahead in heavy multithreaded scenarios, despite the fact that SMT does not scale as well as CMT or SMP. But the real advantage of SMT is that it is a very efficient solution: it takes up very little die space. Intel could do the same as AMD does, and put two dies in a single package. But that would result in a chip with 12 cores, running 24 threads, and it would absolutely devour AMD’s CMT in terms of performance.

Or perhaps an analogy can make it clearer. Both SMT and CMT partially share some resources between multiple ‘cores’ as they are reported to the OS. As I said, Intel calls them ‘logical’ cores, but you can also see them as ‘virtual cores’.

The analogy then is virtual machines: you can take a physical machine, and use virtualization hardware and software to run multiple virtual machines on that single physical machine. Now, if you were to pay for two physical servers, and you were actually given a single physical server, with two virtual machine instances, wouldn’t you feel deceived? Yes, to the software it *appears* like two physical servers. But performance-wise it’s not entirely the same. The two virtual machines will be fighting over the physical resources that are shared, which they would not if they were two physical machines. That’s pretty much the situation here.

All I can say is that it’s a shame that Bulldozer is still haunting AMD now that they are back on track with Zen, which is a proper CPU with a proper SMT implementation, so they no longer need marketing that makes their CPUs sound like they have more physical cores than they actually do.


Steinberger guitars and basses: less is more

Time for a somewhat unusual blogpost. As you may know, aside from playing with software and hardware, I also dabble in music, mostly with (electric) guitars. I recently bought a new guitar, a Steinberger GT-PRO “Quilt Top” Deluxe in Wine Red:

There is quite a story behind this. I think there are various parallels with my usual blogs, such as the fact that this guitar design dates from the 1980s, and that a lot of engineering and optimization went into it.

For those of you who aren’t familiar with guitars, I have to say that the guitar world is generally quite conservative. Revolutionary guitar designs do not come along every year, or in fact not even every decade. I think roughly speaking, there are only a few major revolutions in the history of the guitar:

  1. The move from using ‘catgut’ strings (actually nylon on modern guitars) to using steel strings. The ‘original’ guitar, commonly known as the classical or Spanish guitar, used strings made from animal intestines, as did most other classical stringed instruments (violin, cello, double bass etc). At some point in the early 1900s, a new type of guitar, designed for steel strings instead, was developed by C.F. Martin & Company. These are also known as ‘western guitars’. They have a very different shape of body and neck. The body is generally larger, which, together with the steel strings, allows for more volume. The neck is longer and thinner than that of a classical guitar.
  2. The development of electric amplification in 1931. Now that many guitars had steel strings, it was possible to develop a ‘pickup’ module that uses electromagnetic induction, which can be placed under the strings, to convert the vibration of the strings to an electric signal, which can be sent to an amplifier and speaker. These pickups are simple passive circuits, with just magnets and a coil of copper wire. They are still used today in pretty much the same form.
  3. The ‘solidbody’ electric guitar, developed by Les Paul in 1940. As guitars became louder because of amplification, feedback became an issue: the acoustic body and strings of the guitar would resonate uncontrollably from the vibrations generated by the amplifier and speaker, creating a feedback loop. Les Paul solved this by using a solid piece of wood for the guitar body. The body had lost its function as an acoustic amplifier anyway, now that there was electric amplification. A solid piece of wood was much more resistant to feedback.
  4. The Stratocaster guitar, developed by Leo Fender in 1954. It had many interesting ideas, including a body shape that no longer resembled a traditional guitar, but was contoured for ergonomic purposes. It also introduced a new type of ‘tremolo’ system. I will get into more detail on this guitar later.

And I think that is pretty much it. At least, as far as complete guitars go. There have been small innovations on certain parts of the guitar, but generally these are considered optional extras or aftermarket upgrades, and do not significantly alter the guitar as we know it. In fact, the Fender Stratocaster remains one of the most popular guitars to this day, as does the Gibson Les Paul (originally from 1952, but the models with humbucking pickups and sunburst finish made between 1958 and 1960 became the archetypal model), and most players still use these guitars in more or less the same form as they were originally launched in the 1950s. Most other guitars are also just slight variations on these original guitars, and are mainly different in shape or choice of woods, but do not differ significantly in terms of engineering.

Enter the 1980s

In the late 1970s and early 1980s, there was a big revolution in rock/metal. Namely, we entered the era of the guitar hero, ushered in by Eddie van Halen. What made Eddie van Halen somewhat unique is that he built his own guitar from a combination of aftermarket parts and parts he ‘borrowed’ from other guitars. He used a Gibson humbucker pickup and put it into a Strat-style guitar. He also was a very early adopter of the new Floyd Rose double-locking tremolo:

His guitar became known as the Frankenstrat. It became the template of the ‘Superstrat’ guitar, and various guitar companies, including Ibanez, Jackson, Charvel and ESP, would jump into this market by offering Superstrat-style guitars straight from the factory.

This was also the time in which I grew up, and this type of virtuoso playing was what attracted me most. As a result, a Superstrat guitar seemed like the ultimate guitar for me, because it basically offered you the best engineering and the most features. You got the best tremolo systems, the most advanced pickup configurations, guitar necks that were designed for optimum playability (generally thin, wide necks with a relatively flat profile and very large, aka ‘jumbo’ frets), combined with the ergonomics of the Stratocaster body design. I have always been attracted to clever engineering and optimized designs, in any area of life.

Getting back to the Stratocaster

What made these Frankenstrats and Superstrats possible, is the vision of Leo Fender, I would say. Above I said the Stratocaster was one of the big revolutions in terms of guitar design. I have to add some nuance to that statement. Namely, Leo Fender designed another guitar before that, in 1950. The first model was a single-pickup one, known as the ‘Esquire’. He then introduced a two-pickup model, which was initially known as the ‘Broadcaster’. However, because the company Gretsch already sold a drum kit under the name of ‘Broadkaster’, Fender decided to change the name to ‘Telecaster’. This model is still sold today. It can be seen as the forerunner of the Stratocaster, but guitarists also liked the original, so it remains a popular guitar in its own right. Fender also designed the Fender Precision bass before the Stratocaster, where he introduced the contoured body shape.

The Esquire/Broadcaster/Telecaster was one of the first solidbody electric guitars on the market. What made Fender somewhat unique is that Leo Fender was not a guitarist, or even a musician at all. He started out with an electronics repair shop, and then focused on designing and building amplifiers and pickups for electric guitars and basses. So Leo Fender was an engineer, not a guitarist, not a musician, and certainly not a luthier.

When he decided to build guitars, he also approached it like an engineer: he wanted the guitars to be cheap and easy to mass-produce, durable, easy to fine-tune, and easy to repair. This led him to make certain design choices that conventional guitar builders might never have made. One example is that he chose to screw the neck to the body. Another example is in his choice of woods, which were very hard woods. They were very durable, relatively light, and easy to work on. But they delivered a tone that was not necessarily very conventional. It was a very bright and jangly tone. But certain artists, especially country artists, loved this new sound, because it would cut through very well with fast lead playing.

The Telecaster introduced the wood types, the bolt-on neck, and had three adjustable saddles for setting intonation and string height for each pair of strings. It had a cutaway on the body to access the high frets on the neck. And it had two pickups, with a volume knob, tone knob, and a pickup selector switch for easy access below the bridge.

The Precision bass introduced the ergonomic ‘contoured’ body shape. It added a second cutaway above the neck, for even better access to the high frets.

The Stratocaster took these ideas further. It perfected the adjustable bridge by having individual saddles for all 6 strings. It added a third pickup and a second tone knob. It introduced a contoured guitar body, inspired by the Precision bass. And it introduced the new ‘tremolo’ system.

Now, ‘tremolo’ needs some explanation: what Leo Fender called a ‘tremolo’ was actually a ‘vibrato’ system (and funnily enough, some of his amps had a tremolo effect, which he called ‘vibro’). Namely, tremolo is a fluctuation in volume. The ‘tremolo’ system on a Strat does not do that. It allows you to lower or raise the pitch, so you can perform vibrato effects, or portamento (pitch slides). But somehow the name stuck, so even today, most guitarists and guitar builders refer to Strat-like vibrato systems as ‘tremolo’.

Another typical feature of the Stratocaster construction is that the body is a ‘frontloader’. That is, the cavities for all the electronics are routed out from the front of the body. These cavities are then covered up by a large plastic scratchplate which covers most of the body.

Customization through mass production

I think it’s safe to say that the simple mass-production design of the Fender guitars is what gave rise to aftermarket parts. It is very easy to replace a bridge, a neck, the electronics, or even to perform some custom routing. The scratchplate will cover it up anyway, so nobody will notice if the routing is a bit sloppy.

That is how Eddie van Halen could build his own Frankenstrat, and how many other players came to do very similar things to their guitars. Floyd Rose systems in particular were installed by many guitarists on their Strats and similar guitars in the 1980s.

But, the Floyd Rose is still a part designed to be fitted to a standard Strat-style design with little modification. It still makes use of the same idea of a sustain block under the bridge, which is held under tension by three springs and an adjustable ‘claw’:

This system is rather difficult to adjust, and the large springs also have an additional problem: They are susceptible to vibrations and give a sort of ‘reverberation’ effect. When you hit the strings hard, the springs will start to vibrate along with the bridge and body. When you then mute the strings, you hear the springs ‘echo’. This in turn can be picked up again by the strings and pickups, so the ‘echo’ can also be heard in the sound coming from the amp.

Another thing is that you still have the regular tuners on the guitar, which you need to put the strings on the guitar and perform the initial tuning, before clamping down the strings with the locking nut. After that point, they are mostly ‘vestigial’. The bridge only has fine-tuners. Over time, the strings may go out of tune to the point where they are beyond the limited reach of the fine-tuners, and you need to unlock the nut, use the regular tuners, and then lock the nut again and fine-tune.

A third issue with a Floyd Rose is that you can no longer adjust the height of each individual saddle. You can only lower or raise the bridge as a whole, but the relative saddle heights are basically fixed.

So, while the Floyd Rose is an improvement over the traditional Strat tremolo in terms of tuning stability, it is basically just that: an improvement on a design dating back to the 1950s, which has to work inside the limitations of the original Strat design to an extent, because it is meant to be an aftermarket part, which can be installed on existing guitars.

Enter Ned Steinberger

So, what you should take away from Leo Fender and his Stratocaster is that he basically ‘started fresh’, with no preconceived ideas of what materials to use, or what kind of design and construction method. The result was a guitar that had a very unique look, sound and identity, and also pushed playability and versatility to new heights.

From then on, most guitars have been more or less evolutionary in nature: taking existing guitar designs such as the Stratocaster as the basis, and changing/improving certain details. The Floyd Rose tremolo system is arguably the largest improvement in that sense.

But then came Ned Steinberger. He basically did the same as what Leo Fender did 30 years earlier: He designed basses and guitars without any preconceived ideas, just trying to find the best materials available, and trying to engineer new solutions to create the best instruments possible.

Ned Steinberger was also not a luthier by trade; he originally designed furniture. And as far as I know, like Leo Fender, Ned did not actually play guitar or bass himself. An interesting fact is that his father is Jack Steinberger, a Nobel Prize-winning physicist. So he grew up in a household of science.

Steinberger started out designing basses. He wanted to create a bass that was as light and compact as possible, while also being very durable. This led him to use modern composite materials such as carbon fiber and graphite, rather than wood. Another defining feature of the Steinberger instruments is the headless design. By using strings with ball-ends at both sides, rather than only at the bridge, there is no longer any need for conventional tuning pegs, or the big headstock that they are mounted on. The tuners can be moved to the bridge instead. Unlike those on a Floyd Rose, these are not fine-tuners, but full-range tuners, made possible by a very fine-threaded 40:1 gear ratio. Also, Steinberger did not sacrifice per-string adjustment of string height or intonation.


Another defining feature is the active electronics. Where conventional pickups for guitar and bass are passive coils and magnets, active pickups use an on-board amplifier, powered by a battery (usually a 9v block). Because of the amplifier, the signal from the pickup itself does not have to be that strong. This means that pickups can be designed with smaller magnets and different coils. The result is that there is less magnetic pull on the strings, allowing the strings to vibrate more freely and more naturally, resulting in a more natural tone with better sustain. The on-board active electronics also allow for additional filtering and tone shaping. The electronics for Steinberger were designed and manufactured by EMG.

The body is actually a hollow ‘bathtub’ created from carbon fiber, where the top is like a ‘lid’. This allows plenty of space to house all the electronics. And unlike wood, there is no problem with feedback.

Another detail is the use of a zero-fret, rather than a conventional nut. This means there is no special setup required for any specific string gauge or desired string height (‘action’) for the player. The ‘special case’ of the nut is eliminated.

You could say that everything about this bass guitar has a ‘less is more’ approach. The neck and body are reduced to little more than the absolute minimum for the player to hold and play the instrument.

And then, the Steinberger guitar

Once Steinberger introduced their bass, and it became quite popular with players, the next logical step was to create a sister-instrument in guitar form. It looks quite similar to the bass, and it is very similar in terms of design and construction, except for one important detail:


The bass did not have a tremolo system, because these are very uncommon on basses, and are not very practical anyway, given the big, heavy and long strings. However, a guitar had to have one, Ned Steinberger must have thought. He may have seen this as an interesting engineering challenge.

And boy did Steinberger live up to that challenge! Because of his basic design with the double-ball strings and the zero-fret, there was no need for any additional locking of the strings. This basic design was effectively already equivalent to a Floyd Rose, but with the added benefit of having full-range tuners, instead of just fine-tuners. And the basic design of the bridge could easily be made to rotate around a pivot point, like a Strat-style tremolo does, without sacrificing the adjustability of the saddles.

Also, Steinberger could simply add one big spring to the metal base of the tremolo, instead of the multiple springs and the claw found underneath a Strat-style guitar. This got rid of the ‘echo’ problem of the springs. Also, he added a simple adjuster knob at the back, so you could adjust the spring to make sure the tremolo was in tune in its center position. Much easier than the claw and screws on a Strat.

But Steinberger did not just stop there. A characteristic of any traditional tremolo/vibrato system is that all the strings receive the same amount of movement. This however does not affect all strings equally. The thicker, lower pitched strings will have a much bigger drop or rise in pitch than the thinner, higher pitched strings.

While this is good enough for small changes in pitch on chords (a bit of vibrato), or changes of pitch on single notes, anything more advanced will go out of tune. So it has only very limited musical application.
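
To see why, here is a back-of-the-envelope sketch (my own illustration, not Steinberger’s actual math). The pitch of a string follows the standard vibrating-string formula, and differentiating with respect to tension gives the relative pitch change:

    f = \frac{1}{2L}\sqrt{\frac{T}{\mu}} \qquad\Rightarrow\qquad \frac{\Delta f}{f} = \frac{\Delta T}{2T}

The bridge gives every string roughly the same extra stretch, but the resulting tension change \Delta T depends on each string’s elastic stiffness, which varies with gauge and construction. So \Delta T / 2T is not the same for all six strings, and the same amount of bridge travel detunes each string by a different musical interval.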

So, Steinberger thought: challenge accepted! And he came up with what he called the TransTrem. As far as I know, it is still the only system of its kind. The ‘Trans’ in the name is short for ‘transpose’. Steinberger managed to make each individual saddle on the bridge move at its own calibrated rate, which makes each string change pitch at the same relative rate. This means that all 6 strings remain in tune while operating the tremolo. From there, it was a relatively small step to make the tremolo ‘lock’ in a few ‘preset’ positions. This means you can transpose the entire tuning of the guitar up or down by a few steps (E is standard, you can go down to D, C and B, and up to F# and G), and remain in tune while doing so! So you can change tuning on the fly while playing a song.

Eddie van Halen was an early adopter of the Steinberger guitar, and he composed a few songs that made use of this special TransTrem feature. The song ‘Summer Nights’ is a good example:

And as you can see, the guitar stays in tune during all of this, and sounds good in all tunings and during all trickery that Eddie van Halen pulls off.

Here is also a nice documentary on Steinberger, where Ned talks about how he came to the headless design, and his thoughts about ergonomics. It also gives some nice insights into the factory itself:

Fast-forward: Present day

I don’t suppose Steinberger ever became such a household name as Fender and its Stratocaster. So what happened? Steinberger’s guitars and basses were certainly ‘space age’ technology at the time. Back in the 1980s, people loved futuristic stuff, and initially Steinberger couldn’t make enough of them. By 1987, Ned sold the company to Gibson, one of the largest, and ironically enough, one of the more traditional guitar companies.

But by the 1990s, the futuristic thing went out of style, and I suppose Steinberger went along with it. Perhaps the instruments were too far ahead of their time? Ned Steinberger had moved on to a new instrument company, called NS Design. Somewhere in the mid-1990s, Gibson stopped producing the original Steinberger guitars.

From then on, the Steinberger name would resurface every now and then, usually on cheaper Asian-made guitars. The guitar I bought is a ‘Spirit by Steinberger’, and is made in China. This Spirit-model has been in and out of production a few times. When they were launched initially, I saw one at a shop, and tried it out. I always liked the original Steinbergers, so I liked the Spirit, because it looked and felt like those original guitars. It just was a lot cheaper. But I never bought it.

Recently, Gibson has relaunched the Spirit guitars again, so I figured I’d not miss out this time, and I bought one right away. Where the original guitars were very much high-end guitars, these ones are clearly budget guitars, and it seems that their price and their compact form are the main selling points. It is marketed as a travel guitar, more or less. There also seems to be some renewed interest in headless guitars in general, where some high-end brands such as Strandberg and Kiesel make headless guitars, albeit with ‘traditional’ body shapes. Perhaps that has something to do with the Spirit re-entering production.

My Spirit GT-Pro Deluxe differs from the original in various ways:

  1. It does not have the TransTrem, but a cheaper R-Trem, which does not have the transposing capability, but is otherwise more or less equivalent to a Floyd Rose style tremolo.
  2. It is made of wood, rather than composite materials.
  3. It has passive electronics rather than active.

Not having the TransTrem is a bit of a shame, but not surprising, given that it was a very complex piece of hardware. The R-Trem works quite well in practice, and is much easier to tune and set up than a Floyd Rose is.

The fact that it is made of wood is also interesting. Namely, there is always a considerable discussion about choices of tone wood and construction in guitars. This guitar has considerably less wood than any conventional guitar does. Yet, it sounds remarkably conventional. Also, it does not seem to suffer all that much in terms of sustain. So perhaps this guitar is just making a mockery out of conventional wisdom in terms of guitar building?

The passive electronics are made by Steinberger themselves, which is a bit of a shame in the sense that I have no idea what they are, so I have no frame of reference whatsoever. Combined with the unconventional guitar body, choice of woods and construction, I basically cannot say anything about what makes the guitar sound the way it does.

All I can say is that I quite like how the overall package sounds. The pickups are not very loud, but I’m not sure how much of that is due to the fact that there’s such a small guitar body, and how much is due to the pickup design. The guitar seems to have a very ‘clean’ and ‘neutral’ sound to it. It is something I more or less associate with active EMG pickups, but again, I don’t know how much of this comes from the tiny body, and how much comes from deliberately designing the passive pickups to sound like the original Steinberger with EMGs. What I can say is that the guitar seems to have a very nice top-end. Pinch harmonics sound really powerful and sustain well on the high B and E strings. The body and neck are mostly made of maple, and that bright top-end is something I would associate with maple.

The quality of the pickups seems very good at any rate. They are extremely low-noise, and you do not suffer from microphonic feedback or other noises (and as I said, no dreaded ‘echo’ from the trem either).

It has a somewhat ‘dry’ palm-muted sound. It sounds okay on the low E string, but lacks a bit of ‘oomph’ on the A and D strings compared to my usual guitars. Again, I’m not entirely sure how much of that is down to the overall lack of output of the pickups, and how much is specifically down to the acoustic qualities of the guitar itself (perhaps lacking a bit of midrange).

So it could be that these are very generic pickups, and the guitar just sounds the way it does because of its wood and construction. It could also be that these pickups are very specifically tuned for such an unusual guitar as this one, and regular aftermarket pickups might not work at all in this particular guitar.

But overall, after I set up the guitar to my liking, I find it quite easy to play, and although it takes some getting used to its specific sonic characteristics, it can sound very nice. It has a very unique feel to it. It is extremely light and well-balanced, and you have excellent reach to even the highest frets. It’s so light that the entire guitar shakes when you perform vibrato with your fingers. Coming from the other end of the spectrum, where my first decent electric guitar was a Les Paul, that is very strange indeed. I suppose it is also an indication that I should try to optimize my playing to use the minimum amount of effort required for vibrato, and let the guitar do the work.

I hope to own a ‘real’ Steinberger one day, the original graphite/carbon fiber model with the TransTrem and the active electronics. Sadly they are rare, and since they are so complex, chances are someone has botched them over the years, so buying second-hand is quite a risk (I often find that even with ‘simple’ Floyd Rose guitars… people with no clue have damaged the tremolo system beyond repair). But I will have one, one day! For now I’d settle on just being able to play a real Steinberger someday, just to know what a real one really feels and sounds like.

For now, I’ll leave you with a quick recording I did of a Joe Satriani song:

 


Zen2: Credit where credit’s due

So, AMD has released its new Zen2 architecture, codenamed ‘Matisse’, and now available commercially as the Ryzen 3000-series.

And I think I can keep this blog short: Intel has its work cut out, because AMD is back. AMD now has TSMC build their x86 CPUs, using their new 7 nm process. But Zen2 does not stop there. As in: it’s not just a so-so architecture that leverages a superior manufacturing process to push clockspeeds or transistor count into an advantage.

No, Zen2 is actually a very good architecture in its own right. It can compete head-to-head with Intel’s best offerings on IPC, and performance-per-watt is very good as well. Combine that with AMD’s standard weapon of value-for-money, and it feels like we’re back in the late 90s/early 2000s, when AMD’s Athlons would compete head-to-head with Intel’s Pentium III and 4.

Perhaps most interesting is that they chose not to use a single die, but to spread the functionality over multiple dies, where they can combine different manufacturing processes (something Intel has also done in the past with CPUs and GPUs/IMCs, such as the 32 nm/45 nm Westmere).

It seems that Intel’s biggest problem at the moment is that they are still on 14 nm for their mainstream offerings. Now that is a first, as far as I know. Intel has always had the edge in manufacturing, even when their architecture itself was not superior. This allowed Intel to compete back in the Athlon days. But this time, Intel is on the back foot in manufacturing. And for the first time since the introduction of the Core2, Intel no longer has the more efficient architecture either.

So, I suppose we will now wait and see what Intel can do to strike back. What will their 10 nm process bring? And can they introduce a new architecture that once again takes the IPC/performance crown?

But for now, AMD seems to be in a very good place. I think that is good news. I have always said, ever since AMD took over ATi, that problems in the CPU department would hurt the GPU department, for the simple reason that the CPU market is larger and would have priority. And I think this is what we have been seeing in recent years. With AMD struggling since Bulldozer, their Radeon line was slowly losing ground to nVidia. So hopefully we will now see the reverse: with Zen2, AMD should be able to win back a lot of CPU market share, and boost their revenue considerably. And with that, they should be able to invest more in R&D for their GPU line as well, and close in on nVidia once again.


Just keeping it real at Revision 2019

I visited Revision again this year, and I took my IBM PC/XT 5160 with me. And I made a quick release, executing a plan I’d had for a while, which I mentioned in my previous blogpost: a music disk for the IBM Music Feature Card.

And here is the pouet.net page. You can download the binaries, but I guess not many people will have the required hardware. And it will also break many emulators, because IBM Music Feature Card support is not a common feature in emulators (yet). I made sure that the code runs on my DOSBox pre-release with IMFC support, and on AMAME, as described in my previous blogpost.

Credits

Code was done by me, graphics by VileR of 8088 MPH fame, and the music was done by UP-C.

If you look closely, the graphics aren’t just rips of the original Sega title screens; they are impressions that combine the title/logo with some in-game view. All this in colourful composite CGA, of course.

Likewise, the music is not a straight rip, but is tuned specifically for the IBM Music Feature Card/Yamaha FB-01 with custom SysEx messages embedded in the MIDI data.

System requirements

The minimum requirements are: IBM PC or compatible, 8088 CPU at 4.77 MHz, 128kb memory, composite CGA, IBM Music Feature Card, 360kb 5.25″ floppy drive, PC-DOS 2.0.

Technical info

The code is reasonably straightforward. The MIDI player is based on my pre-processed data format, combined with my streaming disk routine. Why the streaming disk routine, you might ask? Well, I originally developed that for the PCjr MIDI player for the DreamBlaster S2P. The problem with MIDI data is that it has no structure. It is just a linear sequence of note data and delays. So when a part of a song is repeated multiple times, the data is duplicated. The PCjr only has 128k of memory, which is shared with video memory, and your average MIDI file will be too large to fit in memory entirely.
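
For a sense of what ‘pre-processed’ means here: instead of parsing MIDI in the interrupt handler, the file can be flattened offline into records of ‘wait N ticks, then output these bytes’. A hypothetical record layout (my illustration, not the actual format used):

    #include <stdint.h>

    /* Hypothetical pre-processed stream record: the player waits
       'delay' timer ticks, then writes 'length' raw bytes straight to
       the output device. All parsing cost is paid offline by the
       preprocessor, so the interrupt handler stays tiny. */
    typedef struct {
        uint16_t delay;   /* timer ticks until this event          */
        uint8_t  length;  /* number of payload bytes that follow   */
        /* followed by 'length' raw bytes for the device           */
    } EventHeader;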

Since that code was already done, and it worked well, even from floppy, I figured I could just replace the code for the DreamBlaster S2P with IMFC code and use the rest as-is. That way, this music disk will run even on PCs with low memory.

In the future I might experiment with data compression. I have tried Trixter’s LZ4 code on some MIDI data, and it compresses really well. For example, the Hang-On song is about 112k, but compresses down to less than 7k. That might be very interesting, especially for PCjr, since it lacks the DMA controller to load from floppy in the background.

There is not too much to tell, other than that. Perhaps that the interrupt controller uses Auto-EOI mode, as discussed earlier. And there are two cute tricks for the graphics. The first is something that is possible in CGA composite mode 6: fade in/out. I had already used it in the sprite part of 8088 MPH. Namely, by adjusting the background colour, you can affect the luminance, so I get 4 levels of luminance to fade the screen. The second trick is made possible because of this: I start with the screen faded to black. Then I load the graphics directly from disk into CGA memory. And then I fade in. This way I do not need any graphics data in system memory at all, which again is very nice for low-memory configurations.
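For the curious, the fade boils down to something like this (a minimal sketch for a DOS C/C++ compiler with inp()/outp(); the four palette values are an example luminance ramp, not necessarily the exact values I used):

    #include <conio.h>   // inp()/outp() on DOS compilers

    #define CGA_COLOR_SELECT 0x3D9
    #define CGA_STATUS       0x3DA

    /* Wait for the start of vertical retrace (bit 3 of the status register),
       so each fade step lines up with a frame. */
    static void wait_vblank(void)
    {
        while (inp(CGA_STATUS) & 8) {}     /* wait until any current retrace ends */
        while (!(inp(CGA_STATUS) & 8)) {}  /* then wait for the next one to start */
    }

    /* Four colours of increasing composite luminance: black, dark grey,
       light grey, white (example ramp). */
    static const unsigned char fade_level[4] = { 0x00, 0x08, 0x07, 0x0F };

    void fade_in(void)
    {
        int i, frame;
        for (i = 0; i < 4; i++) {
            outp(CGA_COLOR_SELECT, fade_level[i]);  /* adjust screen luminance */
            for (frame = 0; frame < 8; frame++)
                wait_vblank();                      /* hold the step briefly */
        }
    }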

That’s about it for now, hope you liked it!

The IBM Music Feature Card and Yamaha FB-01

This has got to be one of the most obscure pieces of hardware I have covered so far. A sound card for the IBM PC, released in 1987. And by none other than Big Blue themselves! I am talking about the IBM Music Feature Card:

[Photo: the IBM Music Feature Card]

I suppose this card needs some explanation. As you can already see from the picture, it is a large, complicated card, with a lot more components than an AdLib or a CMS/GameBlaster.

That is because the card features a full-blown MIDI sound module, with its own Z80 microprocessor. The MIDI sound module is based on the Yamaha FB-01:

[Photo: the Yamaha FB-01]

The rest of the card contains a proprietary MIDI interface. In fact, the entire card is designed and built by Yamaha, for IBM. In concept, it is very similar to the later Roland LAPC-I, which is also a combination of a MIDI interface and a MIDI sound module on a single ISA card, except in that case, it is an MPU-401 MIDI interface and an MT-32 module.

For more background information, you can read the excellent coverage of the IMFC on Nerdly Pleasures.

So, it is an FM sound card then. Is it any good? Technically, yes. Where the AdLib uses the low-end YM3812 chip, with 9 voices and 2 operators in mono, this uses the more high-end YM2164, which has 8 voices and 4 operators, in full stereo. The entire card is well-made, and was quite expensive at the time, aiming more at semi-professional musicians than gamers. Here is an example of Sega’s Afterburner theme converted to FB-01/IMFC:

However, like the AdLib, the card suffers from the fact that most musicians couldn’t really be bothered to make the most of the FM synth, as discussed in an earlier blog.

Anyway, I was fascinated by this card. Firstly because it is so rare, and not supported by a lot of software. Secondly, because it is an official IBM expansion card. And lastly because apparently nobody had bothered to add support for it to DOSBox yet. So I decided to give it a try.

Is it really a Yamaha FB-01?

The approach to emulating this card appeared to be straightforward: there is the MIDI interface, and then there is the MIDI module, which is allegedly a Yamaha FB-01. Well, as usual with PCs, it depends on how you look at it.

The MIDI interface is reasonably simple, and it didn’t take me too long to emulate enough of it to get games to detect the card, initialize it, and send MIDI data, which I could intercept. I simply made use of the MIDI code that was already present in DOSBox for the MPU-401, and sent the MIDI data out via the same interface.
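In DOSBox terms, the forwarding boils down to something like this (a simplified sketch from memory of the DOSBox internals; the real card also has status/command ports and a handshake protocol around the data port, which I am leaving out here):

    #include "dosbox.h"
    #include "inout.h"
    #include "midi.h"

    static const Bitu IMFC_BASE = 0x2A20;  // the IMFC's default base I/O address

    // Write handler for the card's MIDI data port: every byte the game writes
    // is pushed straight into DOSBox's existing MIDI output path.
    static void imfc_write_data(Bitu port, Bitu val, Bitu /*iolen*/) {
        MIDI_RawOutByte((Bit8u)val);
    }

    void IMFC_Init() {
        IO_RegisterWriteHandler(IMFC_BASE, imfc_write_data, IO_MB);
    }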

This worked, and the only way I could test it was with a real IBM Music Feature Card, with its MIDI in connected to the MIDI out of my DOSBox machine. But when I sent out an early test-build of this DOSBox for people to test with a Yamaha FB-01, they reported problems. Some instruments were wrong or missing in Leisure Suit Larry 3. What was going on?

Different firmware, that’s what was going on. While the hardware is mostly the same (the IMFC is obviously missing the LCD display and the front panel buttons), the IMFC apparently has an updated version of the main firmware. We studied the manuals of the IMFC and FB-01 closely, and mapped out all SysEx commands. We found that the IMFC supports one command that the FB-01 does not, and Sierra was using this command.

The command in question is the “Parameter List” command. Yamaha implemented an interesting type of SysEx command, which will likely confuse a lot of modern MIDI software. They basically have variable-length SysEx commands, which effectively switch the device into a certain state, after which you can send it a variable (in theory endless) number of custom commands within that SysEx command.

The FB-01 supports one type of this list command, which is the “Event List”. It basically switches to a Yamaha-specific proprietary extended MIDI mode, where you can send key on/off, control change, program change, after touch, pitch bender and parameter change events, which allow some fine-grained control that normal MIDI does not.

The “Parameter List” however was apparently added to the firmware after the FB-01 was released. Similar to the “Event List”, it allows a variable number of commands. These commands modify the parameters of instrument settings. The FB-01 does of course support changing instrument parameters via SysEx messages. It just does not support the list format, only individual SysEx messages.

So the fix was relatively simple: a state machine has to be implemented in the emulation code. Whenever it detects the start of a Parameter List command, it goes into the parameter list-state. Then every command in the list is translated to an individual SysEx command on-the-fly (that last part is extremely important: the list could be virtually endless, and you do not want to buffer the whole list before you start converting it). When an end-of-SysEx is detected, it switches back out of the parameter list-state, and Bob’s yer uncle.
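Schematically, the translation looks something like this (a sketch: the list header beyond Yamaha’s F0 43 75 prefix and the exact three-byte layout of a list entry are placeholders here; the real values are in the IMFC manual):

    #include <cstdint>
    #include <cstddef>
    #include <vector>

    // Stub: forward one byte to the MIDI output device.
    static void midi_send(uint8_t b) { (void)b; }

    // Placeholder header for the Parameter List command; only Yamaha's
    // F0 43 75 prefix is certain here, the rest comes from the IMFC manual.
    static const uint8_t LIST_HDR[] = { 0xF0, 0x43, 0x75, 0x00, 0x3F };

    // Wrap one (instrument, parameter, value) list entry into an individual
    // FB-01 parameter change SysEx and send it out immediately.
    static void send_param_change(uint8_t ins, uint8_t param, uint8_t value) {
        const uint8_t msg[] = { 0xF0, 0x43, 0x75, 0x00,
                                (uint8_t)(0x18 + ins),   // instrument number
                                param, value, 0xF7 };
        for (uint8_t b : msg) midi_send(b);
    }

    enum State { PASS_THROUGH, MATCHING_HEADER, PARAM_LIST };
    static State state = PASS_THROUGH;
    static size_t hdr_pos = 0;
    static std::vector<uint8_t> entry;   // the current (partial) list entry

    // Feed every byte that goes out to the FB-01 through this filter.
    void imfc_filter_byte(uint8_t b) {
        switch (state) {
        case PASS_THROUGH:
            if (b == LIST_HDR[0]) { state = MATCHING_HEADER; hdr_pos = 1; }
            else midi_send(b);
            break;
        case MATCHING_HEADER:
            if (b == LIST_HDR[hdr_pos]) {
                if (++hdr_pos == sizeof(LIST_HDR)) state = PARAM_LIST;
            } else {
                // Not a Parameter List after all: flush the held-back bytes
                // and reprocess this byte normally.
                for (size_t i = 0; i < hdr_pos; i++) midi_send(LIST_HDR[i]);
                state = PASS_THROUGH;
                imfc_filter_byte(b);
            }
            break;
        case PARAM_LIST:
            if (b == 0xF7) {             // end-of-SysEx: leave list mode
                state = PASS_THROUGH;
                entry.clear();
            } else {
                entry.push_back(b);
                if (entry.size() == 3) {  // one complete entry: translate now,
                    send_param_change(entry[0], entry[1], entry[2]);
                    entry.clear();        // never buffer the whole list
                }
            }
            break;
        }
    }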

Funnily enough, we found out that this difference between the IMFC and FB-01 was already known in the old days. Firstly, apparently someone once wrote a TSR for King’s Quest 4 and related Sierra games that modified the IMFC driver to be used on an MPU-401 and FB-01 instead. When I inspected the code, I found that this driver also did the same SysEx translation for compatibility reasons.

Secondly, Sierra also provided their own FB-01 driver for some games. Ironically enough, this driver also fails with some games. Why does it fail? Because apparently they translated the IMFC parameter list incorrectly. Namely, the parameter list works on the basis of instrument number. Sierra however translated it to parameter change commands based on MIDI channel (the FB-01 supports both variations). The problem here is that there is not necessarily a 1:1 mapping between MIDI channel and instrument, so some parameters end up being set incorrectly with the FB-01 driver.

What’s a YM2164?

Right, with the basic MIDI problems out of the way, the next big step was to actually emulate the synthesizer part, so that people would not need a real Yamaha FB-01. I wanted to go with an approach similar to Munt, an emulator for the Roland MT-32: a Windows driver that installs itself as a MIDI input device and outputs the emulated sound to your audio device in realtime. This means that you can use it as a standalone synthesizer as well, using Windows MIDI tools.

The Yamaha FB-01 is a reasonably straightforward device: it has a Z80 microprocessor, some firmware in a ROM, and a YM2164 chip. The firmware for the Z80 contains the MIDI interpreter, which takes care of all the MIDI events, manages the voice banks and configuration, and drives the actual YM2164 chip.

So I figured I could re-implement the MIDI interpreter part myself, based on the SysEx documentation. The biggest problem is the YM2164 itself. This is a very obscure chip, mainly because it was never sold to third parties: it was only used in Yamaha’s own products, and a few Korg synthesizers. As a result, you cannot find any datasheets or manuals online, because they were never made public. I tried contacting Yamaha, but they could not provide me with the documentation for the chip either, so it seems to be lost forever.

However, we still have the internets! And as luck would have it, Yamaha actually built modules with the YM2164 chip for the MSX computer range, the SFG-05. BUT! The YM2164 is only used in the later revision of the SFG-05. Earlier revisions used the YM2151. And there is our clue!

Namely, people have studied these different modules, and they are basically identical in terms of sound and operation. Upon studying the firmware, they only found a few superficial differences, in terms of undocumented registers and timer resolution.

And unlike the YM2164, the YM2151 is a common chip, with available documentation, which was used in various arcade machines, and has proper emulation support in MAME. So we can just use the YM2151 emulation from MAME and hook it up to our MIDI interpreter, and it should be a good virtual FB-01!
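To give an idea of what that hookup means: the interpreter’s job is to turn parsed MIDI events into YM2151 register writes, roughly like this (the register numbers are the documented YM2151 ones, but the write function is a stand-in for the MAME core, and the note/octave mapping is simplified):

    #include <cstdint>

    // Stub standing in for MAME's YM2151 core interface.
    static void ym2151_write(uint8_t reg, uint8_t val) { (void)reg; (void)val; }

    // The YM2151 key code skips every 4th value: C#,D,D#,(skip),E,F,F#,(skip)...
    // This table maps the 12 MIDI semitones C..B onto those codes (octave
    // handling is simplified here).
    static const uint8_t kc_note[12] = { 14, 0, 1, 2, 4, 5, 6, 8, 9, 10, 12, 13 };

    void fm_note_on(uint8_t channel, uint8_t midi_note) {
        uint8_t kc = (uint8_t)(((midi_note / 12) << 4) | kc_note[midi_note % 12]);
        ym2151_write(0x28 + channel, kc);    // KC register: pitch for this channel
        ym2151_write(0x08, 0x78 | channel);  // key on, all four operators
    }

    void fm_note_off(uint8_t channel) {
        ym2151_write(0x08, channel);         // key off: operator bits cleared
    }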

This was the initial plan. However, due to other priorities, I never finished my own implementation. Recently someone pointed out that MAME actually includes an entire Yamaha FB-01 machine (which indeed uses the YM2151 emulation code to approximate the YM2164). And there is also the AMAME fork, which focuses on using MAME synthesizers, and wants to make them into VSTis.

So I tried experimenting with MAME and AMAME a bit, to see if the FB-01 actually worked. And indeed, it did! This was somewhat of a breakthrough for my IMFC emulation project. Namely, I can now offer full software emulation. There is no physical FB-01 required anymore. This makes the project a lot more interesting for the average DOSBox user.

Here is a quick video I made to demonstrate the whole thing:

Early release

So I have decided to put together a quick release of what I have so far. This is my custom DOSBox build with IMFC support, and a compiled version of AMAME to offer you an emulated FB-01 to complete the chain. You can download it here.

In order to set it up, you need to do 3 things:

  1. AMAME needs two ROMs for the FB-01 emulator, which I did not include because of copyright issues. You will have to find them yourself. One is the FB-01 firmware, which should be in a file called fb01.zip. The other is the firmware for the LCD controller, which should be in a file called hd44780_a00.zip. Place those two files in the ‘roms’ subdirectory under AMAME.
  2. DOSBox needs to be configured to output to the correct MIDI device. In this version, there is no IMFC-specific section in the dosbox.conf file yet. I am re-using the MPU-401 configuration. So set up the MPU-401 as usual: under the [midi] section, you use the midiconfig=n option, where n is the index of the MIDI output device you want to use.
  3. AMAME also needs to be configured to get input from the correct MIDI device. This is done by name. Run “amame64 -listmidi” to get a list of the names of the possible MIDI in and out devices. Then use the -midiin and -midiout parameters to pass these names. I have included an FB01.bat as an example (see the sketch below this list); just change the names to those of your devices, and you should be up and running.
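To make steps 2 and 3 a bit more concrete, a minimal setup could look like this (the midiconfig index and the device names are just examples for your system, and I am assuming the fb01 machine name here):

    dosbox.conf:

        [midi]
        mpu401=intelligent
        midiconfig=2

    FB01.bat:

        amame64 fb01 -midiin "Your MIDI In Device" -midiout "Your MIDI Out Device"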

A loopback will be required between the MIDI out device that DOSBox uses and the MIDI in device that AMAME uses. I have tried LoopBe1 and loopMIDI, but found problems with both. The first seems to filter SysEx commands, which effectively strips the custom voice data from games, making it rather useless. The second seems to crash the FB-01 on large SysEx commands. I ended up using my E-mu 0404 USB interface with a physical loopback cable between its MIDI out and MIDI in ports. This works perfectly.

You could also use a real Yamaha FB-01 instead of AMAME of course.

Make a demo about it

The emulation still needs some work (it does not fully emulate the 8253 timer on the IMFC yet, MIDI in is not working either, and not all commands are fully implemented). But in its current state it should be good enough for all Sierra games at least.

I also still want to complete my own virtual FB-01. Firstly, because it should be more convenient to use than AMAME. Secondly, because my code will be a clean reimplementation of the MIDI interpreter, and will not require any copyrighted ROMs.

But for now, I think it is most interesting that the IBM Music Feature Card is now available to a much wider audience. And the next thing I want to do is probably some simple demo/music disk, targeting an 8088 at 4.77 MHz with CGA and an IBM Music Feature Card.
