Quite a few years ago I had the problem of creating a product that would help printers create estimates for potential printing jobs. We had one of the early micro-computers at our disposal, and the only programming tool that was available for it was a macro assembler. We had to get the product out in a ridiculously short period of time, and were very limited in the amount of memory we could use.
We got together and realized that we had a fairly simple problem. The ultimate goal was to create printed estimates. Each estimate was calculated based on a combination of data that was unique to the job (the size of the paper, the number of colors, the number of copies, the type of paper, etc.) and data that was common to most jobs but could be changed at will (the amount to charge for different kinds of paper, press setup and per-copy charges, etc.). We also had to save estimates and create new versions with changed parameters.
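To make that split concrete, here is a minimal sketch in C of how job-specific data and shared rate data might combine into an estimate. The struct fields and the formula are assumptions for illustration only; the real product’s pricing rules were richer than this.

```c
#include <stdio.h>

/* Data common to most jobs, changeable at will (invented fields). */
struct rates {
    double paper_cost_per_sheet;  /* would vary by paper type in the real system */
    double press_setup_charge;
    double per_copy_charge;
};

/* Data unique to one job (invented fields). */
struct job {
    int copies;
    int sheets_per_copy;
    int colors;
};

/* Assumed formula: one setup charge per color, plus per-copy and paper charges. */
static double estimate(const struct rates *r, const struct job *j)
{
    double setup = r->press_setup_charge * j->colors;
    double run   = r->per_copy_charge * j->copies;
    double paper = r->paper_cost_per_sheet * j->sheets_per_copy * j->copies;
    return setup + run + paper;
}

int main(void)
{
    struct rates r = { 0.02, 15.00, 0.05 };
    struct job   j = { 500, 4, 2 };
    printf("estimate: $%.2f\n", estimate(&r, &j));
    return 0;
}
```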
Here’s what we did. We broke the whole problem down into seven overlapping problem domains. We created a set of macros for each domain. The parameters of each macro contained attributes relevant to the domain. When assembled, each macro would deposit coded data in memory – no instructions. For example, we had a set of macros for input, another for printed estimates, and another for the core variables relevant to estimating. Macros could refer to other macros, so we could eliminate redundancy and keep the memory requirements as small as possible.
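The original macros are long gone, but the idea carries over to any language with a preprocessor: a macro takes domain-level attributes as parameters and expands to data, not instructions. Here is a rough analogue in C, with invented field names and structure, where each INPUT_FIELD line deposits one table entry rather than any code.

```c
#include <stdio.h>

/* Analogue of the data-depositing macros: each invocation expands to a
 * table entry (pure data), not executable instructions.  Names invented. */

enum field_type { FT_INT, FT_MONEY, FT_CHOICE };

struct field_def {
    const char     *prompt;    /* what to show the user                 */
    enum field_type type;      /* how to parse and display the value    */
    int             refers_to; /* index of a related entry, or -1       */
};

#define INPUT_FIELD(prompt, type, refers_to) { prompt, type, refers_to },

/* The "macro calls": adding an input is one more line of data here. */
static const struct field_def input_fields[] = {
    INPUT_FIELD("Paper size",       FT_CHOICE, -1)
    INPUT_FIELD("Paper type",       FT_CHOICE,  0)  /* refers to "Paper size" */
    INPUT_FIELD("Number of colors", FT_INT,    -1)
    INPUT_FIELD("Number of copies", FT_INT,    -1)
};

int main(void)
{
    int n = (int)(sizeof input_fields / sizeof input_fields[0]);
    for (int i = 0; i < n; i++)
        printf("field %d: %s\n", i, input_fields[i].prompt);
    return 0;
}
```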
Every macro parameter had to correspond to something the program did, so we definitely wrote code, but the code we wrote was highly abstract, and its size was roughly proportional to the extent of the macro parameter definitions. When we added a new macro or a parameter to an existing macro, we would add or modify a couple of lines of assembler code.
The actual functionality of the program was created by a small amount of code that rarely needed to be touched and was pretty easy to write and debug, plus a relatively larger number of macro calls that deposited meta-data in memory. The instructions walked through the meta-data and created the behavior the user saw. We spent time at the beginning understanding the nature of the problem, defining macros, and writing code. As the project went on, we spent less time on code and definitions and more time writing and modifying macro calls. As time went on, the users got more and more functionality, and we were able to change what they didn’t like quickly and without introducing further errors or side effects.
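In modern terms this was a table-driven interpreter. A hedged sketch of that stable core, using the same kind of invented field table as above:

```c
#include <stdio.h>

/* Same invented table shape as the earlier sketch: the tables are data,
 * deposited once; this small interpreter is the part that rarely changes. */
enum field_type { FT_INT, FT_MONEY, FT_CHOICE };

struct field_def {
    const char     *prompt;
    enum field_type type;
};

static const struct field_def input_fields[] = {
    { "Number of copies", FT_INT },
    { "Per-copy charge",  FT_MONEY },
    { "Paper type",       FT_CHOICE },
};

/* Walk the metadata and produce the behavior the user sees.  New behavior
 * comes from new table entries, not from new code in this loop. */
static void walk_fields(const struct field_def *f, int n)
{
    for (int i = 0; i < n; i++) {
        switch (f[i].type) {
        case FT_INT:    printf("%s (whole number): ", f[i].prompt); break;
        case FT_MONEY:  printf("%s ($): ",            f[i].prompt); break;
        case FT_CHOICE: printf("%s (pick one): ",     f[i].prompt); break;
        }
        printf("\n");  /* a real version would read, validate, and store the answer */
    }
}

int main(void)
{
    walk_fields(input_fields, (int)(sizeof input_fields / sizeof input_fields[0]));
    return 0;
}
```

The payoff is the one described above: because behavior lives in data, most changes touch a table entry rather than the walker, which keeps the risk of new errors and side effects low.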
We lost a good deal of time because we picked a computer that was too early in its lifecycle. It had hardware errors, and the macro assembler that was so important to us was buggy. The operating system was primitive, and we even had to write our own file system. In assembler. With a flakey machine and a buggy macro pre-processor.
The main competitor at the time had a system they sold for roughly $25,000. Ours was faster, easier to use, had far more functions, and could be sold at great margins for $10,000. The project was taken on by me and three totally awesome geeks. We delivered the product on the date that was fixed at the start of the project, a date that was based on nothing except when it would be nice to have the product. It proved to be nearly bug-free when delivered to customers; one customer found one bug after a couple of months, and I was able to fix it in an hour. Time from start to finish: ten weeks.