On Software Complexity

Several years ago, I worked (briefly) on a large-scale development project. It was what is sometimes called a "death march" project; most of the developers involved believed the project had absolutely no chance of ever being completed successfully. Sure enough, about two months after I was assigned to the project, it was cancelled. Virtually nothing was salvageable; massive amounts of money and effort vanished without a trace. I've collected a few other thoughts about the project in a separate page ("Burning Hundred Dollar Bills").

The project in question was designed to use a "rules server" to create an extremely flexible architecture. It sounded wonderful... until you stopped to consider just how much complexity that "flexibility" carried with it.

I wrote the memo below to my team leader, in the aftermath of a somewhat unsettling meeting. To me, the meeting provided the first glimpse of the factors that would ultimately doom the project.

I don't think any of the code from the project was ever re-used. The memo below, on the other hand, could probably be recycled again and again.


Thoughts on the Rules Server Project

Just a few notes in the wake of Monday's meeting of the Rules Server team.

Looking at this project from the point of view of someone who lacks a technical background, I have to think in broad terms about what seems desirable for the end user.

Arthur C. Clarke once wrote that any sufficiently advanced technology is indistinguishable from magic. I'm in awe of complex systems when they function correctly. Put a plastic card in the pump at the gas station, fill up, drive away. Fast, easy, and flawless.

However, I am a skeptic by nature. That means I'm often wrong. Had I been born twenty years earlier, I would have laughed in the face of anyone who said that men would some day walk on the moon.

Still, often being wrong does not prevent me from having firmly held opinions about what can and cannot be accomplished on a given project. Nor does lack of technical expertise stop me from thinking about the technical aspects of the Rules Server.

Everyone who develops software should be forced to work Tech Support at some point in their life. It would be an unpleasant experience that would drive home a very valuable lesson:

Stuff should work.

That is a simple axiom, but it keeps getting lost among the bells and whistles. We all want to work on cutting-edge technology, employing ground-breaking methodologies to push the frontiers. We want to build Formula One racers. Unfortunately, there is not a great need for Formula One mechanics. Few end users need cars that will go 230 miles per hour. We need cars that will start when it is ten below zero, engines that will not seize when we forget to check the oil for five months, and brakes that don't need new parts every five thousand miles.

Factory-built cars are reliable. At least, they are reliable compared to computers. We have constant crashes. We have data that gets overwritten by shoddy software. We have computers that take four minutes to turn on, which we shut off by pushing a button labeled "Start." We have stuff that sucks.

As an end user, my foremost request to software engineers is: Make it reliable. If that means keeping it simple... and I believe it does, for the reasons outlined below... then keep it simple.

There is a very, very crucial point that no one mentioned at Monday's meeting. It may simply be so obvious that no one felt it worth discussing, but I want to state it explicitly here:

The amount of power and flexibility given to the end user is directly proportional to the potential for problems.

The more the end user can modify, the more he or she can screw up. Every change has the potential to impact the program in unforeseen ways. Moreover, the potential for bugs, glitches, and outright disasters increases exponentially. If you toss a coin once, you have a fifty percent chance of calling it correctly... which is to say, you have a fifty percent chance of being one hundred percent correct. Toss the coin twice, and your chance of being one hundred percent correct drops to twenty-five percent. Toss it four times, and your chance drops to six and a quarter percent. Your chances of perfection get very slim, very quickly.
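
The arithmetic behind that is just repeated multiplication: the chance of getting every one of n independent outcomes right is the per-outcome chance raised to the nth power. A minimal sketch (the 99-percent figure at the end is invented purely for illustration):

    # Chance of getting every one of n independent outcomes right,
    # when each one is right with probability p.
    def chance_of_perfection(p, n):
        return p ** n

    # The coin tosses above: p = 0.5
    for n in (1, 2, 4):
        print(n, "tosses:", chance_of_perfection(0.5, n))   # 0.5, 0.25, 0.0625

    # Even if each user modification were handled correctly 99 percent
    # of the time, two hundred of them compound badly:
    print(chance_of_perfection(0.99, 200))   # roughly 0.13

Per-item odds far better than a coin toss still collapse once enough of them have to hold at the same time; that is all "increases exponentially" means here.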

It may well be that my lack of understanding of rules-based development leads me to the wrong conclusions. In any case, one of the topics of Monday's meeting was the idea of a set of "control rules" that would limit the ways in which the end user would be able to modify the program. The discussion degenerated into semantics: what constitutes a control rule? Can the end user perform meaningful manipulation of the program without the ability to modify control rules? Could the user perform the necessary modifications if permitted to modify only the data in a class, and not the behavior of the objects? I worry that we may be giving the power of object-oriented programming to people with no expertise in logic. Power in the hands of an untrained person is a bad thing. Just ask the sysadmin of my ISP what happened when I tried my hand at shell scripting. My motto in the wake of that incident: "Mom says I can't use UNIX anymore. She says I'll put someone's eye out."
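
To make the distinction we were arguing about a little more concrete, here is a hypothetical sketch of the data-versus-behavior split; nothing like this exists in the project, and every name in it is invented for illustration. The user edits a rule's parameters, while the code that interprets those parameters stays fixed:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        field: str         # user-editable data
        operator: str      # restricted to a fixed vocabulary
        threshold: float   # user-editable data

    # The "control" layer: only these operators are understood, and the
    # user has no way to add new ones or redefine what they mean.
    OPERATORS = {
        ">":  lambda a, b: a > b,
        "<":  lambda a, b: a < b,
        "==": lambda a, b: a == b,
    }

    def evaluate(rule, record):
        if rule.operator not in OPERATORS:
            raise ValueError("unsupported operator: " + rule.operator)
        return OPERATORS[rule.operator](record[rule.field], rule.threshold)

    # A user-supplied rule is pure data, with no behavior of its own.
    credit_check = Rule(field="balance", operator=">", threshold=1000.0)
    print(evaluate(credit_check, {"balance": 1250.0}))   # True

Under that reading, a "control rule" is whatever the user cannot touch: the operator table and the evaluate function. The moment the user can add operators or rewrite evaluate, the distinction collapses.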

The concept of building a reliable system that can be modified on the fly by the end user seems, to me, impossible. It's not impossible in the sense of turning a chicken into a donkey. But it's impossible in the sense of firing a cherry pit out of a rifle and having someone catch it in their teeth a mile away. Strictly speaking, it is possible, but it's phenomenally difficult. It would be far better to design a program with limited capabilities but greater reliability, rather than a system with great capabilities but limited reliability.

I'm not arguing that a complex system is inherently unreliable in use. Greater complexity may lead to a finished product that is more reliable in operation. If I build a belt-driven grinder, I could add a belt-tensioning device that would make my machine more reliable. But the construction of that device is now more prone to errors. I have an additional task which, if implemented improperly, will prevent the machine from working correctly. My job is harder, but the user's job will be easier.

If the rules-based system is going to work, it seems to me that the programmers need to anticipate every possible implementation of the rules. It would be wonderful if the program itself had the ability to evaluate whether or not the rule would be valid on the basis of what is "known" about the use cases. I'm skeptical of its ability to do that. I keep thinking back to something someone mentioned to me a few years ago in the context of a discussion about artificial intelligence. Complex knowledge was easy for the machine. Simple knowledge wasn't, for the sole reason that we tend to overlook the most basic things. Tell a machine to build a tower of blocks, and it may be able to quickly calculate the strongest construction method based on the sizes and shapes of the blocks, and the way the blocks may be interlocked. But it fails at the task of actually building the tower, because it didn't "know" that the tower couldn't be built by starting at the top and working down.
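
At its most basic, "evaluating whether a rule is valid on the basis of what is known" might look something like the following hypothetical sketch; the field names and checks are invented for illustration, and even this catches only the obvious mistakes, not the unforeseen interactions:

    # What the system "knows": which fields exist and what type each holds.
    KNOWN_FIELDS = {"balance": float, "age": int, "state": str}
    KNOWN_OPERATORS = (">", "<", "==")

    def validate_rule(field, operator, threshold):
        errors = []
        if field not in KNOWN_FIELDS:
            errors.append("unknown field: " + repr(field))
        elif not isinstance(threshold, KNOWN_FIELDS[field]):
            errors.append("threshold does not match the type of " + repr(field))
        if operator not in KNOWN_OPERATORS:
            errors.append("unsupported operator: " + repr(operator))
        return errors

    print(validate_rule("balance", ">", 1000.0))    # []  -- accepted
    print(validate_rule("blance", ">=", "high"))    # two errors -- rejected

A check like this knows the blocks exist and what shape they are; it still has no idea that the tower cannot be built from the top down.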

As a user, my most urgent request would be: Start with a simple, sturdy base, and build up. Just give me something that works.