Most everyone in software seems to accept that layers are a good thing. In general, they're not. They take time and effort to build, they are a source of bugs, and they make change more difficult and error-prone.
What are layers in software?
It's possible to get all sophisticated with this, but let's keep it simple. Imagine that your application stores data, presents some of it to users, the users change and/or add data, and the application then stores it again. Everyone thinks, OK, we've got the UI, the application and the storage. That's three layers to create, and for data to pass through and be processed. This is the classic "three-tier architecture," usually implemented with three tiers of physical machines as well.
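To make the shape concrete, here's a minimal sketch of the three tiers collapsed into one file. The names and data are hypothetical, and each function stands in for what would really be a separate component on its own machine, behind an HTTP or SQL interface.

    # Minimal sketch of the classic three tiers (hypothetical names and data).
    # Each function stands in for a whole tier; a real system would put these
    # on separate machines behind HTTP and SQL interfaces.

    def storage_load(record_id):          # storage tier: fetch a row
        return {"id": record_id, "name": "Alice", "amount": 100}

    def app_apply_change(record, delta):  # application tier: business logic
        record["amount"] += delta
        return record

    def ui_render(record):                # UI tier: present data to the user
        return f"{record['name']}: {record['amount']}"

    record = storage_load(42)
    record = app_apply_change(record, 25)
    print(ui_render(record))              # -> Alice: 125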
Everyone knows you use different tools and techniques to build each layer. You'll use something web-oriented involving HTML and JavaScript for the UI, some application language for the business logic, and probably a DBMS for the storage. Each has been adapted to its special requirements, and there are specialists in each layer. Everyone agrees that this kind of layering is good: each specialist can do his/her thing, and changes in each layer can be made independently of the others. We end up with solid, secure storage, a great UI and business logic that isn't dependent on the details of either.
More layers!
If layers are good, more layers must be better, right? It's definitely that way with cakes, after all. We know layer cakes are, in general, wonderful things. In some places, a cake with 12 layers or more is the norm.
It's not unusual for application environments to have six layers or even more. Among the additional layers can be: stored procedures in the database; a rules engine; a workflow engine; a couple of layers in the application itself; an object-relational mapper; web services interfaces; layers for availability and recovery; etc.
It's hard to find anyone who will say this isn't a good thing. Imagine a speaker in front of a group of software developers. He says "Motherhood!" Everyone smiles and nods. He says "Apple pie!" Everyone smiles and licks their lips. He says "Layer cake!" Everyone can picture it, perhaps remembering blowing out the candles on just such a cake, opening wide as a kid and biting into a nice big piece of birthday layer cake. He says "Software should be properly layered!" Everyone gets a look that ranges from professional to sage and nods in agreement at such a statement of the obvious.
Layers are good, aren't they?
Layer Cake, yes; Software Layers, uh-uh
Take another look at the pictures above; you'll notice that cake alternates with icing, whether there are 3 layers or 15. There's a way to make the icing and a way to make the cake, and usually one person makes both, ensuring that a wonderful, integral layer cake is the result.
It's a whole different story in software. Even though the data flowing down from the top (UI) to the bottom (storage) may be the same (date, name, amount, etc.), each layer has its own concerns and pays attention to different aspects of that data. Here's the real rub: when a change is made to the data, far from being isolated to one layer, the change forces each component that touches the data to be modified in different but exactly coordinated ways. The data is even organized differently in each layer -- that's why ORMs exist, for example.
One of the fundamental justifications for thinking layers are good is separation of concerns: you can change each component independently of the others (the same fraudulent justification that lies at the heart of object-orientation, BTW). But this is just wrong (except in trivial cases)! Any time you want to add, remove or non-trivially change a field, all layers are affected. Each specialist has to go to each place the data element is touched and make exactly the right change.
But it gets worse. Because each layer has its own way of representing data, there are converters that change the data received from "above" to this layer's preferred format, and then when the data is passed "down" it is converted again. If you are further saddled with web services or some other way to standardize interfaces, you have yet another conversion, to and from the interface's preferred data representation. Each one of these conversions takes work to build and maintain, takes work to change whenever a data element is changed, and can have bugs.
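Here's a rough sketch of what that conversion tax looks like, with hypothetical formats: the UI speaks JSON with camelCase keys, the application has its own object, and storage wants snake_case column names. Every hop is a converter that has to be touched whenever a field changes.

    # Sketch of the per-layer conversions (hypothetical formats). The UI sends
    # JSON with camelCase keys, the application uses an Order object, and the
    # storage layer wants snake_case column names. Every hop is a converter.

    import json
    from dataclasses import dataclass

    @dataclass
    class Order:                                   # the application's representation
        customer_name: str
        order_total: float

    def ui_to_app(payload: str) -> Order:          # UI -> application
        data = json.loads(payload)
        return Order(customer_name=data["customerName"],
                     order_total=float(data["orderTotal"]))

    def app_to_storage(order: Order) -> dict:      # application -> storage
        return {"customer_name": order.customer_name,
                "order_total": order.order_total}

    def storage_to_app(row: dict) -> Order:        # storage -> application
        return Order(customer_name=row["customer_name"],
                     order_total=row["order_total"])

    def app_to_ui(order: Order) -> str:            # application -> UI
        return json.dumps({"customerName": order.customer_name,
                           "orderTotal": order.order_total})

    # Add or rename one field and all four converters, the class, and the
    # schema all change -- each in its own way, all exactly coordinated.
    order = ui_to_app('{"customerName": "Alice", "orderTotal": "99.50"}')
    print(app_to_ui(storage_to_app(app_to_storage(order))))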
Think it can't get worse? It can and does! Each group in charge of a layer feels the need to maintain the integrity of "their" layer. Those "foreign" layers -- they're so bad -- they do crappy work -- we better protect ourselves against the bad stuff they send us! So we'd better check each piece of data we get and make sure it's OK, and return an error if it's not. Makes sense, right? Except now you have error checking and error codes to give on each piece of data you receive, and when you send data, you have to check for errors and codes from the next layer. Multiplied by each layer. So now when you make a change, just think of all the places that are affected! And where things can go wrong!
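A sketch of that defensive checking, again with hypothetical names: the same value gets re-validated at every boundary, and every caller has to interpret the error codes of the layer below it.

    # Sketch of the defensive checking at each boundary (hypothetical names).
    # The same "amount" is re-validated on every hop, and every caller has to
    # interpret the error codes of the layer below it.

    def storage_save(record: dict) -> int:
        if "amount" not in record or record["amount"] < 0:      # storage distrusts the app
            return -1                                           # error code, not exception
        return 0

    def app_submit(record: dict) -> int:
        if not isinstance(record.get("amount"), (int, float)):  # app distrusts the UI
            return -2
        if storage_save(record) != 0:                           # app distrusts storage's reply
            return -3
        return 0

    def ui_submit(form: dict) -> str:
        try:
            record = {"amount": float(form["amount"])}          # UI distrusts the user
        except (KeyError, ValueError):
            return "Please enter a valid amount."
        return "Saved." if app_submit(record) == 0 else "Something went wrong."

    print(ui_submit({"amount": "125.00"}))   # -> Saved.
    print(ui_submit({"amount": "oops"}))     # -> Please enter a valid amount.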
Here's the bottom line: every layer you add to software is another parcel of work, bugs and maintenance. With no value added! Take a simple case, like moving to zip plus 4. Even in a minimal 3-layer application, 3 specialists have to go and make exactly the right changes in each place the field is received, in-converted, error-checked, represented locally, processed, out-converted and sent, along with the code that handles errors from the sending.
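To make the zip-plus-4 example concrete, here's a sketch (hypothetical names again) of just some of the places a five-digit zip field shows up in one layer stack; every commented line is a separate spot someone has to change, in lockstep with the others.

    # The zip-plus-4 ripple in a minimal three-layer app (hypothetical names).
    # Every commented line is a separate place a specialist must touch to
    # widen a five-digit zip to zip+4 ("12345-6789").

    UI_FIELD = {"name": "zip", "maxlength": 5}                # UI form definition
    UI_PATTERN = r"^\d{5}$"                                   # UI-side validation

    def app_validate_zip(zip_code: str) -> bool:              # app-side validation
        return len(zip_code) == 5 and zip_code.isdigit()

    def app_to_storage_zip(zip_code: str) -> str:             # out-conversion
        return zip_code.zfill(5)

    STORAGE_DDL = "ALTER TABLE customer ALTER COLUMN zip TYPE CHAR(5)"  # schema width

    # The form length, the regex, the length check, the padding and the column
    # width all have to change together, by three different specialists.
    print(app_validate_zip("12345-6789"))                     # -> False, until every spot is updated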
In Software, the Fewer Layers the Better
I'm hardly the first person to notice this. Why is the Ruby/Rails framework so widely considered to be highly productive? Because it exemplifies the DRY principle, specifically because it eliminates the redundancy and conversion between the application and storage layers. What Rails is all about is defining a field, giving it a name, and then using it for both storage and application purposes! Giving one field a column name in a DBMS schema and another name in a class attribute definition adds no value. What a concept! (Although far from a new one. Several earlier generations of software had success for similar reasons, for example, PowerBuilder with its DataWindow.)
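I won't reproduce Rails here; the following is just a sketch of the single-definition idea in plain Python, with a made-up field list driving both the storage schema and the application object, so that adding a field is a one-line change.

    # Not Rails, just a sketch of the single-definition idea in plain Python.
    # One (made-up) field list drives both the storage schema and the
    # application object, so adding a field is a one-line change.

    FIELDS = {"name": "TEXT", "amount": "REAL", "zip": "TEXT"}

    def create_table_sql(table: str) -> str:
        cols = ", ".join(f"{col} {sqltype}" for col, sqltype in FIELDS.items())
        return f"CREATE TABLE {table} ({cols})"

    class Record:
        def __init__(self, **values):
            for col in FIELDS:                # same field list, same names, no mapping
                setattr(self, col, values.get(col))

    print(create_table_sql("customer"))       # -> CREATE TABLE customer (name TEXT, amount REAL, zip TEXT)
    r = Record(name="Alice", amount=99.5, zip="12345")
    print(r.name, r.amount, r.zip)            # -> Alice 99.5 12345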
It's simple: In cakes, more layers is good. In software, more layers is not good.
"In cakes, more layers is good. In software, more layers is not good"...
Except that the skill sets in the layers really are quite different, and the time is long gone when one God-like mind can comprehend the whole of a software system both in its large-scale aspects and in its details.
So the DB person knows a bunch of stuff that the UI/UX person doesn't know and (hopefully) doesn't need to know, and vice versa.
In fact, software developers come in at least two flavors: systems people and user people. I wouldn't want a user person mucking around with my DB schema, and I certainly wouldn't want a systems person designing my app workflow.
And it's probably worth protecting these specialists from one another.
I'm not familiar (as a practitioner) with Ruby/Rails, but I do remember ColdFusion vividly, which was used (with one name for UI fields and DB fields, for example) to produce some monumentally unmaintainable, unscalable systems.
Probably more layers is not good ad infinitum, but too few is courting trouble as well.
Posted by: Pipik | 07/09/2012 at 09:13 AM