Software Design Principles in a Synthesis of Foam


A general intake of a generally wicked but eerily accepted foundation, loosely attached to the corresponding SWEBOK V4 Knowledge Area (KA) still undergoing Public Review[1], along with some spooky, devious deviations.

“Everything has been composed, just not yet written down.”

– W. A. Mozart, letter to Leopold Mozart, 1780

Software design is the disciplined use of science, coupled with some mundane techniques and imagination, to come up with a description of a software system capable of performing the called-for functions cost-effectively, and to let that description be used fruitfully, with predictable results, at whatever stage of the software development life cycle it finds itself in when the going gets tough.

It’s a software engineering process[2] that is necessarily iterative and dependent on estimates, navigating imposed limitations and implicit constraints on the possibly identifiable, technically feasible solutions expected to meet a somewhat arbitrary assortment of selection criteria.

It takes place by analysing requirements and defining the software’s external characteristics – functional, a.k.a. behavioural – along with the corresponding structure, in enough detail to serve as a basis for software construction, a.k.a. coding. It is enacted under the deliverable form of a software design description (SDD), intended to represent and communicate what can be thought of as a blueprint or model of a software system, encompassing its refinement into components, their organisation, the definition of interfaces, the composition of the whole ensemble, and its hatchways to the outside world.

Context and key issues

Implementable design specifications are bound to conform to the requirements, to the fundamental aspects of the architecture, and to the interfaces that guide software construction and testing. Quality concerns must also be addressed, such as performance and reliability, as well as those dealing with the software’s functional behaviour in support of adjacent domains.

One of the main and troubling issues related to it is that software seems to disregard Giambattista Vico’s neat “Verum-Factum” principle[3], which states that we humans, being the cause of what we make, can – and forcibly must – know all there is to know about what we make.

It’s a principle vaguely reminiscent of another one, taken from Necromonger philosophy, that says "You keep what you kill"[4].

Humans make the software, so it should come as no surprise that they ought to know what the software does, and never end up at odds with so many of its outputs – and yet they routinely do. This makes software somewhat Cartesian, possibly in a ghostly rebuff to Vico, carrying a duality between the formality of its descriptions, in terse, precise, arguably funny language, and the need for empirical verification of what the formally expected results will actually turn out to be.

The mind thinks so perfectly, but the body doesn’t always obey, and whether it did can only be definitively known once checked, typically by a third party’s visual inspection. That’s the conundrum, and some canonical design principles should be applied to it to stay as close as possible to Vico’s principle.

Fundamental design principles

Abstraction is the first, a must-have principled technique associated with problem-solving. It’s the process of focusing on the most relevant information about a concept, problem, or observable phenomenon, as it relates to a particular purpose, while provisionally ignoring the rest. It’s a divide-and-conquer[5] of sorts.

It’s a layered principle, with several levels, meant to allow concentrating on one level at a time while still connecting it effectively with the levels above and below.
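
As a minimal sketch of such layering – all names here are invented for the example – consider three levels over persistence: bytes, records, and domain objects. Each level is precise on its own terms and talks only to the level directly beneath it.

```python
import json

# Lowest level: raw byte storage; knows nothing about records or users.
class ByteStore:
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, blob: bytes) -> None:
        self._blobs[key] = blob

    def get(self, key: str) -> bytes:
        return self._blobs[key]

# Middle level: records as dictionaries, serialised to bytes underneath.
class RecordStore:
    def __init__(self, byte_store: ByteStore) -> None:
        self._bytes = byte_store

    def save(self, key: str, record: dict) -> None:
        self._bytes.put(key, json.dumps(record).encode())

    def load(self, key: str) -> dict:
        return json.loads(self._bytes.get(key).decode())

# Top level: the domain vocabulary; knows records, never bytes.
class UserDirectory:
    def __init__(self, records: RecordStore) -> None:
        self._records = records

    def register(self, user_id: str, name: str) -> None:
        self._records.save(user_id, {"name": name})

    def name_of(self, user_id: str) -> str:
        return self._records.load(user_id)["name"]

directory = UserDirectory(RecordStore(ByteStore()))
directory.register("u1", "Ada")
print(directory.name_of("u1"))  # -> Ada
```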

“The purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise.”[6]

Different levels of abstraction tend to be organised in a hierarchy, which can be sequential, with one layer having only one predecessor and one successor layer, or tree-like, or even with a many-to-many structure; no loops are permitted, though.

There may also be multiple, alternate abstractions operating at the same level that don’t form a hierarchy but complement each other, as multiple viewpoints or perspectives on a single aspect of a problem or solution. The solutions to problems modelled by software systems are intrinsically multidimensional, with at least static, or structural, facets and dynamic, functional ones, which can be abstracted separately via, for example, data models and class diagrams on one side, and state charts and sequence diagrams on the other.
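
A tiny sketch of two such complementary viewpoints on one hypothetical concept: the same order is abstracted once as structure (its fields) and once as behaviour (its allowed state transitions), with neither view sitting above the other.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Dynamic, functional viewpoint: what an order does over time.
class OrderState(Enum):
    PLACED = auto()
    PAID = auto()
    SHIPPED = auto()

ALLOWED = {
    OrderState.PLACED: {OrderState.PAID},
    OrderState.PAID: {OrderState.SHIPPED},
    OrderState.SHIPPED: set(),
}

# Static, structural viewpoint: what an order is.
@dataclass
class Order:
    order_id: str
    total: float
    state: OrderState

def advance(order: Order, new_state: OrderState) -> None:
    # The behavioural abstraction enforces the state chart.
    if new_state not in ALLOWED[order.state]:
        raise ValueError(f"illegal transition {order.state} -> {new_state}")
    order.state = new_state

order = Order("o-1", 9.99, OrderState.PLACED)
advance(order, OrderState.PAID)      # fine
# advance(order, OrderState.PLACED)  # would raise: not in the state chart
```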

Then there is separation of concerns (SoC), a principle applied by identifying and delimiting a design concern – which also tends to be an area of particular relevance to one or more stakeholders – and focusing on it separately. It can be taken as one of the primary guidelines on how to approach the abstraction principle.

Edsger Dijkstra, again, puts it rather nicely:

"Let me try to explain to you, what to my taste is characteristic for all intelligent thinking. It is, that one is willing to study in depth an aspect of one's subject matter in isolation for the sake of its own consistency, all the time knowing that one is occupying oneself only with one of the aspects. We know that a program must be correct and we can study it from that viewpoint only; we also know that it should be efficient and we can study its efficiency on another day, so to speak. In another mood we may ask ourselves whether, and if so: why, the program is desirable. But nothing is gained — on the contrary! — by tackling these various aspects simultaneously. It is what I sometimes have called "the separation of concerns", which, even if not perfectly possible, is yet the only available technique for effective ordering of one's thoughts, that I know of. This is what I mean by "focussing one's attention upon some aspect": it does not mean ignoring the other aspects, it is just doing justice to the fact that from this aspect's point of view, the other is irrelevant. It is being one- and multiple-track minded simultaneously.”[7]

Modularisation, another fundamental software design principle – probably the most fundamental of all – can be seen as a concrete instantiation of the abstraction principle driven by a separation of concerns, materialised in designed and coded artefacts called modules, subroutines, objects, components, services, or subsystems, depending on granularity, with the usual variation in mileage.

The goal of modularisation is to place distinct functionality and responsibility in different modules, as long since advocated by one of the Fathers of the Church of programming[8], a goal which is also very much the core of the UNIX philosophy and the concept of DOTADIW, or "Do One Thing And Do It Well"[9].
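
In the DOTADIW spirit, a small sketch with made-up helpers: each function does one thing and does it well, and composition does the rest, much like a shell pipeline would.

```python
# Each step does one thing well; composition does the rest,
# roughly the shell pipeline: grep -v '^$' | wc -l

def read_lines(text: str) -> list[str]:
    return text.splitlines()

def non_empty(lines: list[str]) -> list[str]:
    return [line for line in lines if line.strip()]

def count(lines: list[str]) -> int:
    return len(lines)

log = "ok\n\nerror\nok\n"
print(count(non_empty(read_lines(log))))  # -> 3
```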

In the design and definition of the module itself, apart from doing one specific thing well, the principles of abstraction and separation also apply: the module’s interface is abstracted from its implementation and kept separate from it, as so succinctly put by John Ousterhout:

“In modular programming, each module provides an abstraction in the form of its interface. The interface presents a simplified view of the module’s functionality, the details of the implementation are unimportant from the standpoint of the module’s abstraction, as they are omitted from the interface.”[10]
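
A hedged sketch of that separation, with hypothetical names: callers see only the Queue interface, and either implementation can be swapped in without their knowledge.

```python
from abc import ABC, abstractmethod
from collections import deque

class Queue(ABC):
    # The interface: the module's abstraction, and all that callers see.
    @abstractmethod
    def push(self, item: object) -> None: ...

    @abstractmethod
    def pop(self) -> object: ...

class ListQueue(Queue):
    # One implementation; its details never leak through the interface.
    def __init__(self) -> None:
        self._items: list = []

    def push(self, item: object) -> None:
        self._items.append(item)

    def pop(self) -> object:
        return self._items.pop(0)

class DequeQueue(Queue):
    # A different implementation behind the very same interface.
    def __init__(self) -> None:
        self._items: deque = deque()

    def push(self, item: object) -> None:
        self._items.append(item)

    def pop(self) -> object:
        return self._items.popleft()

def drain(q: Queue) -> list:
    # Written against the interface only; works with either implementation.
    q.push(1)
    q.push(2)
    return [q.pop(), q.pop()]

assert drain(ListQueue()) == drain(DequeQueue()) == [1, 2]
```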

This calls upon another very important principle, known as encapsulation, or information hiding, which builds upon the principles of abstraction and modularisation to encase the module’s data and algorithms within itself, shielding its inner workings from other parts of the system, so that the module ideally operates as an independent software machine.

The most common application of encapsulation involves defining a module by specifying its public interfaces, meant to be known and available for use by other modules, while hiding the details of its internal structure and of how it does what it does. This approach can itself be taken as a principle: the separation of interface from implementation.
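
A small sketch of information hiding, again with invented names: the public interface of a thermostat speaks degrees Celsius, while the internal representation stays hidden and free to change.

```python
class Thermostat:
    # Public interface: degrees Celsius in, degrees Celsius out.
    # The internal representation is hidden and free to change.

    def __init__(self) -> None:
        # Hidden detail: the target is stored in tenths of a degree
        # to avoid float drift; callers never depend on this choice.
        self._tenths = 200

    def target_celsius(self) -> float:
        return self._tenths / 10

    def set_target_celsius(self, degrees: float) -> None:
        if not 5 <= degrees <= 30:
            raise ValueError("target out of safe range")
        self._tenths = round(degrees * 10)

t = Thermostat()
t.set_target_celsius(21.5)
print(t.target_celsius())  # -> 21.5; the tenths never leak out
```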

Finally, there’s a pair of somewhat entangled, yet maximally important, principles known as coupling and cohesion.

Coupling is the degree of interdependence among modules in a software system, and it is something to minimise: modules should be loosely, or weakly, coupled.

Cohesion, or localisation, refers to the strength of association, or relatedness, of the elements within a module – the more the better.

They’re both relative measures, and they both hinge on complexity, particularly the cost of change. A module that does its thing and continues to do so irrespective of changes made in other modules and, at the same time, can be changed by adding or deprecating functionality without imposing changes on other modules, is both highly cohesive and loosely coupled.[11]

“The key measure of cohesion is the extent, or cost, of change. If you have to wander around your codebase changing it in many places to make a change, that is not a very cohesive system. Cohesion is a measure of functional relatedness. It is a measurement of relatedness of purpose. This is slippery stuff!”[12]
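
A brief sketch of both measures at once, with invented modules: the report-building function is cohesive – everything in it is about formatting – while delivery is injected as a plain callable, keeping the coupling to any concrete sender loose.

```python
from typing import Callable

# Cohesive module: everything here is about composing the report text.
def build_report(metrics: dict[str, int]) -> str:
    lines = [f"{name}: {value}" for name, value in sorted(metrics.items())]
    return "\n".join(lines)

# Loose coupling: delivery is injected as a plain callable, so switching
# from console to email to a queue imposes no change on the report code.
def publish_report(metrics: dict[str, int],
                   send: Callable[[str], None]) -> None:
    send(build_report(metrics))

publish_report({"errors": 2, "requests": 130}, send=print)
```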

Processes, qualities, strategies and methods

The above canonical principles, along with some derivatives such as consistency, completeness, and verifiability, permeate all processes through the entire life cycle: not only the design-related ones, like architectural, high-level, and detailed design, but also the downstream, or iteratively incremental, processes of coding and testing.

System qualities such as orderly, well-integrated, and operable concurrency – in the sense of being amenable to efficient synchronisation and scheduling – consistent data persistence, and reliable control of events and exception handling can never emerge in a system whose design missed those principles.

The same software principles will also guide whatever strategy or method is employed in building a software system, whether approached from the data side, focused on data structures; from a classical, function-oriented side; or from an object-oriented or component-based services philosophy, with varying distribution granularity, under a possibly adjacent event-driven stance.

It doesn’t matter if you end up with an on-premises, vintage layered monolith or a microservices architecture in a heavenly SaaS cloud. If you don’t follow the principles abstracted above, you’ll end up with a big ball of mud – and a hairy one at that.

“The third science is that of Machine-design. … Its province is to teach how to give to the bodies constituting the machine the capacity for resisting alteration of form mentioned in our definition.”

– Franz Reuleaux, The Kinematics of Machinery, 1876


[1] SWEBOK Evolution – IEEE-CS SWEBOK V4 Public Review (3rd Batch), Comments Closing 9 January 2022. computer.org/volunteering/boards-and-commit..

[2] Lori Cameron (2018). Margaret Hamilton: First Software Engineer. https://www.computer.org/publications/tech-news/events/what-to-know-about-the-scientist-who-invented-the-term-software-engineering

[3] Giambattista Vico (1668–1744). The Verum-Factum Principle. https://iep.utm.edu/vico/#SSH2c.i

[4] https://riddick.fandom.com/wiki/Necromonger_Empire

[5] https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm

[6] Edsger W. Dijkstra (1972). The Humble Programmer. CACM, October.

[7] Edsger W. Dijkstra (1974). On the role of scientific thought. https://www.cs.utexas.edu/users/EWD/transcriptions/EWD04xx/EWD447.html

[8] David L. Parnas (1972). On the criteria to be used in decomposing systems into modules. CACM, December.

[9] https://en.wikipedia.org/wiki/Unix_philosophy

[10] John Ousterhout (2018). A Philosophy of Software Design. Yaknyam Press.

[11] https://wiki.c2.com/?CouplingAndCohesion

[12] David Farley (2021). Modern Software Engineering: Doing What Works to Build Better Software Faster. Addison-Wesley Professional.