
Myths in Software Development

☕ 8 min read

Separating the wheat from the chaff.

Conventional wisdom can go a long way and is often a useful guide.
However, it is usually best served with a healthy dose of skepticism and scrutiny.

Here, we explore the good, the bad, and the ugly behind some catchphrases frequently thrown around by devs.

TDD

Requires clairvoyance

Claim(s)

TDD requires developers to plan code in advance: One needs to have a map/plan of the code in one’s head beforehand to know what to test for.
Planning software from a whiteboard doesn’t work, one learns about the problem at hand while building the software.

Therefore, since devs are not clairvoyants, TDD brings nothing of value to the table.

Discussion

If you need to plan out the code in your head before writing a test for it, you are likely doing TDD wrong.
When writing tests, you should worry only about the code's interface and behavior.
Leave the implementation and internal structure for steps 2 and 3 (green, refactor).

TDD is not a way of planning out your code with boxes on a whiteboard, quite the contrary: it incentivizes you to design as you go.
Part of its beauty is that it helps detect and resolve unknowns before they cause issues.
It forces you to think about the usage and behavior of your code first, before worrying about implementation details, simplifying the tests and aiding in the design.
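
To make this concrete, here is a minimal sketch of what the first (red) step might look like, assuming a hypothetical `slugify` helper and pytest as the test runner. The tests pin down the interface and the expected behavior; no implementation exists yet, so they fail, and only then do you write the simplest code that makes them pass (green) and clean it up (refactor).

```python
# Red phase: describe the desired interface and behavior first.
# `slugify` and its module are made-up names for illustration.
import pytest

from myproject.text import slugify  # doesn't exist yet, so this fails first


def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_slugify_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("")
```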

Is wasteful if you do MVP + re-write

Claim(s)

All good software needs to be written at least twice.
If one builds a quick MVP to discover the unknowns and understand the problem, one can just throw it away and start anew: TDD (and testing more broadly) would only slow down the process.

Therefore, there is no need to write the tests first (or at all during the MVP phase).
Rather, one should write a quick and dirty MVP first (MDD?).

Discussion

Even accepting that “all good software is written twice” (which is debatable), it goes without saying that MVPs almost never get thrown away.
Rather, they stick around because “there is no time to rebuild it” and become production software.
This by itself means that the claim doesn't hold water in real-world enterprise scenarios.

However, for the sake of argument, let’s say that this wasn’t the case.
Let’s assume that deleting an MVP and starting from scratch is actually the norm.

Wouldn’t it be easier and quicker to rebuild the software against test cases gathered/written while building the MVP?
What’s the alternative, writing down what you learned on a piece of paper? Not writing tests for the MVP but writing them for the first re-write? What about the second?
Why not have them already there once you decide to re-do the project?
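
As a small, hypothetical illustration: a behavior-level test written during the MVP phase keeps its value across a rewrite, because it only exercises the public interface and not the throwaway implementation behind it. The `parse_invoice` name and module below are made up for the example.

```python
# Written while building the MVP; still valid after the rewrite,
# since it only depends on the public interface, not the internals.
from invoices import parse_invoice  # hypothetical public entry point


def test_parse_invoice_extracts_total_and_currency():
    raw = "INVOICE 42\nTOTAL: 199.99 EUR"
    invoice = parse_invoice(raw)
    assert invoice.total == 199.99
    assert invoice.currency == "EUR"
```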

CI/CD

Is the same as a pipeline

Claim(s)

Once there is a pipeline, there is CI/CD.
Doesn’t matter how often branches are merged or how often you deploy, as long as there is a pipeline.

Discussion

It says it right on the tin: Continuous Integration, Continuous Deployment.
You can have the fanciest pipeline around: that alone won’t make your development Continuous.

If you don’t integrate feature branches with master multiple times a day (or skip them entirely) you are not doing CI.
If you don’t deploy at least once a day you are not doing CD.

Of course, the frequency with which you integrate/deploy is not set in stone: each project has its own limitations and scope.
Timeframes should be adapted accordingly.

The point is: a pipeline is a necessary part of CI/CD, but not a sufficient one.
Nothing more, nothing less.

Automated Deployment == Continuous Deployment

Claim(s)

Once the deployment process has been automated, there is CD.
CD is the absence of manual deployments.

Discussion

As was the case above, automated deployments are a required part of CD (especially at scale and in the long run).

However, automation alone is not a sufficient condition to consider your deployments continuous.
You may, for example, have a fancy deploy script but only run it once a month.

What marks your deployments as Continuous is the frequency with which they are done.
This depends on the team’s workflow and practices, not on deployment scripts.

Clean Code

Horrible performance

Claim(s)

The “clean” code rules were developed because someone thought they would produce more maintainable codebases.
Even if that were true, you’d have to ask, “At what cost?”
(source)

Discussion

Nowhere in the book does the author advocate for clean code at all costs.
Quite the contrary, it is mentioned several times that code efficiency must be taken into account:

I will avoid [the more efficient solution] if the [efficiency] cost is small.

This claim is either missing the point entirely, based on a woefully misguided reading of the book or simply clickbait.

No one is advocating for a complete disregard for performance.
Rather, it is suggested that you should write your code with other people in mind.

Eventually, someone will have to maintain your code: make sure there is a good reason to make it hard to work with.

The reality is that for most development (outside games, big data processing and embedded), performance is nearly irrelevant (or at least far down the relevance list).
In most cases, the real-world, practical performance difference between clean and dirty code is negligible.
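
As a rough, made-up illustration (not a benchmark): below are two ways to total an order, a "clean" version with a named helper and a denser one-liner. Both do the same O(n) work, and in a typical CRUD-style application choosing between them is a readability decision, not a performance one. All names and the tax rate are invented for the example.

```python
from dataclasses import dataclass

TAX_RATE = 0.21  # illustrative value


@dataclass
class Item:
    price: float
    quantity: int
    taxed: bool


def item_total(item: Item) -> float:
    """Total for a single line item, including tax when applicable."""
    subtotal = item.price * item.quantity
    return subtotal * (1 + TAX_RATE) if item.taxed else subtotal


def order_total_clean(items: list[Item]) -> float:
    return sum(item_total(item) for item in items)


def order_total_dense(items: list[Item]) -> float:
    return sum(i.price * i.quantity * ((1 + TAX_RATE) if i.taxed else 1) for i in items)
```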

To be fair, there are other reasons not to write clean code:

  • Ultra-tight deadlines and/or startup culture often push clean code aside (hopefully with the intention of re-visiting or re-writing the code later on).
  • Patching legacy/deprecated code doesn't lend itself very well to writing clean code, as you would likely need to do a lot more work (as in weeks instead of days) than you should.

This doesn’t mean clean code == bad, it means you should use your own judgement and decide when to apply which principles and in what measure.

As with most things, it’s a trade-off.
Your code doesn’t need to be text-book-clean, but you should aspire to keep it as clean as reasonable given your circumstances.

Is in the eye of the beholder

Claim(s)

What’s clean to one person might not be clean to another.
Advanced programmers might find code easy to read that is incomprehensible to beginners.
Each language has its own idioms (e.g., Golang with its short variable names).

Therefore, the concept of clean code is too relative to be of any help.
What’s clean depends on the context.

Discussion

On the one hand yes, what is or isn’t clean/readable depends on the context.
Your team might be used to working with 2000+ LOC files. To them, this might be normal and desirable.
This is not the end of the world; if it works for the team, then it's all good.

On the other hand, not everything is relative.
Calling a variable x is objectively less clear than giving it a decently descriptive name.
Writing your code in a way that can be understood by a beginner (or by someone coming from a different background/language than yours) has clear and obvious advantages.
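
A tiny, made-up example of the difference (the second version needs no comment to explain itself):

```python
# Same computation, different names.
def f(x, y, z):
    return x * y * (1 - z)


def net_price(unit_price: float, quantity: int, discount_rate: float) -> float:
    return unit_price * quantity * (1 - discount_rate)
```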

Using your language's latest super-fancy, concise, ultra-functional gimmick is less about clean code and more about showing off.

Clean code doesn’t look the same in all projects/teams/contexts, sure.
Just don’t use subjectivity as an excuse to disregard best practices and write code like you want to.

Linux

Is free if you don’t value your time

Claim(s)

Linux (Desktop) requires too much time to set up/maintain/configure to be worth the effort.
Whatever you gain from using Linux, it is offset by the amount of time and attention it requires.

Discussion

Linux can be as much of a time sink as you want it to be.
The possibilities and configurations are endless and none are out of reach.

That being said, nothing prevents the use of a ready-to-use distribution.
These come already set up/configured and require little to no maintenance.

Since this is quite obvious, a more charitable reading of the claim might be something like: “The amount of stuff you have to learn is not worth the effort.”

Learning a new anything, a new OS in this case, implies … a learning process, which requires time and might be frustrating.
Linux being FOSS adds to the amount of learning required, since most of us come from using proprietary software.

If the user is not interested in learning a new skill set, he should probably stick to what he already knows.

Breaks all the time

Claim(s)

Linux (Desktop) is unstable, unreliable and breaks all the time.
Therefore, Linux-based OSs, although useful in servers, are not fit for workstations/professional environments.

Discussion

I’ve been personally working from a rolling release distribution, widely considered unstable and breakage-prone for more than 3 years ATOW.
In this time, my Linux machine broke twice:

  • The first one as a consequence of me running commands I didn't understand with sudo.
  • The other one due to a combination of my being silly and upstream changes (a borked update that should have been easy to recover from).

In both cases, a fresh installation and setup was done in no time (thanks to setup scripts).

In contrast, I’ve lost count of how many times I’ve had to unbork my Windows machine after an update.

This claim contains a kernel of truth: Linux allows the user to break things, while the alternatives usually limit what the user can do so much that the only reasonable way something breaks is if they (Microsoft, Apple) break it.

This is easier on the user’s ego because it makes him inherently blameless.
The system likely “breaks less” because he is out of the equation.

It’s true that this is generally considered “good design”, but I am not sold on the idea.
You don’t blame the language when you see a “SegFault” or a StackOverflow, you blame the dev(s) for doing a poor job.

Plus, what’s so terrible about breaking things? That’s the best way to learn!
Just have a backup and you’re golden.
