
Cybersecurity for Devs

☕ 4 min read

Program security into your software.

Some things to keep in mind to ensure your software is up to security standards, broadly separated into two distinct categories.

This distinction is mainly there for readability, as security requires all roles to be on the same page.
A system is only as strong (or secure) as its weakest link.

Development

Minimize complexity

Complexity is the enemy of good software (especially unnecessary or accidental complexity).
It is also the enemy of secure software.

The more complex a system is, the more nooks and crannies can be exploited.

Make sure your UI is simple: your users don’t need 20 ways to log into your service.
No one can keep track of 20 different user flows; you are bound to overlook something.

Consider each software integration as a possible security risk, each supported platform as a new Pandora’s box to be opened.
This doesn’t mean you should avoid these things altogether, just keep in mind that they imply risk.

Software minimalism is often misguided, but it has to be conceded that the larger the attack surface, the more likely an attack is to succeed.

Clearly defined boundaries

It’s common knowledge that all inputs to a system should be validated.
For this to make sense, one needs a clearly defined system.

This varies with context: your system might be a back end, a microservice, a database, a function…
Define the system you want to secure, so you know where to thoroughly validate input.

As an added bonus, having clearly defined boundaries will ensure that when something goes wrong, the issue can be contained.
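
To make this concrete, here is a minimal Java sketch of validating everything that crosses one such boundary; the RegistrationService and RegisterRequest names, and the specific checks, are hypothetical examples rather than a prescription.

```java
// A minimal sketch of validating all input at a clearly defined boundary.
// RegistrationService, RegisterRequest and the concrete checks are hypothetical.
public final class RegistrationService {

    // The boundary's input type: a plain record carrying the raw fields.
    public record RegisterRequest(String email, String password) {}

    public void register(RegisterRequest request) {
        // Validate everything the moment it crosses the boundary.
        if (request == null) {
            throw new IllegalArgumentException("request must not be null");
        }
        String email = request.email();
        if (email == null || !email.matches("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$")) {
            throw new IllegalArgumentException("invalid email address");
        }
        if (request.password() == null || request.password().length() < 12) {
            throw new IllegalArgumentException("password must be at least 12 characters");
        }
        // Only validated data reaches the inside of the system.
        createAccount(email, request.password());
    }

    private void createAccount(String email, String password) {
        // persistence omitted
    }
}
```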

Restrict access

Give your code as little privilege and visibility as possible.
Make your methods and fields private by default and only open them up as needed.

This is a good practice in general, but it also improves security by limiting unintended or unwanted uses of your code.
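
As a small, hypothetical Java sketch of what “private by default” looks like in practice (the Account class and its members are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of "private by default": internal state and helpers are
// hidden, and only the operations callers actually need are public.
public final class Account {

    private long balanceInCents;                        // not reachable from outside
    private final List<String> auditLog = new ArrayList<>();

    public void deposit(long amountInCents) {
        if (amountInCents <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        balanceInCents += amountInCents;
        audit("deposit " + amountInCents);
    }

    public long balance() {
        return balanceInCents;
    }

    // Stays private until a concrete need to expose it appears.
    private void audit(String entry) {
        auditLog.add(entry);
    }
}
```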

This is not limited to classes and methods, but rather a general approach when defining who has access to/does what in a system.

Prefer Whitelisting

It follows from the previous point that one should prefer whitelisting over blacklisting.
This means blocking all access to a system by default and granting it on a case-by-case basis (in contrast to allowing full access by default and blocking it in certain cases).
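
A minimal, hypothetical sketch of an allowlist check in Java (the role names are made up); the design choice is that anything not explicitly allowed, including roles added to the system later, is rejected:

```java
import java.util.Set;

// A minimal sketch of whitelisting: everything is denied unless it is
// explicitly on the allow list.
public final class ReportAccessPolicy {

    // Deny by default: only these roles may export reports.
    private static final Set<String> ALLOWED_ROLES = Set.of("ADMIN", "AUDITOR");

    public boolean mayExportReport(String role) {
        // Unknown or null roles are rejected automatically.
        return role != null && ALLOWED_ROLES.contains(role);
    }
}
```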

Create alarms

Performing an action clearly outside the use case of the system should set off automated alarms.
A delete operation performed 1200 times in 1 minute in the main production database is likely more than a bug (and even if it is, you probably still want to be notified ASAP).

Set up a useful logging system, and have alarms cover edge cases and/or problematic user flows (logins, payments, etc.).
This keeps your reaction time as short as possible.
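
As a rough illustration, here is a hypothetical Java sketch of a sliding-window alarm on delete operations; in a real system you would typically wire this into your existing logging/monitoring stack, and the threshold, window and AlertSink are assumptions. The point is not the exact mechanism but that a clearly abnormal rate of destructive operations triggers a notification automatically.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Deque;

// A minimal sketch of an in-process alarm: if more deletes happen within a
// one-minute window than the threshold allows, notify someone.
public final class DeleteRateAlarm {

    interface AlertSink { void raise(String message); }

    private static final int THRESHOLD = 100;
    private static final Duration WINDOW = Duration.ofMinutes(1);

    private final Deque<Instant> recentDeletes = new ArrayDeque<>();
    private final AlertSink sink;

    public DeleteRateAlarm(AlertSink sink) {
        this.sink = sink;
    }

    public synchronized void onDelete(Instant now) {
        recentDeletes.addLast(now);
        // Drop events that fell out of the sliding window.
        while (!recentDeletes.isEmpty()
                && recentDeletes.peekFirst().isBefore(now.minus(WINDOW))) {
            recentDeletes.removeFirst();
        }
        if (recentDeletes.size() > THRESHOLD) {
            sink.raise(recentDeletes.size() + " deletes in the last minute");
        }
    }
}
```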

Check for common risks


You can’t cover all security risks, but you can avoid the most common ones: the OWASP Top 10 is a good starting point.
More broadly, browse the OWASP resources every once in a while.
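
As one concrete example of avoiding a common risk (injection), here is a minimal Java/JDBC sketch that binds user input as a parameter instead of concatenating it into SQL; the table and column names are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// A minimal sketch of guarding against SQL injection with a prepared statement.
public final class UserRepository {

    private final Connection connection;

    public UserRepository(Connection connection) {
        this.connection = connection;
    }

    public boolean userExists(String email) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE email = ?";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, email); // bound as a parameter, never concatenated
            try (ResultSet result = statement.executeQuery()) {
                return result.next();
            }
        }
    }
}
```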

Test for security

Even if you pentest your system regularly, writing security-driven tests can alleviate a lot of the work.

Write your security requirements as tests, like you would write an acceptance test for a use case.
Use fuzz testing to discover unwanted behavior.

If you fix a security issue, write a test to ensure it doesn’t happen in the future.
Even better, use TDD to fix it!
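
As a hypothetical sketch of a security requirement written as a test (JUnit 5 assumed), with the Username value type made up for illustration; once such a test exists, a regression of the fixed issue is caught immediately:

```java
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

// A minimal sketch of security requirements pinned down as tests.
class UsernameSecurityTest {

    // Hypothetical value type that validates its own input.
    static final class Username {
        private final String value;

        Username(String value) {
            if (value == null || value.isBlank() || value.length() > 64
                    || !value.matches("[A-Za-z0-9_.-]+")) {
                throw new IllegalArgumentException("invalid username");
            }
            this.value = value;
        }

        String value() { return value; }
    }

    @Test
    void rejectsOversizedInput() {
        assertThrows(IllegalArgumentException.class, () -> new Username("a".repeat(100)));
    }

    @Test
    void rejectsInjectionCharacters() {
        assertThrows(IllegalArgumentException.class, () -> new Username("admin'; --"));
    }
}
```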

Organizational

Involve your PO

It is very uncommon for security to be an explicit requirement, but it is always an implicit one.
Even if your PO or your users don’t know it, they do expect it.

Nobody asks for security, but they will blame you when something bad happens.
Do your best to explain these issues to your PO, even if sometimes this implies not giving the users what they ask for.

Have a plan

Don’t assume that nothing bad can happen, nor that you will be able to recover from it.

Have a reliable backup system and some sort of contingency plan.
Ensure you can roll back the state of your software, which is facilitated by small release cycles (think continuous delivery).
This way, when a vulnerability is introduced, you can quickly roll back even before you start fixing it.

Audit your system

Reacting quickly is key, but preventing security issues is far more cost-effective.

Have a bounty program, call a pentester once in a while.
Even better, learn the basics of pentesting yourself!

Remove trust from the equation

It is often thought that proprietary software is more secure by virtue of being opaque.
This is known as “security by obscurity” and, while it can make sense in some contexts, it is generally not recommended (especially not on its own).

Either outsource security to a third party based on the guarantees it offers (and its SLA), or use open standards and software.

Auditable, open, and widely used protocols and software are very unlikely to be insecure, and even if they are, you can easily patch or replace them.
