Every startup that develops software needs to think about quality. Of course, not every startup creates mission-critical code or medical software where bugs can kill patients. But as a startup, you don’t want to scare away your lead customers and early adopters with failing software either. While many software quality aspects only pertain to a fully established company, you need to build quality into the company mindset from the get-go. Trying to add quality later requires a change of process and mindset, and you may even have to fire and hire to get it right.
So what about quality? Good software quality comes in many different forms:
That is a long list. This is what usually happens in startup land: “Sure, I run agile development, so I don’t need to write specs” and “I don’t have time for all this; we need to get our first product to the customer!” So what happens if you don’t build in quality up front?
So you’ve released the first product and written a million lines of code. Then it starts failing at your customers. So you decide to start writing tests. For the million lines of code you have already developed? That means you can close shop for months!
While writing tests, you will probably uncover performance, power consumption, or security problems in code that was bolted together without a solid design. What could have been a simple refactoring early on now means redesigning the code from scratch. Netscape did a full redesign from scratch, and in the process its market share plummeted. Joel Spolsky on Netscape’s demise:
They did it by making the single worst strategic mistake that any software company can make: they decided to rewrite the code from scratch.
A lack of tests is also common in code acquired from elsewhere, not least from open source. The only way a quality-minded company can work with such code is to spend significant time adding tests. A proven approach is to first set up continuous integration and code coverage measurement, and then incrementally improve the code coverage by adding tests.
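As a sketch of that incremental approach (assuming coverage.py and a CI job that already runs the test suite; the baseline file name is an assumption), a small ratchet script can fail the build when coverage drops and raise the bar whenever it improves:

```python
#!/usr/bin/env python3
"""Coverage ratchet sketch: compare overall coverage against a stored baseline,
fail when it drops, raise the baseline when it improves. Assumes
`coverage run -m pytest` has already produced a .coverage data file."""
import io
import sys
from pathlib import Path

from coverage import Coverage

BASELINE_FILE = Path("coverage_baseline.txt")  # hypothetical file, kept in git


def main() -> int:
    cov = Coverage()
    cov.load()
    percent = cov.report(file=io.StringIO())  # returns total line coverage in %

    baseline = float(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else 0.0
    if percent + 0.01 < baseline:
        print(f"Coverage dropped: {percent:.1f}% is below the baseline of {baseline:.1f}%")
        return 1
    if percent > baseline:
        BASELINE_FILE.write_text(f"{percent:.2f}\n")
        print(f"Coverage baseline raised to {percent:.1f}%")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Run it as the last step of the CI job; committing the baseline file makes every improvement the new minimum.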
Similarly, if you run static analysis tools and coding-standard checkers such as MISRA on an existing code base, you will spend weeks getting to a clean report. Adding all the exceptions needed to silence false positives and getting the reported violations back to zero is virtually impossible, especially if the original coder is no longer around to explain the intent of the code or the constructs used.
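In a Python code base, the closest equivalent of a MISRA deviation record is an inline suppression with a documented reason. A made-up example of what such a reviewed exception looks like with Pylint:

```python
import logging

logger = logging.getLogger(__name__)


def load_plugin(path: str):
    """Load an optional plugin; a broken plugin must never crash the product."""
    try:
        return __import__(path)
    # Reviewed deviation: plugins may raise anything, so catching broadly is intentional.
    except Exception:  # pylint: disable=broad-except
        logger.warning("plugin %s failed to load, continuing without it", path)
        return None
```

Each suppression carries its justification in the code, so the clean report stays honest and reviewable.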
What if your customer demands that your code is certified? You’d have to show the specs and show how each requirement traces to a test plan and to a passing test, and vice versa from test to plan to requirement. Again, creating such documentation after the fact is a huge effort. Moreover, if your developers never got used to writing such design documentation, you will first have to change the mindset of your people. Imagine telling a free climber that he suddenly must use a safety rope.
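One lightweight way to build up such traceability from day one is to tag every test with the requirement it covers. A sketch assuming pytest, with a made-up requirement ID and a hypothetical `auth` module:

```python
import pytest

from auth import validate_token  # hypothetical module under test


# Register the marker once in pytest.ini to avoid warnings:
#   [pytest]
#   markers = requirement(req_id): link a test to a requirement
@pytest.mark.requirement("REQ-007")  # e.g. “The system shall reject expired tokens”
def test_expired_token_is_rejected():
    assert validate_token("expired-token") is False
```

A small script can later collect these markers into the traceability matrix an auditor wants to see.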
So where to start? Make a good assessment of which elements you need for your domain. Agile teams often devote the first iteration(s) to setting up a good development and quality environment. Such a first “infrastructure sprint” always pays off in later sprints, saving time in building, testing, debugging, and reverse engineering. My recommended absolute minimum to get going:
Sounds expensive? Most tools are open source, or vendors have a startup-friendly license. Sounds like a lot of work? Not really. Most of these can be set up in a few days, saving weeks to months later on. There are even specialist service companies that can set up such a “software street” for you. Typically that includes static and dynamic analysis tools (and a large invoice).
What if you are already (partially) in the woods? How do you maintain good quality over time? In my experience, the key mantra is:
Leave the code you touch in better shape than you found it.
A small refactoring to clean up a botched design, a few fixes to remove coding standard violations, an extra test for every bug triggered in the field, and so on. Get this into the mindset and daily habits of your programmers and your code quality will improve remarkably fast.
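The “extra test for every field bug” habit is the cheapest of these to adopt. A made-up example of such a regression test, with the issue ID in the test name so the fix stays traceable:

```python
from invoicing import round_vat  # hypothetical module where the field bug surfaced


def test_issue_142_zero_amount_does_not_crash_vat_rounding():
    # Field report: invoices with a zero amount crashed the VAT rounding step.
    assert round_vat(0.0) == 0.0
```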
What really helps with this incremental improvement is tooling to track quality. At Vector Fabrics we developed a simple script that runs a coding style checker such as Pylint on Python code at every commit and push to git. The script enforced that newly written files adhere to a certain minimal quality and that edits to an existing file do not make things worse. Over time, you can raise the bar and require every edit to actually reduce the number of coding style violations. At a larger scale, you can enforce this in continuous integration, tracking a trend of your overall quality from static analysis tools and the like. The Dutch company TIOBE does this nicely, creating an “energy label” for your software, e.g. rating your software quality as class D. While this is based on some magic formula that combines trends in static analysis, code coverage, and so on, the simple view does give a strong incentive to improve or stay on top.
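To make the commit-hook idea concrete, here is a minimal sketch of such a check. It is not the original Vector Fabrics script; the score threshold and baseline file name are assumptions:

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: new files must meet a minimum Pylint score and
edits may not lower a file's previous score."""
import json
import subprocess
import sys
from pathlib import Path

MIN_SCORE_NEW_FILES = 8.0                  # assumed minimum for new files
SCORES_FILE = Path(".pylint_scores.json")  # assumed per-file baseline, kept in git


def pylint_score(path: str) -> float:
    result = subprocess.run(["pylint", path], capture_output=True, text=True, check=False)
    # Pylint ends its report with e.g. "Your code has been rated at 8.73/10"
    for line in result.stdout.splitlines():
        if "rated at" in line:
            return float(line.split("rated at ")[1].split("/")[0])
    return 0.0


def main() -> int:
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    baseline = json.loads(SCORES_FILE.read_text()) if SCORES_FILE.exists() else {}

    failed = False
    for path in (p for p in staged if p.endswith(".py")):
        score = pylint_score(path)
        required = baseline.get(path, MIN_SCORE_NEW_FILES)
        if score < required:
            print(f"{path}: Pylint score {score:.2f} is below the required {required:.2f}")
            failed = True
        else:
            baseline[path] = max(score, required)  # the bar only moves up
    SCORES_FILE.write_text(json.dumps(baseline, indent=2))
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(main())
```

Installed as .git/hooks/pre-commit, it blocks commits that regress; the same check can run in continuous integration on every push.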
Summing up: changing a running software team is painful, so you’d better make sure good quality is ingrained in your way of working right from the start. And with open-source tooling and the legacy of agile and extreme programming, that is not as hard as you may think.
What techniques and processes would you consider the minimum required?
About Martijn Rutten
Fractional CTO & technology entrepreneur with a long history in challenging software projects. Former CTO of scale-up Insify, changing the insurance space for SMEs. Former CTO of fintech scale-up Othera, deep in the world of securitized digital assets. Coached many tech startups and corporate innovation teams at HighTechXL. Co-founded Vector Fabrics on parallelization of embedded software. PhD in hardware/software co-design at Philips Research & NXP Semiconductors. More about me.