At some point, many executives and managers have to ask themselves:
- we’re spending how much of our precious HR budget on QA,
- and we still have too many production bugs,
- so we need to spend more?
Without a new strategy, that could be just throwing good money after bad. A lot of leaders in technology and tech-related fields know they need better QA, but have a hard time defining their vision or being confident that their expectations are realistic. Let’s explore where QA came from and where it can take us.
The Origins of QA: Circle the Wagons!
Quality Assurance started in factories back when some people still thought that automobiles were a fad. The Industrial Revolution was in full swing, and people who had grown up using and making personal, individually-crafted items in a village shed were now on assembly lines making stand-alone parts with huge new machines in vast concrete halls. Their production was compartmentalized and context-free; a steel rod might be shipped off to turn a watermill or to become a driveshaft for one of those “horseless carriages” people were always on about. The results were often disheartening. Walter Shewhart introduced the management science concept of “Plan-Do-Study-Act”1, formalizing the first iteration of QA as the “Study” part of his formula. Products were measured against expectations. The results led others to suggest specific corrections to individual items, occasionally to the design, or, even more rarely, to the manufacturing process itself.
Many think of software QA as a modern analog of that. NASA’s Office of Safety & Mission Assurance describes it as: “The planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards and procedures”.2
This is not wrong as a mission statement, but it is incomplete and outdated. Yes, I just criticized NASA, if only to remind us all that if we are unclear about the expanding value and nature of QA, we are in good company. The problem with that statement is that it clearly implies the old analogy for the QA workflow: “Wait for a box full of features to arrive at the QA workstation from wherever, test them with a checklist, send them on to… wherever.” By focusing on a set of narrowly defined tasks, it limits QA’s contribution to a small number of responsibilities.
The Recent History of QA: Spaceships!
The core of any QA discipline is still making sure things look and work as expected. In software development though, just what that means and how it’s done is expanding beyond what anyone could have imagined. Why?
Well, part of the story is “because computers”. Making steel rods is very different from making software. The tools are now languages that no one actually speaks and third-party apps. The products cannot be carried, touched, or tasted and are often intangible tools themselves. The development process has evolved from a self-taught expert in a friend’s garage (like the village artisan mentioned above) to a group effort made possible only by the features and communication other software provides.
A brief demonstration of the vast differences between what we make (and have to test) now and what we used to make: the Apollo missions to the moon happened about forty years after Shewhart introduced QA for new-fangled “factories”. The entire computing power of NASA (the whole organization and all its assets) was, to put it gently, roughly that of an average smartphone today. Fifty years later, ensuring quality for software and related products in this environment is clearly different from making sure steel rods are straight. We already grok3 that.
In the next post, we’ll take up the question of where QA can go from here, and a way to move forward: Evolved QA.
*QA versus Testing: that distinction is worthy of its own series. In these articles, intended mainly for managers and executives whose responsibilities directly or indirectly include software development, we assume that testing is “interested in the product” and is a part of QA, which is “interested in the process”.4