Here is a statement I strongly believe to be true:

There can be multiple orders of magnitude of difference in the difficulty of validating software systems of equivalent complexity.

This statement is not meant as hyperbole. While there is no way to precisely measure validation difficulty (and comparing the relative complexity of two unrelated systems is highly subjective), I still hold that the overall difference in effort can be multiple orders of magnitude. If it takes one hour to validate a change in one system, it could take over one hundred hours in another. This belief comes from observing and participating in the development and testing of hundreds of systems across dozens of companies over the last 22 years.

How could this possibly be? Yes, software can be poorly designed and poorly implemented, but surely that would result in only a 50% increase in difficulty, or 100% at most, no? How is it possible to implement software so poorly that it leads to a difference of multiple orders of magnitude in difficulty?

Having dropped the hook of the article, I wish I could introduce the Three Design Patterns Guaranteed to Make Your Systems More Testable! and head right into an explanation of each. Unfortunately, our problem is not that simple. If creating testable software systems were as easy as applying a design pattern, we would all already be enjoying highly testable software. And I wouldn't need to write this article.

Instead, I'm going to try to convince you that this massive difference in difficulty actually exists and why it matters. I'm then going to talk about how quality assurance, the whole discipline of how we test software, is insufficient and can actually make the problem worse. This will lead to a simple but different view on quality, one that naturally leads to more testable systems and more predictable, effective software delivery.

In order to convince you of this massive difference in testability, I could describe in detail two software systems on opposite ends of the testability scale. I would have to describe the overall architecture, the layers of system design, the implementation of every component within each system, the tech stack used, all the interactions across all the boundaries with other systems, the data model, the persistence strategy, and probably a lot more as well. After this, I would have to go into detail on the validation strategies that could be used on each, how test data management would work, how and where automation would be written for different types of tests, and where it could be run in a CI pipeline. I could then show how the nuances of the architecture, design, implementation, and tech stack decisions come together to create a frustrating (or straightforward) validation experience. Seeing these two equivalent systems side by side (one horrendously untestable, one delightfully testable) would hopefully convince you of the order-of-magnitude difference in testability.

Each of these descriptions could easily run to dozens of dense pages, and unfortunately (or fortunately) this is a blog article, not a book. Thus, I have to go with a more condensed strategy: I'm going to describe a variety of symptoms of hard-to-test software, the frustrations and challenges that you too may have dealt with, and hope that stepping back to consider them in aggregate convinces you that some software just might be significantly more challenging to test.

In no particular order, here are some infamous symptoms of hard-to-test software:

All of these are real examples from real teams building real software. I hope that, reflecting on these symptoms and your experience with them, I can persuade you that they are not natural challenges that everyone deals with as part of normal software delivery, but pernicious smells indicating something is horribly wrong: that you were dealing with innately untestable software.
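To give a small taste of that contrast before the symptoms, here is a deliberately tiny, hypothetical sketch of my own (not one of the real systems alluded to above): the same account-lockout rule written once against shared global state and the real system clock, and once as a pure function with its inputs passed in. All the names (`ACCOUNTS`, `is_account_locked_v1`, and so on) are invented for illustration.

```python
import datetime

# Global state standing in for a real datastore -- v1 reaches into it directly.
ACCOUNTS = {42: {"failed_logins": 3,
                 "last_failure": datetime.datetime.now()}}


def is_account_locked_v1(account_id: int) -> bool:
    """Hard to test: reads shared global state and the real wall clock.
    A test must mutate ACCOUNTS and then race actual elapsed time."""
    row = ACCOUNTS[account_id]
    return (row["failed_logins"] >= 3 and
            datetime.datetime.now() - row["last_failure"]
            < datetime.timedelta(minutes=15))


def is_account_locked_v2(failed_logins: int,
                         last_failure: datetime.datetime,
                         now: datetime.datetime) -> bool:
    """Easier to test: the same rule as a pure function over plain values."""
    return (failed_logins >= 3 and
            now - last_failure < datetime.timedelta(minutes=15))


def test_lockout_expires_after_15_minutes():
    # The clock is just another input, so both sides of the boundary
    # can be checked instantly and deterministically.
    now = datetime.datetime(2024, 1, 1, 12, 0)
    assert is_account_locked_v2(3, now - datetime.timedelta(minutes=14), now)
    assert not is_account_locked_v2(3, now - datetime.timedelta(minutes=16), now)
```

At this micro scale the difference is a minor annoyance. Multiply decisions like this across every component, boundary, and data store in a large system, and the gap in validation effort is how you get from one hour to one hundred.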