So you think you're a programmer...
Fellow programmers (or at least we think we are),
I've always been bothered by the term "code construction" because we really don't spend that much time constructing code (especially those of us predisposed to getting a copy of some working code and making appropriate modifications :-) ).
On top of that, I've been bothered by the notion of "software design". There has been a lot of talk about software design for a long time, but it usually isn't really done. Or it's done with great fanfare at the beginning of a project, but then quickly gets left behind when the "code construction" begins (or whatever you want to call the activity during which relatively small code changes and additions are made all over the source code).
C++ and object-oriented languages were supposed to allow the programmer to express the design in code, but we know how that idea turned out.
An article I read recently by one Jack W. Reeves (http://www.bleading-edge.com/Publications/C++Journal/Cpjour2.htm) suddenly brought the whole picture into much better focus. Jack -- whose mantra is "The code is the design" -- didn't get it quite right, but, for me, he is a giant on whose shoulders I stood for a glimpse of the possibilities for the future of software engineering.
It turns out software engineering is a most strange engineering discipline. Huge amounts of time, effort and money go into the process, with frequent complaints about how badly the job is done. What makes software engineering particularly strange is that almost no effort goes into producing the end product. Surely I've lost my marbles saying this, no?
To make a long story longer, consider other engineering disciplines -- civil engineering, for example, as Jack does in his article. Take a bridge (probably a reasonable analogy to programming, because each bridge project is usually one-of-a-kind and there are usually a lot of unknowns going in, very much like a software project): how do engineers design the bridge? How do they know their design will work? How does the bridge get constructed?
Now ask the same questions about software: How is software designed? How do you know it will work? How does the software get constructed?
Let's answer the bridge questions first.
To design the bridge, engineers make a lot of drawings. Some of the drawings are overviews, and some show structural details, and even how the structural elements will be attached to one another. When the drawings are done, the engineers have touched upon everything the builders will have questions about.
To find out if the bridge will work, the engineers do a lot of simulations and feasibility studies, and most likely build some scale models that they may even test in a wind tunnel. The data from the simulations, studies, and scale model tests give definitive answers as to whether the bridge will work. Once the bridge is under construction, there is little doubt that it will work. If the data show the bridge will not work, the process goes back to the drawing board for appropriate changes.
To construct the bridge, the builders follow the plans. There is no deviation from the plan. There is almost never any design validation performed during the construction process (although a certain tunnel in Boston could have used more such validation as it relates to bolts secured in damp concrete with epoxy glue...).
To summarize, the design is a concept that is documented in writing. The design is proven by scientific activities which validate it but do not produce the actual bridge. The bridge itself is created by piecing together structural elements according to the design.
Now the answers for the software questions:
Let's assume the software engineers actually do create a design, and that they write it down in some form (system diagrams, object specifications, perhaps UML, etc.). The design, in this sense, is not very different from the bridge design, especially if the engineers have touched on all relevant issues and have written down their ideas. The software design effectively specifies, to appropriate detail, what the software will consist of, and how it is intended to work.
How do the engineers prove their design will work? It seems the only way to prove the design is to actually write the code! So, all this time, you thought you were programming, but in reality you've been testing your design -- whether you had worked it out in detail and written it down, or whether you had some vague notion in your head that guided your code changes (clearly most programmers operate by the latter method). If the code proves your design has a problem, you change the design, and then make appropriate code changes to prove that the new design works!
So how does the software get built? This is the really bizarre part -- the software is a by-product of the design testing! There is actually no software construction!
So Jack Reeves has forged a path in the correct direction. But sorry, Jack, the code is not the design. The design is the design. The code is a by-product of testing the design.
So how can we make use of this epiphany? Well, just change how you see yourself -- you are no longer a programmer slogging through incomprehensible source code -- instead you are a test engineer proving or disproving a proposed design.
Clearly to test a design, you need to have a design. If the only design in existence is in your head, and -- in particular -- if you are working with other programmers, you really should write it down. Make some system diagrams, write down the key objects, what they contain, what they do, detail the key interactions -- all good information to make sure everyone is on the same page. Once you have written it all down and shared it with your colleagues, go back to testing the design.
What happens when you run into a bug? It depends on the type of bug. If it turns out your code wasn't written to faithfully reflect the design -- for example, you forgot to initialize a variable before using it -- no problem, just fix the code.
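To make the distinction concrete, here is a hypothetical C++ fragment showing exactly this kind of bug -- the function and the design it serves are invented for illustration. The design (compute the arithmetic mean) was sound; the code simply failed to follow it until the initialization was added:

    #include <vector>

    // Design (hypothetical): averageLoad() returns the arithmetic mean
    // of the samples, or 0.0 if there are none.
    double averageLoad(const std::vector<double>& samples)
    {
        double sum = 0.0;  // the bug was omitting "= 0.0" -- a pure coding
                           // error; the design needed no change at all
        for (double s : samples)
            sum += s;
        return samples.empty()
            ? 0.0
            : sum / static_cast<double>(samples.size());
    }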
But what happens if your proof shows there is a design problem? For example, you find that you overlooked a particular situation your software will occasionally encounter, and your design makes no provisions for this situation. Of course, with some sort of quick patch, you might be able to gloss over this shortcoming in the design, but why not go back to your design for a minute and ponder what you overlooked, and perhaps why? More importantly, is there a clean way you can change the design to take this situation into account? Usually the answer is 'yes'.
Programmers (aka design testers) in startup companies working at breakneck speed to be first to market might find it a waste of time to go back and revise the design. What makes more sense than to just patch the code and move on? If that describes you, by all means, forget the design, but do remember it when you finally release your product and are working 18/7 fighting all sorts of fires which fall into the category of "push here, pop there", among others.
Needing to have a design might be really obvious here, but something which is not as obvious -- until you look really closely at the expression "design TESTing" -- is that you also need some sort of tests. Well, what sort of tests might these be?
OK, to be fair, I don't think I've ever encountered a software organization that didn't have some sort of smoke tests, regression tests, and the like. But I rarely encounter an organization that has enough tests, with tests everywhere they are needed to validate design changes. In short, you need an army of tests (not testers) to be able to scientifically demonstrate that your design is sound.
Among my favorite tests are those run by testbed programs, in which I develop objects for integration into larger applications. I virtually never test my object designs in the full application. I virtually always change my object designs and test them in a testbed program. Many of my clients really like the testbeds because they make it a lot easier to see improvements (and regressions :-( ) caused by my latest design changes.
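For the curious, here is a minimal sketch of what I mean by a testbed -- the RouteCache object and the behavior it promises are hypothetical, just to show the shape of the thing: a tiny standalone program that exercises one object directly, with no application wrapped around it:

    #include <cassert>
    #include <string>

    // Hypothetical object under design test -- a stand-in for whatever
    // component is destined for integration into the larger application.
    class RouteCache {
    public:
        void store(const std::string& key, int cost) { key_ = key; cost_ = cost; }
        bool lookup(const std::string& key, int& cost) const {
            if (key != key_) return false;
            cost = cost_;
            return true;
        }
    private:
        std::string key_;
        int cost_ = -1;
    };

    // The testbed: a standalone main() that exercises the object directly.
    int main() {
        RouteCache cache;
        int cost = 0;
        assert(!cache.lookup("A->B", cost));              // empty cache: a miss
        cache.store("A->B", 42);
        assert(cache.lookup("A->B", cost) && cost == 42); // hit returns stored cost
        assert(!cache.lookup("A->C", cost));              // unknown key: a miss
        return 0;  // all assertions passed: this slice of the design holds up
    }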
But testbed tests aren't enough. You also need to prove the soundness of the design and the implementation of the integrated system. Among the tests useful for this purpose are the tests you included in your requirements specification -- remember? The tests which demonstrate *exactly* what each feature is supposed to do? You know, the tests which form the acceptance criteria for declaring your project "done"? Don't shock me by saying that you don't put tests in your requirements specification for each requirement "atom"!!
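In the same spirit, here is what a test attached to a single requirement "atom" might look like -- the requirement text (R-17) and the function are invented for illustration, and amounts are in integer cents to keep the arithmetic exact:

    #include <cassert>

    // Requirement atom R-17 (hypothetical): "Orders totaling $100.00 or
    // more receive a 10% discount; smaller orders receive no discount."
    long discountedTotalCents(long totalCents)
    {
        return (totalCents >= 10000) ? totalCents - totalCents / 10
                                     : totalCents;
    }

    // The acceptance test demonstrates *exactly* what R-17 promises,
    // boundary case included -- passing it is part of declaring R-17 "done".
    int main()
    {
        assert(discountedTotalCents(9999)  == 9999);   // $99.99: no discount
        assert(discountedTotalCents(10000) == 9000);   // $100.00: 10% off
        assert(discountedTotalCents(20000) == 18000);  // $200.00: 10% off
        return 0;
    }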
And beyond the testbed tests and the requirement tests, there are also stress tests generally created by sadistic quality assurance staff. And, of course, there are your friendly alpha test customers who kindly determine the mean time to failure of either your design or your imperfect implementation.
So it turns out you and I have been testing software designs all along, even as we've thought of ourselves as programmers. The future of software engineering lies in testing and revising designs. Code construction turns out to be a myth.
Happy design testing!
P.S. If you have any questions about how to turn your process into a well-controlled software design testing environment, email me at jkeklak@buildingblock.com.