John Keklak's Coaching Comments

Wednesday, October 04, 2006

So you think you're a programmer...

Fellow programmers (or at least we think we are),

I've always been bothered by the term "code construction" because we really don't spend that much time constructing code (especially those of us predisposed to getting a copy of some working code and making appropriate modifications :-) ).

On top of that, I've been bothered by the notion of "software design". There has been a lot of talk about software design for a long time, but it rarely actually gets done. Or it's done with great fanfare at the beginning of a project, but then quickly gets left behind when the "code construction" begins (or whatever you want to call that activity during which relatively small code changes and additions are made all over the source code).

C++ and object-oriented languages were supposed to allow the programmer to express the design in code, but we know how that idea turned out.

An article I read recently by one Jack W. Reeves (http://www.bleading-edge.com/Publications/C++Journal/Cpjour2.htm) suddenly brought the whole picture into much better focus. Jack -- whose mantra is "The code is the design" -- didn't get it quite right, but, for me, he is a giant on whose shoulders I stood for a glimpse of the possibilities for the future of software engineering.

It turns out software engineering is a most strange engineering discipline. Huge amounts of time, effort and money go into the process, with frequent complaints about how badly the job is done. What makes software engineering particularly strange is that almost no effort goes into producing the end product. Certainly I've lost my marbles to say this, no?

To make a long story longer, consider other engineering disciplines. For example, consider civil engineering, as Jack does in his article. To build a bridge (probably a reasonable analogy to programming, because each bridge project is usually one-of-a-kind, and there are usually a lot of unknowns going in, very much like a software project), how do engineers design the bridge? How do they know their design will work? How does the bridge get constructed?

Now ask the same questions about software: How is software designed? How do you know it will work? How does the software get constructed?

Let's answer the bridge questions first.

To design the bridge, engineers make a lot of drawings. Some of the drawings are overviews; others show structural details, even how the structural elements will be attached to one another. When the drawings are done, the engineers have touched upon everything the builders will have questions about.

To find out if the bridge will work, the engineers do a lot of simulations, feasibility studies, and most likely build some scale models that they even probably test in a wind tunnel. The data from the simulations, studies, and scale model tests give definitive answers to whether the bridge will work or not. Once the bridge is under construction, there is little doubt that it will work. If the data from the simulations, studies, and scale model tests show the bridge will not work, the process goes back to the drawing board for appropriate changes.

To construct the bridge, the builders follow the plans. There is no deviation from the plan. There is almost never any design validation performed during the construction process (although a certain tunnel in Boston could have used more such validation when it came to bolts secured in damp concrete with epoxy glue...).

To summarize, the design is a concept that is documented in writing. The design is proven by actual scientific activities which validate the design, but don't produce the actual bridge. The actual bridge is created by piecing together structural elements according to the design.

Now the answers for the software questions:

Let's assume the software engineers actually do create a design. Also, let's assume that the software engineers write down the design in some form (system diagrams, object specification, perhaps UML, etc.). The design, in this sense, is not very different from the bridge design, especially if the engineers have touched on all relevant issues and have written down their ideas. The software design effectively specifies, to appropriate detail, what the software will consist of, and how it is intended to work.

How do the engineers prove their design will work? It seems the only way to prove the design is to actually write the code! So, all this time, you thought you were programming, but in reality, you've been testing your design -- whether you had worked it out in detail and written it down, or whether you had some vague notion in your head that guided your code changes (clearly most programmers operate by the latter method). If the code proves your design has a problem, you change the design, and then make appropriate code changes to prove that the new design works!

So how does the software get built? This is the really bizarre part -- the software is a by-product of the design testing! There is actually no software construction!

So Jack Reeves has forged a path in the correct direction. But sorry, Jack, the code is not the design. The design is the design. The code is a by-product of testing the design.

So how can we make use of this epiphany? Well, just change how you see yourself -- you are no longer a programmer slogging through incomprehensible source code -- instead you are a test engineer proving or disproving a proposed design.

Clearly to test a design, you need to have a design. If the only design in existence is in your head, and -- in particular -- if you are working with other programmers, you really should write it down. Make some system diagrams, write down the key objects, what they contain, what they do, detail the key interactions -- all good information to make sure everyone is on the same page. Once you have written it all down and shared it with your colleagues, go back to testing the design.

What happens when you run into a bug? It depends on the type of bug. If it turns out your code wasn't written to faithfully reflect the design -- for example, you forgot to initialize a variable before using it -- no problem, just fix the code.

But what happens if your proof shows there is a design problem? For example, you find that you overlooked a particular situation your software will occasionally encounter, and your design makes no provisions for this situation. Of course, with some sort of quick patch, you might be able to gloss over this shortcoming in the design, but why not go back to your design for a minute and ponder what you overlooked, and perhaps why? More importantly, is there a clean way you can change the design to take this situation into account? Usually the answer is 'yes'.

Programmers (aka design testers) in startup companies working at breakneck speed to be first to market might find it a waste of time to go back and revise the design. What makes more sense than to just patch the code and move on? If you are a startup programmer working at breakneck speed trying to be the first to market, by all means, forget the design, but do remember it when you finally release your product and are working 18/7 fighting all sorts of fires which fall into the category of "push here, pop there", among others.

Needing to have a design might be really obvious here, but something which is not as obvious -- until you look really closely at the expression "design TESTing" -- is that you also need some sort of tests. Well, what sort of tests might these be?

OK, to be fair, I don't think I've ever encountered a software organization that didn't have some sort of smoketests, regression tests, and the like. But I rarely encounter an organization that has enough tests, and that has tests everywhere tests are needed to validate design changes. In short, you need an army of tests (not testers) to be able to scientifically demonstrate your design is sound.

Among my favorite tests are those I run in testbed programs, in which I develop objects for integration into larger applications. I virtually never test my object designs in the full application. I virtually always change my object designs and test them in a testbed program. Many of my clients really like the testbeds because they make it a lot easier to see improvements (and regressions :-( ) caused by my latest design changes.
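To give a flavor of what I mean by a testbed, here is a minimal sketch. The 'PathPlanner' object and its cases are entirely made up for this post, but the shape is typical: the object is exercised directly, with no application code in the way, and each run produces output that can be compared with the previous run.

    #include <cstdio>

    // Hypothetical object under development (not from any client project).
    class PathPlanner
    {
    public:
        // Returns the number of grid steps between two cells.
        int Plan(int fromX, int fromY, int toX, int toY)
        {
            int dx = (toX > fromX) ? toX - fromX : fromX - toX;
            int dy = (toY > fromY) ? toY - fromY : fromY - toY;
            return dx + dy;
        }
    };

    int main()
    {
        PathPlanner planner;

        // Each case prints one line; the output of a run can be saved and
        // compared with the previous run to spot improvements and regressions.
        printf("case 1: %d\n", planner.Plan(0, 0, 3, 4));
        printf("case 2: %d\n", planner.Plan(2, 2, 2, 2));
        printf("case 3: %d\n", planner.Plan(-1, 5, 4, 0));

        return 0;
    }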

But testbed tests aren't enough. You also need to prove the soundness of the design and the implementation of the integrated system. Among the tests useful for this purpose are the tests you included in your requirements specification -- remember? the tests which demonstrate *exactly* what each feature is supposed to do? you know, the tests which form the acceptance criteria for declaring your project "done"? Don't shock me and say that you don't put tests in your requirements specification for each requirement "atom"!!

And beyond the testbed tests and the requirement tests, there are also stress tests generally created by sadistic quality assurance staff. And, of course, there are your friendly alpha test customers who kindly determine the mean time to failure of either your design or your imperfect implementation.

So it turns out you and I have been testing software designs all along, even as we've thought of ourselves as programmers. The future of software engineering lies in testing and revising designs. Code construction turns out to be a myth.


Happy design testing!


P.S. If you have any questions about how to turn your process into a well-controlled software design testing environment, email me at jkeklak@buildingblock.com.

Friday, November 18, 2005

Some questions to ask your developers

Managers: Here are some questions you may want to ask your developers:

(a) How many test cases have you created in the course of your current project?

(b) Can you run these test cases each time you make a significant change?

(c) Do you have enough test cases to be pretty certain you will catch most things your next change will break?

(d) Can you run all these test cases in less than 30 seconds? Can you also get a comparison of this run to the previous run in less than 30 seconds?

(e) Can you run these test cases independently of other programmers' changes?


Sadly, most programmers I know won't give overwhelmingly positive responses to these questions. Most indicate they make their code changes while nearly "flying blind".

Curiously, the practice of having test suites as a development tool has been my fail-safe safety net. It has allowed me to successfully juggle three or four significant software development projects at a time. For one project, I have a test suite of over 500 test cases that I can run in 10 seconds, and compare with any other run in another 10 seconds.
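In case you're wondering what such a harness looks like, nothing fancy is required. Here is a bare-bones sketch (all names invented for the illustration): each test case writes a line to a results file, and comparing two runs is nothing more than a file diff.

    #include <cstdio>

    typedef void (*TestCase)(FILE *out);

    // Trivial stand-in cases; a real suite would exercise real objects.
    static void TestCaseA(FILE *out) { fprintf(out, "case A: 7\n"); }
    static void TestCaseB(FILE *out) { fprintf(out, "case B: ok\n"); }
    static void TestCaseC(FILE *out) { fprintf(out, "case C: 0.25\n"); }

    int main()
    {
        static TestCase cases[] = { TestCaseA, TestCaseB, TestCaseC };

        FILE *out = fopen("results_current.txt", "w");
        if (out == 0)
        {
            return 1;
        }

        for (unsigned i = 0; i < sizeof(cases) / sizeof(cases[0]); ++i)
        {
            cases[i](out);
        }
        fclose(out);

        // Comparing this run with a previous one is just:
        //   diff results_previous.txt results_current.txt
        return 0;
    }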

It's not really the fault of programmers that they don't build test suites for their projects. Most companies don't consider building project test suites to be an important aspect of software development. I've even encountered managers who condescendingly sneer at such a waste of time ("It'll all come out in integration testing," they crow).

Curiously, these managers always seem to be in crisis mode, rushing from one unexpected fire to the next.

Maybe they should be asking their programmers certain questions.

Thursday, May 12, 2005

Should our methods be more agile?

Last night at the Agile Bazaar in Cambridge, down the street from M.I.T., I had a chance to listen both to devoted advocates of "agile methodologies" and to software developers seeking some relief from the arduous path of their profession (admittedly, I also took the opportunity to subject said same people to many of my opinions).

Some conclusions:

(a) Agile methodologies are probably as good as things will get as long as the human mind is the primary tool for creating software. My reason: the human mind is quite limited when it comes to activities such as software development. For instance:

+ There are only so many details a human can imagine and remember.

+ Human minds prefer tangible, physical things over imagined ones.

+ For most humans, the future is a fairly abstract concept, made vague largely by uncertainty and a general unwillingness to think really hard.

+ Humans like to be given information, and when they are not given information, they tend to make assumptions.

Implicitly, the agile approach advocates working with these human shortcomings, rather than against them. Short iterations produce concrete software to examine and build on. Short-term target dates can be envisioned with only modest effort. Planning for a short iteration is worthwhile because everyone can see it is not wasted activity. Frequent feedback validates direction and provides information. Goodwill is fostered because developers feel management understands their situation. This is all good.

(b) For as long as we've been doing projects, we've been using agile methodologies, but our culture tends to be too negative to see this. All organizations periodically correct the course of their projects. The problem is people usually view course correction negatively (probably because of the attendant blaming), instead of embracing it. Wouldn't it be a lot more productive if everyone involved in a project accepted, nay, advocated, nay, demanded -- before the start of the project -- there be some number of course corrections during the project?

(c) Agile methodologies need to accept outside constraints, rather than try to re-educate the parties who impose them. For example, what do you do about certain marketing departments, which don't care about how the software is created, but only that it be done by some date months away, and that it meet all expectations? In more than 25 years of software development, I have found it nearly impossible to educate uninterested parties. All the pontificating about scrums (an ugly word; can we change it?), iterations, extreme programming (another bad term), feature driven development, test driven development, and all that is agile will do little or nothing. What works is repeatedly and consistently exceeding the expectations of such parties. What also works is to actively seek buy-in sufficiently high in an organization. Agile proponents need to teach how to work with difficult stakeholders along with the mechanics of how to carry out projects.

(d) "Your software development process is what materializes when the heat is on" (a comment by an attendee) -- This is the crucible in which we need to find a less arduous way to create software. Do agile methodologies stick when the heat is on? What does stick?

(e) It's not clear that a lot of little iterations get you efficiently to your long-term goal. It seems some longer term planning and navigation needs to be blended with short development cycles to create sort of a "scrum fractal".

For more information about agile methodologies, visit http://www.agilealliance.org.

Friday, May 06, 2005

A good sign...

One of the best indications of the quality of a programmer is their coding practice. Some of the best programmers I've known produce code almost mechanically in a consistent format which is very easy to read.

Among their practices:

Matching curly brackets in the same column. Of course an opening curly bracket on its own line generates more "white space" (more on this later). However, it is overwhelmingly beneficial to be able to visually pick out matching curly brackets by simply scanning a column, instead of hunting for the opening bracket in some random column at the end of an 'if' or 'for' statement. What's more, these outstanding programmers -- when you watch them type -- type the curly brackets together, so they don't waste time later resolving "end-of-file" compile errors. They do the same with parentheses.

White space. I learned from an ad copy guru that the human eye and brain like white space. Dense ads usually repel the eye. The same goes for code, and these outstanding programmers know this. They put in ample white space that makes their code comfortable to look at, and easy to read and understand.

Also, the white space is not arbitrary -- it serves a purpose. For example, lines of code for, say, initializing a set of related working variables are like a paragraph, and should be written as a block of code with a blank line before and after. This formatting conveys a meaning that makes it easier for the next programmer to understand what is going on.

Another place where white space is desirable: after commas, and before and after operators. These spaces are akin to spaces between words. Itcertainlyisharderforyoutoreadthiswithoutspaces,isitnot?

And yes, an opening curly bracket should be on its own line. Not only is it easier to match with its mate, the white space is also naturally appealing to the eye.
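To make this concrete, here is a small made-up fragment written the way these outstanding programmers write: matching curly brackets in the same column, spaces after commas and around operators, and blank lines setting off "paragraphs" of code.

    #include <cstdio>

    static void ClearCell(int row, int column)
    {
        printf("clearing cell (%d, %d)\n", row, column);
    }

    static void InitializeBoard(int width, int height)
    {
        // Working variables for the traversal -- one block, like a paragraph.
        int row = 0;
        int column = 0;
        int cellCount = width * height;

        for (row = 0; row < height; ++row)
        {
            for (column = 0; column < width; ++column)
            {
                ClearCell(row, column);
            }
        }

        printf("initialized %d cells\n", cellCount);
    }

    int main()
    {
        InitializeBoard(3, 2);
        return 0;
    }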

Methodical coding. This practice is closely linked to a disciplined process for designing software. These outstanding programmers ask and answer the same series of questions before writing code. "What objects are involved?" "What does each of these objects know about?" "What do they do?" "What information do they share?"

This process is reflected in the code construction. First, header code is written to define objects (of course, when an object class is coded, the opening and closing curly brackets are typed together). Then the data members are added. Then the access methods. Then the implementation of the methods. Aside from top-down pseudo-code during design, code is written and tested bottom-up. Magically, when the coding reaches the top, the vast majority of it works, and is very comprehensible.
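As an illustration of that order -- using a made-up 'Shot' class, not anyone's production code -- the result looks something like this: the class skeleton first (curly brackets typed as a pair), then the data members, then the access methods, then the implementations.

    // The class skeleton, data members and access methods, written first.
    class Shot
    {
    public:
        Shot(int x, int y);

        int X() const;
        int Y() const;
        bool IsHit() const;
        void MarkHit();

    private:
        int  m_x;
        int  m_y;
        bool m_isHit;
    };

    // The implementations, written next.
    Shot::Shot(int x, int y)
      : m_x(x),
        m_y(y),
        m_isHit(false)
    {
    }

    int Shot::X() const
    {
        return m_x;
    }

    int Shot::Y() const
    {
        return m_y;
    }

    bool Shot::IsHit() const
    {
        return m_isHit;
    }

    void Shot::MarkHit()
    {
        m_isHit = true;
    }

    // A testbed would exercise the class; a trivial check stands in here.
    int main()
    {
        Shot shot(3, 4);
        shot.MarkHit();
        return shot.IsHit() ? 0 : 1;
    }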

Refactoring and reformatting. Most legacy code is messy. Outstanding programmers don't just leave it that way -- they leave it in a readable format for the next programmer (I'm not talking about religious wars about style here, but rather about making really messy code look professional). Outstanding programmers will tell you that messy code wastes time, and the small time investment it takes to clean it up more than pays for itself.

A more extreme measure in this direction is refactoring -- actually changing the code, often to hide irrelevant detail. Outstanding programmers know that the human mind can hold only so much detail at once, and the cost of code maintenance is vastly reduced by allowing only a certain amount of detail to be in scope. Thus, rewriting code that does five things at once so instead it takes five independent passes -- in the end -- saves time, effort and money. So does replacing 27 arguments with a few "parameter" or "context" objects. So does eliminating global variables.
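Here is a tiny sketch of the "parameter object" idea, with all the names invented for the illustration. Instead of a rendering function that takes a long list of loose arguments, the related arguments are gathered into a couple of small objects:

    #include <cstdio>

    // Before (imagine this, and worse):
    //   void RenderShot(int x, int y, int boardWidth, int boardHeight,
    //                   int red, int green, int blue,
    //                   bool isHit, bool isSinking /* ...and so on... */);

    // After: the related arguments travel together.
    struct RenderContext
    {
        int boardWidth;
        int boardHeight;
    };

    struct ShotState
    {
        int  x;
        int  y;
        bool isHit;
    };

    static void RenderShot(const RenderContext &context, const ShotState &shot)
    {
        printf("board %dx%d, shot at (%d, %d): %s\n",
               context.boardWidth, context.boardHeight,
               shot.x, shot.y, shot.isHit ? "hit" : "miss");
    }

    int main()
    {
        RenderContext context = { 10, 10 };
        ShotState shot = { 3, 7, true };

        RenderShot(context, shot);
        return 0;
    }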

So if you're thinking of hiring a programmer, an excellent indication of the person's skills is the quality of the source code they produce. Excellent code, excellent programmer.

Wednesday, November 03, 2004

History lessons...

A huge amount of time, money and brainstrain is spent on trying to figure out existing code. It is my experience that far more is spent on this activity than on actually writing code or even fixing bugs. I'd like to share with you some experiences and a suggestion that will significantly reduce this expenditure.

I'll start with the suggestion: Make sure software developers learn the history of the code they are working on. Once they do this, they can move much more quickly. Let me explain how I came to this insight through a few of my experiences.

In several situations, I've found myself surrounded by volumes of code and a list of enhancements to make and bugs to fix. Understanding what the client wanted was the easy part. The hard part was becoming fluent in the code so I could make the appropriate changes.

In one situation, it was just me and the code -- the programmers who had written the code were long gone. To make things worse, it was pretty badly written -- Windows code written by developers fairly new to Windows.

I managed to add some instrumentation code that revealed enough of what was going on so I could fix some of the worst bugs, but I never got to the point where I really felt fluent with the code. The main reason was that there were vestiges of things which didn't seem to belong, but nonetheless there they were. I could only theorize about these vestiges of code -- did they have something to do with logic which had long since been changed or removed? Were they the beginnings of some project that had been abandoned? Were they still relevant? Since I wasn't being paid to do archaeology, I never got around to figuring out the reasons for these vestiges, but -- in the back of my mind -- they always bothered me. I always took these vestiges into account when modifying code, but I always had the feeling this was just a ritual.

For example, many objects had a member named 'm_revision', which seemed to be incremented whenever a change occurred. Clearly, the intent of this member was to give each change a revision number. The problem was that I wasn't entirely sure that these revision numbers were really used anywhere -- there was logic that depended on them, but it was not clear this logic was ever executed. I decided to dutifully make sure 'm_revision' was incremented properly just in case there were situations where this logic was executed.

Now imagine if I knew that 'm_revision' wasn't really used anywhere. The code for incrementing it and using it, then, was unnecessary. One option: I could cut corners and skip making sure my changes incremented 'm_revision' properly. However, this approach would probably make a bigger mess than continuing to make sure it was incremented. A better approach would be to remove all code which mentioned 'm_revision', which I could do with confidence if I could talk to a developer who could confirm that this was OK to do. As I mentioned earlier, the developers were long gone, so I had no practical choice but to operate on the assumption that 'm_revision' was necessary.

Some time later, I found myself in the midst of another client's body of source code, somewhat better written, but massive -- literally thousands of classes and millions of lines of code. Once again, my mission was clear, but the source code, to a great extent, was incomprehensible. I spent much more time than I would ever care to admit trying to get fluent with the code.

A key difference this time was I still had access to many of the programmers who had written this code. While many of them found my questions annoying, a few developers found a few hours for me to lead them through a long series of questions. Some of the discussions were merely about the meanings and purposes of certain classes. However, the most valuable discussions revealed how the developers came to create the classes that they did -- e.g. "First we defined these classes, but then we encountered certain problems, so we solved these problems by introducing these other classes", etc. Knowing how the code had evolved gave me a quick fluency which simply memorizing classes and purposes could never do. The explanations for why classes were created served as landmarks in the code, quickly making it familiar terrain. Although it has been a number of years since I interviewed these developers, my fluency with these classes remains.

I now think about how valuable it would have been to interview the developers who created 'm_revision'. With the history of 'm_revision' and of the entire body of code, I could have developed a confident fluency, and I could have done much more for the client.

The lesson? Make it part of your software development culture to pass on the history of the code. History gives you fluency and landmarks that make code feel familiar. Familiarity is what allows software developers to make code changes confidently and quickly.

The best way to pass on the history of the code is to write it down -- in a Word file, in the source code, on a company intranet. And write it down before the original developers can no longer be located. Regularly add stories and explanations from each development cycle. Who should do the writing? All developers, perhaps; a developer who writes English well, much better choice.

Finally, the beauty of this documentation is you don't have to update it -- the code may change, but its history does not.

Saturday, October 16, 2004

Coaching software developers

All this talk about good software development practices -- keeping an up-to-date project/task plan, sharing test exercises with QA, interviewing developers at the end of their projects -- is all well and good, but why don't these practices naturally take hold in most software development environments? The answer is simple -- good practices take hold only with proper training and coaching.

Coaching?

Consider how athletes perform. Almost without exception, even the most naturally talented athletes don't succeed without coaching. Coaches are necessary to observe techniques, to reinforce current skills and to teach new ones.

Software developers are mental athletes. Without training and coaching, software developers rely on raw talent, and perform nowhere near their potential. Training introduces software developers to skills and techniques beyond the amateur level. Coaching reinforces these skills and techniques, and expands them further.

What's involved in coaching software developers? For starters, training. There are a lot of proven techniques -- planning/task analysis, refactoring, pair programming, agile design, design patterns -- but software developers generally won't find out about these techniques unless they attend training courses, workshops and conferences. A convenient way to expose your software developers to training is to bring it into your company, and make courses part of the regular culture.

Training is not enough, however. It needs to be reinforced with regular coaching -- observing how software developers are putting techniques and skills into practice, reinforcing fading skills, redirecting unproductive techniques. When appropriate, coaches can introduce new skills.

Coaching software developers is a tricky business. The personalities of typical developers do not lend themselves well to having some "expert" come in to "correct" them. A successful coach needs to be welcomed by the software developers as an ally. For this reason, a coach needs to be a software developer himself -- a very good one at that -- and has to be able to build trust with software developers.

The place to start building trust is in training. First, software developers must request training, and not be required to take it. Anyone who is forced to take training will have a distrustful attitude from the start.

Next, the instructor needs to build trust so the conversation about techniques and skills can be taken successfully from the classroom to the software developer's office. A technique I've used to build such trust is simply to spend some of the class time talking with software developers about the software they are writing. Developers find that I'm just like they are, that I've done the things they are doing -- we end up finding we're in sort of a fellowship.

Once the conversation has moved from the classroom to the office, it is important to keep the trust momentum going. Simply looking over what the developer is doing and assigning a grade is a sure formula for failure. An approach I find works for me is to drop by a developer's office and ask about how things are going with some skill or technique we talked about in class. This allows the developer to feel that his thoughts count, a key ingredient for maintaining and building trust. This approach, at the same time, allows me to determine how well the developer has learned the technique and whether it is taking hold.

It is often beneficial to recognize developers who have mastered certain skills and techniques, since this motivates other developers to seek training and coaching.

Thus the key to getting proven software development techniques to take hold in your development culture is to train and coach. Without coaching, chances are very good that learned skills will fade away, and your development staff will fall back to the amateur level. With coaching in good software development practices, your development staff has the potential to become formidable to your competition.

Tuesday, October 12, 2004

Project planning illustrated

Today I'll go through a simple project planning exercise to show what I mean when I talk about generating a task list and test exercises.

Although it generally isn't prudent to blindly apply any particular formula to software development, it seems pretty safe here -- I'm largely suggesting that developers think thoroughly before plunging ahead.

The project I'll use for this illustration is my oft-mentioned 'Battleship' game.

Imagine you, as a developer, just received a spec for the 'Battleship' game from marketing. It seems simple enough -- your game will run on a particular type of cell phone operating system, and will support standard features like graphically displaying your ships, hits on your ships, and your shots. When your ships are destroyed, you need to show them sinking in some graphically dramatic fashion. Additionally, your cell phone needs to tell you the game is over when either you or your opponent has sunk all of the other's ships.

The end goal is quite clear (at least it seems that way initially). But what series of tasks will take you from no code to completed code? Let's try to lay out a task list.

Also, let's take a true unit development approach. This means we'll create certain components first, then put them together to create the final program.

What might the logical pieces be?

Let's say there are several main components: (1) the core logic of the game, (2) the graphical display, (3) the user interface and (4) the communications module. Each component knows nothing about the others. Components communicate via a well-defined interface. The components will be assembled in a relatively thin application program, although during unit development, each component will be hosted in its own testbed program.
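To make "well-defined interface" a bit more concrete, here is one possible sketch -- my own invention for this post, not something dictated by the spec -- of how the game logic might see the communications module. In a testbed, a simple loopback implementation can stand in for the real cell phone communications:

    #include <cstdio>

    // The game logic talks to the communications module only through this
    // abstract interface and knows nothing about how messages actually
    // travel between phones. (Hypothetical names throughout.)
    class IGameCommunications
    {
    public:
        virtual ~IGameCommunications() {}

        // Send the coordinates of the local player's shot to the opponent.
        virtual void SendShot(int x, int y) = 0;

        // Returns true and fills in the coordinates if an opponent's shot
        // has arrived since the last call.
        virtual bool ReceiveShot(int &x, int &y) = 0;
    };

    // A loopback stand-in for the testbed: whatever is sent comes back.
    class LoopbackCommunications : public IGameCommunications
    {
    public:
        LoopbackCommunications() : m_pending(false), m_x(0), m_y(0) {}

        virtual void SendShot(int x, int y)
        {
            m_pending = true;
            m_x = x;
            m_y = y;
        }

        virtual bool ReceiveShot(int &x, int &y)
        {
            if (!m_pending)
            {
                return false;
            }
            x = m_x;
            y = m_y;
            m_pending = false;
            return true;
        }

    private:
        bool m_pending;
        int  m_x;
        int  m_y;
    };

    int main()
    {
        LoopbackCommunications comms;
        comms.SendShot(4, 7);

        int x = 0;
        int y = 0;
        if (comms.ReceiveShot(x, y))
        {
            printf("shot received at (%d, %d)\n", x, y);
        }
        return 0;
    }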

For the sake of brevity, I won't go through the task analysis for all of the components. Instead I'll focus only on the communications module.

What tasks are needed to create the communications module? There is no one correct place to start, so I'll suggest we first create a communications testbed program to get some familiarity with the cell phone programming environment. Then we can turn our attention to designing and implementing the communication protocols and mechanisms. To do the latter, we need to make a list of the types of messages the communications module must handle. Thus the first cut at the task list is:

(1) Create testbed program
(2) Make list of message types
(3) Design protocol to handle all message types
(4) Define communications objects
(5) Implement classes for communications objects
(6) Implement methods for communications objects

Now let's see if we can assign estimated times and, if we can, whether each task takes a half-day or less.

Immediately there's a problem. There's no way to estimate how long it will take to create the initial testbed program since we're not familiar with the cell phone development environment. We need to add a task to get up to speed on the development tools. Let's budget a half day for this.

Once we're familiar with the editor, compiler and debugger in the development environment, we can create our initial testbed program. This work also includes coming up with some naming conventions, directory structures, and other preparation. A half-day may be generous, but that's what we'll assume for this task.

Now for the communications protocol. We'll need to make a list of game scenarios to make a first cut at the messages we expect will be sent and received by the user and his opponent. Let's say we can make the list of messages in half a day, including informal discussions with the product manager about game scenarios.

Now we need to decide on the actual mechanisms which will send and receive messages. These mechanisms will consist of an assembly of objects, including message objects. Assume two or three iterations of this design take one day, including dropping by several colleagues' offices to get their opinions. Let's break this into two tasks: initial design (one half day) and design refinement (one half day).

It occurs to us that before we can properly implement the workings of the communications mechanisms, we need to get up to speed on the cell phone communication API. The best way to do this is to write some practice code in which we get two cell phones to talk to each other. For this programming, we don't worry about good design -- the purpose is just to get a working knowledge of the cell phone API. The time allocated for this exploratory task: one half day.

Once we understand how the cell phone communications API works, and once we've decided on an object design, we need to write the code. For each of the objects and operations, we will define one or more tasks. For example, for the "ProcessMessage" operation, we need three tasks: "Write code to handle the arrival of a valid message", "Write code to handle the arrival of a corrupt message", and "Write code to handle the arrival of an error message".
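As a rough sketch of what those three tasks might produce (the message structure and names here are invented for illustration, not taken from any real cell phone API), the "ProcessMessage" operation could end up looking something like this:

    #include <cstdio>
    #include <string>

    enum MessageStatus
    {
        MESSAGE_VALID,
        MESSAGE_CORRUPT,
        MESSAGE_ERROR
    };

    struct Message
    {
        MessageStatus status;
        std::string   payload;
    };

    static void ProcessMessage(const Message &message)
    {
        switch (message.status)
        {
        case MESSAGE_VALID:
            // Task 1: handle the arrival of a valid message.
            printf("dispatching: %s\n", message.payload.c_str());
            break;

        case MESSAGE_CORRUPT:
            // Task 2: handle the arrival of a corrupt message.
            printf("discarding corrupt message, requesting resend\n");
            break;

        case MESSAGE_ERROR:
            // Task 3: handle the arrival of an error message.
            printf("logging error message: %s\n", message.payload.c_str());
            break;
        }
    }

    int main()
    {
        Message valid   = { MESSAGE_VALID,   "shot 4,7" };
        Message corrupt = { MESSAGE_CORRUPT, "" };
        Message error   = { MESSAGE_ERROR,   "opponent disconnected" };

        ProcessMessage(valid);
        ProcessMessage(corrupt);
        ProcessMessage(error);
        return 0;
    }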

Once you break things down, you'll usually find that your programming time estimate will be longer than you initially imagined, but your revised estimate will be much more realistic. Also, the list of tasks will give you and your manager a quantitative way to judge progress during programming.

As you create individual tasks, you also need to create one or more test cases for each programming task. For example, for the "Write code to handle the arrival of a valid message" task, you need to provide test cases to process messages. One such test might be: "Use the 'Send Valid Message' item on the testbed program's 'Simulate' menu to test the proper processing of valid messages".

Defining this test spawns a new project task, i.e., "Add a 'Send Valid Message' item to the testbed program's 'Simulate' menu to send the receiving object a valid message". This work requires an entry on your task list and a time allocation.

During the course of the project, starting with this task analysis, you will find new tasks are spawned constantly. Promptly adding such tasks to the task list gives both you and your manager a very realistic and up-to-date picture of the state of the project. It also makes it much less likely you will forget to do the new work.

In the course of writing each section of code, we assume you will try out the appropriate test cases, so we won't have a formal "Test and debug" task. You will consider your project ready for QA testing when you've completed all of your programming tasks and all of your test cases work.

During this first pass at planning this project, nothing looked particularly risky. It doesn't appear there will be any tricky coordination between departments or joint venture partners, or reliance on technology which hasn't yet been fully researched and developed. However, this detailed analysis will flush out such issues, in particular when the product must rely on algorithms yet to be developed. Algorithm research generally cannot be diced and analyzed in this fashion. Often overly optimistic hopes about algorithm research end up sinking projects single-handedly.

Some things to note:

First, we did all of this analysis before actually writing any code. The point is to stretch your imagination as much as possible to create a realistic up-front picture of how the project will go. The more you do this type of analysis, the easier it gets.

Second, this exercise assumes you won't have much interaction with other people. I kept it that way for simplicity, but we could create interdependency by assuming the other modules will be done by other developers. Having other developers involved will require you to add tasks and time for communicating with them.

Third, this task list is just your starting point. As I showed with the "Write code to handle the arrival of a valid message" task, you will find that you need to add tasks you didn't think of initially. You will also have to break tasks into pieces when it is clear it is necessary. In order for the task list to be a useful tool, you, as a developer, need to be disciplined and update the task list constantly.

Fourth, this task list forms a quantitative basis for managers to see where a project stands at any point. As a manager, all you need to do is look at which items are marked "Done!", which items are marked "Not done", and which items have red flags (denoting problematic issues). A web-based task list makes it very easy for developers to share this information with managers.


For those who think this plan portrays programming at a leisurely pace, I submit that this plan closely reflects how things typically turn out in reality. Accelerating the schedule is an exercise in ignoring reality and enticing failure. Realistically taking into account the work to be done is the main reason projects succeed.

Boosting QA effectiveness via developer planning

At the beginning of many software development cycles, good intentions abound about how everything will be done correctly and thoroughly this time around. These good intentions include lip service about "test plans" to be produced by QA.

Much more often than not, test plans don't materialize. Why? Most QA people don't have a really clear sense of what to test until they have actual software in hand. They may spend some time with specifications trying to understand what test exercises would be appropriate. But the sad fact is that this work requires immense effort and imagination, and thus it usually goes undone. In the end, QA proceeds without a map of what to test, and often overlooks significant problems until they are uncovered by accident late in the development cycle.

The enormous effort required to create a test plan is a sign that the task needs to be broken down into several smaller tasks. In this particular case, one of those smaller tasks is obtaining test exercises from the initial project task list created by the developer. This presupposes your developers create an up-front project task list with test exercises.

Developer-QA collaboration is an extremely sensible thing to do. The developer is going to generate test exercises for each of his tasks anyway, so why not share these test exercises with the QA person to provide a sense of the things which need testing? The QA person needs to expand on these tests, but the initial set from the developer constitutes a very significant running start for the QA person.

There are a number of ways a developer can share test exercises with QA. For instance, a developer can share out project directories so QA has access to test files and other data. A much better way to go is to implement a web-based project task list, each entry of which includes a simple test procedure and links to test files, if relevant.

For example, in the 'Battleship' game I've written about elsewhere, imagine one of the developer's tasks is to implement the rendering of a hit on one of your opponent's ships. The test procedure would read something like:

1. Open testfile.dat (a saved game state -- your battleships, your shots, your opponent's battleships and your opponent's shots)

2. Enter the coordinates of a shot which hits one of your opponent's ships.

3. Observe that the shot is rendered as a 'hit'.

The test exercise web page would contain a link to testfile.dat.

A bonus feature on the web page would be a status field which reports whether the exercise works successfully in the current version of the software. Both the developer and the QA person would be able to set the status. If the QA person sets the status to 'DOES NOT WORK', an additional feature could be an email notification to the developer.

The advantages of a web-based approach are many. For instance, everyone involved (developers, QA, management) has instant access to the most current version of this information. Managers can see the status of a project by visiting a summary page. Developers can add or modify tasks and test exercises with minimal effort.

I recommend developer-QA collaboration not only because it sounds good, but because it has worked spectacularly for me in practice. Instead of major adrenalin rushes and heart-in-your-throat panic, projects where I've involved QA from the start with my test exercises have wound down in boring, uneventful, predictable fashion. Early in the project, an overzealous QA person may mark many an exercise with 'DOES NOT WORK'. However as a project progresses, 'DOES NOT WORK' steadily changes to 'WORKS!'

One last point: your company culture probably isn't set up for sharing test exercises yet. The reason? Most developers don't create an initial project task list with test exercises up front. So a prerequisite for using the QA-developer collaboration is to change your software development culture -- to teach your developers to make their best effort to create an up-front project task list complete with test exercises, and to keep it current.