John Keklak's Coaching Comments

Saturday, October 16, 2004

Coaching software developers

All this talk about good software development practices -- keeping an up-to-date project/task plan, sharing test exercises with QA, interviewing developers at the end of their projects -- is all well and good, but why don't these practices naturally take hold in most software development environments? The answer is simple -- good practices take hold only with proper training and coaching.

Coaching?

Consider how athletes perform. Almost without exception, even the most naturally-talented athletes don't succeed without coaching. Coaches are necessary to observe techniques, to reinforce current skills and to teach new ones.

Software developers are mental athletes. Without training and coaching, software developers rely on raw talent, and perform nowhere near their potential. Training introduces software developers to skills and techniques beyond the amateur level. Coaching reinforces these skills and techniques, and expands them further.

What's involved in coaching software developers? For starters, training. There are a lot of proven techniques -- planning/task analysis, refactoring, pair programming, agile design, design patterns -- but software developers generally won't find out about these techniques unless they attend training courses, workshops and conferences. A convenient way to expose your software developers to training is to bring it into your company, and make courses part of the regular culture.

Training is not enough, however. It needs to be reinforced with regular coaching -- observing how software developers are putting techniques and skills into practice, reinforcing fading skills, redirecting unproductive techniques. When appropriate, coaches can introduce new skills.

Coaching software developers is a tricky business. The personalities of typical developers do not lend themselves well to having some "expert" come in to "correct" them. A successful coach needs to be welcomed by the software developers as an ally. For this reason, a coach needs to be a software developer himself -- a very good one at that -- and has to be able to build trust with software developers.

The place to start building trust is in training. First, software developers must request training, and not be required to take it. Anyone who is forced to take training will have a distrustful attitude from the start.

Next, the instructor needs to build trust so the conversation about techniques and skills can be taken successfully from the classroom to the software developer's office. A technique I've used to build such trust is simply to spend some of the class time talking with software developers about the software they are writing. Developers find that I'm just like they are, that I've done the things they are doing -- we end up finding we're in a sort of fellowship.

Once the conversation has moved from the classroom to the office, it is important to keep the trust momentum going. Simply looking over what the developer is doing and assigning a grade is a sure formula for failure. An approach I find works for me is to drop by a developer's office and ask how things are going with some skill or technique we talked about in class. This allows the developer to feel that his thoughts count, a key ingredient for maintaining and building trust. At the same time, this approach allows me to determine how well the developer has learned the technique and whether it is taking hold.

It is often beneficial to recognize developers who have mastered certain skills and techniques, since this motivates other developers to seek training and coaching.

Thus the key to getting proven software development techniques to take hold in your development culture is to train and coach. Without coaching, chances are very good that learned skills will fade away, and your development staff will fall back to the amateur level. With coaching in good software development practices, your development staff has the potential to become formidable to your competition.

Tuesday, October 12, 2004

Project planning illustrated

Today I'll go through a simple project planning exercise to show what I mean when I talk about generating a task list and test exercises.

Although it generally isn't prudent to blindly apply any particular formula to software development, it seems pretty safe here -- I'm largely suggesting that developers think thoroughly before plunging ahead.

The project I'll use for this illustration is my oft-mentioned 'Battleship' game.

Imagine you, as a developer, just received a spec for the 'Battleship' game from marketing. It seems simple enough -- your game will run on a particular type of cell phone operating system, and will support standard features like graphically displaying your ships, hits on your ships, and your shots. When your ships are destroyed, you need to show them sinking in some graphically-dramatic fashion. Additionally, your cell phone needs to tell you the game is over when either you or your opponent has sunk all of the other's ships.

The end goal is quite clear (at least it seems that way initially). But what series of tasks will take you from no code to completed code? Let's try to lay out a task list.

Also, let's take a true unit development approach. This means we'll create certain components first, then put them together to create the final program.

What might the logical pieces be?

Let's say there are several main components: (1) the core logic of the game, (2) the graphical display, (3) the user interface and (4) the communications module. Each component knows nothing about the others. Components communicate via a well-defined interface. The components will be assembled in a relatively thin application program, although during unit development, each component will be hosted in its own testbed program.
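To make the separation concrete, here is a minimal sketch in Java of what those well-defined interfaces might look like. All of the names are my own, invented for illustration -- the spec doesn't dictate any of them.

    // Illustrative sketch only -- interface and method names are hypothetical
    // (in a real project, each public interface would live in its own file).
    // Each component sees the others exclusively through interfaces like
    // these, which is what lets a component live alone in a testbed program.
    public interface GameLogic {
        void applyShot(int x, int y);   // record a shot in the game state
        boolean isGameOver();           // true once an entire fleet is sunk
    }

    public interface Display {
        void renderBoard();             // draw ships, hits and shots
    }

    public interface Communications {
        void send(byte[] message);                  // deliver to the opponent
        void setListener(MessageListener listener); // called back on arrival
    }

    public interface MessageListener {
        void onMessage(byte[] message);
    }

The thin application program would do little more than construct one implementation of each interface and wire the pieces together.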

For the sake of brevity, I won't go through the task analysis for all of the components. Instead I'll focus only on the communications module.

What tasks are needed to create the communications module? There is no one correct place to start, so I'll suggest we first create a communications testbed program to get some familiarity with the cell phone programming environment. Then we can turn our attention to designing and implementing the communication protocols and mechanisms. To do the latter, we need to make a list of the types of messages the communications module must handle. Thus the first cut at the task list is:

(1) Create testbed program
(2) Make list of message types
(3) Design protocol to handle all message types
(4) Define communications objects
(5) Implement classes for communications objects
(6) Implement methods for communications objects

Now let's see whether we can assign estimated times, and whether each task takes a half-day or less.

Immediately there's a problem. There's no way to estimate how long it will take to create the initial testbed program since we're not familiar with the cell phone development environment. We need to add a task to get up to speed on the development tools. Let's budget a half day for this.

Once we're familiar with the editor, compiler and debugger in the development environment, we can create our initial testbed program. This work also includes coming up with some naming conventions, directory structures, and other preparation. A half-day may be generous, but that's what we'll assume for this task.

Now for the communications protocol. We'll need to make a list of game scenarios to make a first cut at the messages we expect will be sent and received by the user and his opponent. Let's say we can make the list of messages in half a day, including informal discussions with the product manager about game scenarios.

Now we need to decide on the actual mechanisms which will send and receive messages. These mechanisms will consist of an assembly of objects, including message objects. Assume two or three iterations of this design take one day, including dropping by several colleagues' offices to get their opinions. Let's break this into two tasks: initial design (one half day) and design refinement (one half day).

It occurs to us that before we can properly implement the workings of the communications mechanisms, we need to get up to speed on the cell phone communication API. The best way to do this is to write some practice code in which we get two cell phones to talk to each other. For this programming, we don't worry about good design -- the purpose is just to get a working knowledge of the cell phone API. The time allocated for this exploratory task: one half day.

Once we understand how the cell phone communications API works, and once we've decided on an object design, we need to write the code. For each of the objects and operations, we will define one or more tasks. For example, for the "ProcessMessage" operation, we need three tasks: "Write code to handle the arrival of a valid message", "Write code to handle the arrival of a corrupt message", and "Write code to handle the arrival of an error message".
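To show what those three tasks might add up to, here is a hedged sketch of the "ProcessMessage" operation in Java. The message layout, the type names and the checksum flag are all assumptions made for illustration; the real details would come out of the protocol design tasks above.

    // Sketch only -- the MessageType values, the checksum flag and the
    // println placeholders stand in for whatever the protocol design yields.
    enum MessageType { SHOT, SHOT_RESULT, ERROR }

    class Message {
        MessageType type;
        byte[] payload;
        boolean checksumValid;   // set by the transport code on receipt
    }

    class MessageProcessor {
        void processMessage(Message msg) {
            if (!msg.checksumValid) {
                // Task: "Write code to handle the arrival of a corrupt message"
                System.out.println("corrupt message -- requesting retransmission");
                return;
            }
            if (msg.type == MessageType.ERROR) {
                // Task: "Write code to handle the arrival of an error message"
                System.out.println("error message -- reporting to the game logic");
                return;
            }
            // Task: "Write code to handle the arrival of a valid message"
            System.out.println("valid message -- dispatching to the game logic");
        }
    }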

Once you break things down, you'll usually find that your programming time estimate will be longer than you initially imagined, but your revised estimate will be much more realistic. Also, the list of tasks will give you and your manager a quantitative way to judge progress during programming.

As you create individual tasks, you also need to create one or more test cases for each programming task. For example, for the "Write code to handle the arrival of a valid message" task, you need to provide test cases that exercise message processing. One such test might be: "Use the 'Send Valid Message' item on the testbed program's 'Simulate' menu to test the proper processing of valid messages".

Defining this test spawns a new project task, i.e., "Add a 'Send Valid Message' item to the testbed program's 'Simulate' menu to send the receiving object a valid message". This work requires an entry on your task list and a time allocation.
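In code, that spawned task might come out looking something like the sketch below, which reuses the hypothetical Message types from the earlier sketch; the actual menu wiring is omitted.

    // Sketch of the spawned task: the handler behind the testbed's
    // 'Simulate' > 'Send Valid Message' menu item. Names are hypothetical.
    class SimulateMenuHandler {
        private final MessageProcessor receiver = new MessageProcessor();

        void onSendValidMessage() {
            Message msg = new Message();
            msg.type = MessageType.SHOT;
            msg.payload = new byte[] { 3, 7 };  // a shot at coordinates (3, 7)
            msg.checksumValid = true;           // simulate an intact message
            receiver.processMessage(msg);       // expect the 'valid' path to run
        }
    }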

During the course of the project, starting with this task analysis, you will find new tasks are spawned constantly. Promptly adding such tasks to the task list gives both you and your manager a very realistic and up-to-date picture of the state of the project. It also makes it much less likely that the new work will be forgotten.

In the course of writing each section of code, we assume you will try out the appropriate test cases, so we won't have a formal "Test and debug" task. You can consider your project ready for QA testing when you've completed all of your programming tasks and all of your test cases work.

During this first pass at planning this project, nothing looked particularly risky. It doesn't appear there will be any tricky coordination between departments or joint venture partners, or reliance on technology which hasn't yet been fully researched and developed. However, this detailed analysis will flush out such issues, in particular when the product must rely on algorithms yet to be developed. Algorithm research generally cannot be diced and analyzed in this fashion. Often overly optimistic hopes about algorithm research end up sinking projects single-handedly.

Some things to note:

First, we did all of this analysis before actually writing any code. The point is to stretch your imagination as much as possible to create a realistic up-front picture of how the project will go. The more you do this type of analysis, the easier it gets.

Second, this exercise assumes you won't have much interaction with other people. I kept it that way for simplicity, but we could create interdependency by assuming the other modules will be done by other developers. Having other developers involved will require you to add tasks and time for communicating with them.

Third, this task list is just your starting point. As I showed with the "Write code to handle the arrival of a valid message" task, you will find that you need to add tasks you didn't think of initially. You will also have to break tasks into smaller pieces when it becomes clear that's necessary. For the task list to be a useful tool, you, as a developer, need the discipline to update it constantly.

Fourth, this task list forms a quantitative basis for managers to see where a project stands at any point. As a manager, all you need to do is look at which items are marked "Done!", which items are marked "Not done", and which items have red flags (denoting problematic issues). A web-based task list makes it very easy for developers to share this information with managers.


For those who think this plan portrays programming at a leisurely pace, I submit that this plan closely reflects how things typically turn out in reality. Accelerating the schedule is an exercise in ignoring reality and inviting failure. Realistically taking into account the work to be done is the main reason projects succeed.

Boosting QA effectiveness via developer planning

At the beginning of many software development cycles, good intentions abound about how everything will be done correctly and thoroughly this time around. These good intentions include lip service about "test plans" to be produced by QA.

Much more often than not, test plans don't materialize. Why? Most QA people don't have a really clear sense of what to test until they have actual software in hand. They may spend some time with specifications trying to understand what test exercises would be appropriate. But the sad fact is that this work requires immense effort and imagination, and thus it usually goes undone. In the end, QA proceeds without a map of what to test, and often overlooks significant problems until they are uncovered by accident late in the development cycle.

The enormous effort required to create a test plan is a sign that the task needs to be broken down into several smaller tasks. In this particular case, the smaller tasks include obtaining test exercises from the initial project task list created by the developer. This presupposes your developers create an up-front project task list with test exercises.

Developer-QA collaboration is an extremely sensible thing to do. The developer is going to generate test exercises for each of his tasks anyway, so why not share these test exercises with the QA person to provide a sense of the things which need testing? The QA person needs to expand on these tests, but the initial set from the developer constitutes a very significant running start for the QA person.

There are a number of ways a developer can share test exercises with QA. For instance, a developer can share out project directories so QA has access to test files and other data. A much better way to go is to implement a web-based project task list, each entry of which includes a simple test procedure and links to test files, if relevant.

For example, in the 'Battleship' game I've written about elsewhere, imagine one of the developer's tasks is to implement the rendering of a hit on one of your opponent's ships. The test procedure would read something like:

1. Open testfile.dat (a saved game state -- your battleships, your shots, your opponent's battleships and your opponent's shots)

2. Enter the coordinates of a shot which hits one of your opponent's ships

3. Observe that the shot is rendered as a 'hit'.

The test exercise web page would contain a link to testfile.dat.

A bonus feature on the web page would be a status field reporting whether the exercise works in the current version of the software. Both the developer and the QA person would be able to set the status. If the QA person sets the status to 'DOES NOT WORK', an additional feature could be an email notification to the developer.
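As a sketch of what one entry in such a task list might store -- the field names and the notification hook are assumptions, not a description of any particular tool:

    // Illustrative server-side model of a single task-list entry.
    class TaskEntry {
        String description;    // e.g. "Handle the arrival of a valid message"
        String testProcedure;  // the steps QA should follow
        String testFileUrl;    // link to testfile.dat, if relevant
        String status = "NOT TESTED";

        // Either the developer or the QA person may set the status.
        void setStatus(String newStatus, String setBy) {
            status = newStatus;
            if ("DOES NOT WORK".equals(newStatus)) {
                notifyDeveloper(setBy);   // e.g. fire off an email
            }
        }

        private void notifyDeveloper(String reporter) {
            System.out.println("email to developer: '" + description
                    + "' marked DOES NOT WORK by " + reporter);
        }
    }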

The advantages of a web-based approach are many. For instance, everyone involved (developers, QA, management) has instant access to the most current version of this information. Managers can see the status of a project by visiting a summary page. Developers can add or modify tasks and test exercises with minimal effort.

I recommend developer-QA collaboration not only because it sounds good, but because it has worked spectacularly for me in practice. Instead of major adrenalin rushes and heart-in-your-throat panic, projects where I've involved QA from the start with my test exercises have wound down in boring, uneventful, predictable fashion. Early in the project, an overzealous QA person may mark many an exercise with 'DOES NOT WORK'. However as a project progresses, 'DOES NOT WORK' steadily changes to 'WORKS!'

One last point: your company culture probably isn't set up for sharing test exercises yet. The reason? Most developers don't create an initial project task list with test exercises up front. So a prerequisite for using the QA-developer collaboration is to change your software development culture -- to teach your developers to make their best effort to create an up-front project task list complete with test exercises, and to keep it current.

Thursday, October 07, 2004

Something that makes a difference that you can do right away...

Most development organizations are like overloaded computers -- their CPU monitors pegged out at 100% most of the time, with an occasional brief drop. This is not exactly an environment where you can expect to make wholesale, immediate changes, so I'd like to talk about something even maxed-out organizations might be able to handle. I'm not saying this slight change won't have to be squeezed in, but it won't take that much squeezing, and the benefit is enormous.

What I'm talking about is interviewing developers at the end of their projects to record their thinking. Note that this is not the same as asking developers to record their thoughts themselves -- this has been proven time and again not to work well. A key ingredient is a technically-knowledgeable and articulate interviewer who digs out the relevant thinking behind a project and records it for the next developer.

For example, imagine your company produces cell phone games, and one of your developers has recently completed the first version of "Battleship". The developer finished the job more or less to the satisfaction of management (the developer didn't use the planning I talk about elsewhere, so the project did not exactly end quietly and smoothly), and is ready to work on his next game.

In the meantime, your company is preparing to release "Battleship" to customers. Once the game is on the market, probably some bugs will be discovered and need to be fixed, and marketing will want to add some features for a new version.

Your software development management has decided to assign this follow-on work to a more junior developer. He is encouraged to approach the original developer with any questions he has.

In reality several things will happen. The original developer will be immersed in his new project, so when the junior developer approaches him, he will generally be cranky. Why? Because the original developer's memory of what he did in the "Battleship" project has faded, and it is sometimes painful to refresh your memory about code, especially when you may have been too clever. Secondly, the original developer, like most typical developers, may have expectations that anyone with a reasonable IQ should be able to figure things out from the source code, so the question-asking is just a lazy short-cut. Third, there might be things in the code the original developer is not particularly proud of.

In any event, the junior developer does not get nearly enough guidance, and resorts to "software archaeology" to reconstruct the original developer's thinking from the source code. At this point, the junior developer may begin to develop migraines, especially if the original developer did not write particularly clear and well-formatted code.

Eventually the junior developer submits some bug fixes and modifications to the code. Regularly, quality assurance finds broken features and new bugs that are the result of the junior programmer's assumptions about the code. If your company is fortunate, quality assurance finds all such major flaws.

This type of scenario occurs regularly in software development organizations -- a code hand-off with a promise for support from the original developer hardly ever works. Human nature almost guarantees a rocky road ahead. Also, there is a forced quality to the process, and anything with a forced quality usually means that the process needs to be broken down into smaller pieces.

The piece that is missing is the interviewing I mention above. The trick is to get the right person to do the interviewing and recording.

First the person has to be a programmer or former programmer (a really good former programmer is ideal). Nothing will annoy a developer more than an interviewer who knows little or nothing about programming. A common background in programming gives the developer a sense of kinship with the interviewer, which builds trust and rapport between the two. This bond allows the interviewer to press with questions that would irritate the developer when asked by someone else. This goes double when the interviewer asks the developer to explain something a second time.

Second, the person has to have a keen sense of what to ask. When I serve as an interviewer, I focus on terms which haven't yet been defined. For example, when I interviewed a developer about a simulator program he had written, he mentioned a php interface layer. I know what php is, and I have a general notion of interface layers, but I didn't know what it meant for this particular program. When I asked the developer to explain this further, our conversation yielded some key information the source code by itself did not reveal plainly.

Third, the person has to be able to articulate the information so the next developer can easily digest and apply it. It seems to me that having a keen sense of what to ask and articulating the answers for the next developer go hand in hand -- the path the interviewer follows is probably the same path the next developer will follow when trying to get his head around the code. So if you can find a person who developers like talking to about their projects, chances are good this person can also record the answers in a useful way for the next programmer.

Of course, capturing a developer's thinking requires some of the developer's time. In an environment which is pegged out to 100% most of the time, you might not see how there is time for interviewing. However, the more you work interviewing into the development process, the easier the going will be down the road. Junior developers won't have to spend senior developers' time asking questions (essentially doing the interviewing themselves), senior developers won't be interrupted nearly as much and will be able to focus on their current projects, and you won't risk having products go out the door with errors based on faulty assumptions about the source code. The bottom line is that squeezing the interviewing into the process makes an overall difference in a relatively short time.

When is the best time to squeeze in the interviewing? At the very end of the project. Ideally, the interviewing should take place after the product has shipped and the developer is not under any pressure to fix "the last bug". At this point, the developer is the most relaxed, and his memory of his thinking behind the code is still fresh.

Where should the interviewer's information go? Ideally in the source code. Tools such as Doxygen make it quite easy to keep a developer's thinking and code side-by-side in the same file. Another approach is to create a wiki, but I'm a bit leery of keeping the thinking in one place and the code in another. Word documents come in a distant third.
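To show the form (the rationale below is invented purely for illustration), an interviewer's notes might sit directly above the code they explain, in a Doxygen-readable comment:

    /**
     * Interview notes, recorded by the interviewer rather than the original
     * developer. The content here is a made-up example of the kind of
     * thinking worth capturing.
     *
     * Why three retries? The original developer found during testing that
     * the carrier occasionally drops messages, and that three retransmission
     * attempts were enough before surfacing an error to the game logic.
     */
    class RetryPolicy {
        static final int MAX_RETRIES = 3;
    }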

No matter where you store the developers' thinking, the point is that it won't simply evaporate with their fading memories. Perhaps your company's CPU meter will begin to dip below 100% a bit more often, all while your company wastes less time and money on software disasters in the field.

Wednesday, October 06, 2004

Plan, plan, plan...

The woes of much of the software development business are rooted in failure to plan. I don't mean a rigid plan that everyone follows blindly off a cliff, but rather a plan which mirrors reality and is updated regularly.

Let me start by relating a story two associates told me about a project which went south. I'm sure everyone involved in software development or management has their own version of this story.

My associates were contracted to develop a relatively complex web application. The client had a fairly good idea of how the web application should appear and how it should behave. But underneath the web application was quite a bit more than just some html and javascript -- the application relied on a mix of .NET, php, C++ DLLs, Flash and a lot of tricks to bring it all together. On the surface, the job was fairly straightforward (or at least it appeared to be); underneath, there were a lot of tasks which were not as obvious.

My two associates and the client quickly signed a contract and embarked on the programming with the most optimistic of expectations. As soon as the programming started, it became clear to my associates how many details needed to be addressed. The client was not updated on these developments, so he expected the application would be delivered as originally contracted. Then the first milestone date arrived.

At this point, my associates were to deliver an application with a certain set of working web pages. However, they delivered no pages, to the consternation of the client, whose confidence in the team plummeted. Why did they deliver an application with no pages? Because they spent the initial time creating portions of the invisible sub-structure, not on the visible presentation of the data.

The project continued to fail to meet expected milestones. Eventually, the client lost confidence entirely, and pulled the plug on the project.

The moral of the story? Quite simply, you've got to set appropriate expectations. What does this have to do with planning? Planning gives you the expected critical path through the project so you can match appropriate deliverables with milestone dates.

This sounds like the rigid up-front plan I advised against up above, but it's not. The plan you need to make is your best up-front attempt to visualize how the whole project will proceed, which is then updated regularly to keep it in synch with reality.

I learned this over the past fifteen years or so of contract software development. My proposal process always included an analysis which reduced the project to a series of tasks, each of which I expected to take between a half-day and a full day. Of course, I didn't share the details of this analysis with prospects; I simply provided a list of milestones and deliverables. However, this task list gave me great assurance that the milestones proposed in contracts were based on sound quantitative reasoning.

Without exception, projects did not proceed as initially envisioned. Usually there were tasks I underestimated or overlooked, or in some cases the client changed his mind about what the software would do or look like. Regularly (often every other day) I updated my list of tasks to reflect what had been done and what needed to be done to complete the project.

When these changes began to materially affect the schedule, I notified the client, who -- almost without exception -- accepted the schedule changes because I could articulate the reasons why the schedule had to change. Invisible sub-structure tasks became visible to the client and were considered part of the deliverables. Expectations moved with reality.

Over time, I implemented a web-based tool which allowed me to quickly dissect a project into tasks and to update the list of tasks with a minimum of effort. In certain situations, I provided the client with access to this task list so he could see exactly what was and was not done at any given point. I'm still working on this tool to provide a summary capability so clients can see, with a single glance, where projects stand.

Learning to plan, and to update plans regularly, requires a lot of discipline and support. I've found that most programmers don't stick with it unless they are regularly coached and supported until the planning process becomes a habit. Convenient tools make the process of forming this habit a lot faster and easier.

When I explained my planning technique to my associates, they laughed and said that their web application project would probably have gone an entirely different way if they had taken my approach. When I took them through the paces of dissecting one of their current projects into a task list, I could see the spark of insight in their eyes which told me they understood this type of planning is essential for the success of any project.


Warning signs...

This entry is addressed to executives and investors of software companies. Why? Because you have the biggest stake in the long-term health of your company. And given how common it is to find dysfunctional software development organizations, chances are that things at your company aren't exactly in the pink.

Some red flags to alert you that things aren't necessarily poised for success in the long run:

(1) Programmers tell you that the code makes their heads hurt, and they don't think the code they are working on has long-term viability.

(2) Releases can't seem to get to FCS on time, or without major bugs popping up at the last minute.

(3) Service packs need to be recalled or patched regularly.

(4) Programmers are expected to put in long hours, in particular on weekends.

The reality of software development, some might say. Not necessarily so.

First, code that makes programmers' heads hurt is a huge liability. This means that things are barely, if at all, under control. If your company is a start-up, then you are really going to a bad place in a hurry. If your company is an established leader in a market, you're toying with the prospect of not being able to respond quickly to shifts in the competitive landscape.

To be able to create new features and fix bugs, programmers need to have clear code. They can't work effectively with code which is all but incomprehensible, filled with thousands of lines of cleverly-implemented and poorly-formatted 'if' statements that take a genius to follow. Most of the time, in these cases, programmers will privately question how much longer they'll be able to do anything with the code.

There is no quick fix to this problem, but there is a remedy which will move things towards professionally-engineered software and a safer place for your company. The remedy is refactoring -- the common-sense practice of tidying up the code when it starts to become unclear. The best time to do this is when the logic behind the code is still fresh in the mind of the programmer who thought up the logic and wrote the code. Lacking that, programmers should regularly be assigned projects to refresh their memories and refactor sections of code which cause migraines.
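As a contrived before-and-after sketch of what refactoring means in practice -- the behavior is identical, but the intent now reads off the page:

    class FiringRules {
        // Before: clever, compressed, and painful to follow.
        boolean canFire(int shots, int hits, boolean sunk, int turn) {
            return !sunk && (shots - hits > 0 ? turn % 2 == 0 : false);
        }

        // After: the same logic, tidied up so the next programmer's
        // head doesn't hurt.
        boolean canFireClearly(int shots, int hits, boolean sunk, int turn) {
            boolean shotsRemaining = (shots - hits) > 0;
            boolean isPlayersTurn = (turn % 2 == 0);
            return !sunk && shotsRemaining && isPlayersTurn;
        }
    }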

For releases which can't get out on time, and which seem to be hobbled by surprise bugs on the eve of release, there is a strong likelihood that the projects which contributed code to the release were never properly planned. What I mean by "planned" is not the "big upfront design and Gantt chart", but rather an initial pass to identify all of the tasks each project involves, to get a sense of how much time is required and to identify risks and pitfalls, followed by regular updates by the programmer to keep the plan current. This requires a bit of effort and discipline at first, but programmers who use this practice (including myself) find our projects wind down in quite a boring, unexciting and on-time fashion. This practice can be taught, and is easier to learn if programmers are provided with appropriate (read: easy-to-use) planning tools.

For service packs which cause major embarrassment, there is a strong likelihood that the thinking behind the code has faded away, and programmers are "reverse-engineering" the thinking from the code (which often makes their heads hurt), making mistakes in the process. These conceptual mistakes lead to programming errors, and often to large problems in service packs.

A solution for this is to introduce into the software development organization one or more technically-savvy and articulate people who interview developers about the thinking behind their code. The results of these interviews are incorporated into the source code in some convenient fashion (perhaps with Doxygen) to provide the next programmer to work on the code with the explicitly-stated thinking of the last programmer who wrote it. Having programmers add this commentary themselves has routinely fallen flat because programmers generally don't like to write, can't write articulately, and have a hard time making sure they have addressed all of the relevant questions. An independent interviewer will naturally drill down in the necessary places, and will hopefully be able to articulate the relevant information in a way that quickly gets the next programmer up to speed.

On top of having to "reverse-engineer" thinking from code, programmers probably aren't communicating with the testing staff about exactly what needs to be tested, which is why a service pack goes out with major problems. This problem can be remedied by providing a mechanism for programmers to give testers a basic test plan for each change to the software. It sounds like a lot of extra work that will slow the programmers down, but remember, this is about getting a quality product out to your customers. Just because programmers don't want to be disciplined is no reason to seriously inconvenience your customers and embarrass your company (not to mention lose profits). Programmers won't naturally become disciplined without regular training and coaching, though, so it is your responsibility to introduce this into your software development culture.

Long hours and weekends for programmers are a sign of a number of problems. First, it is very likely that projects were not thought through thoroughly enough to validate time estimates. There was probably a lot of wishful thinking, and not too much consideration of real risks. The remedy for this problem is the task planning I talked about above for getting releases out on time.

Second, it might be that the code is making the programmers' heads hurt, and they're having a hard time reconstructing the thinking of the previous programmers. Clearly, if the previous programmers had been interviewed, their thinking documented, and the code left in a clear state for the next programmers, the amount of time new programmers would need to accomplish their work would be significantly less.

Third, the morale of the programmers is likely eroding, and there is a more-than-healthy amount of tension with management. Programmers stop making their best efforts and start cutting corners, which sets the stage for bugs which appear only after the software is released, and for code which makes programmers' heads hurt. It's great to have a team which works hard, but when the work begins to erode your programming staff, you're actually damaging your company. Many companies have faded from prominence for exactly this reason.

Experience has shown it is counterproductive to program for more than six hours per day. Programming is an activity of the human mind, not a machine. The human mind needs to be refreshed and enriched. If forced to spend its existence in a monotonous state of brute-force programming and bug fixing, its effectiveness erodes. People burn out. People move on to other jobs (often before spending time with an interviewer who divines and documents the thinking behind their code).

There are other red flags which I will write about in coming segments, but don't wait. Take a few minutes to find out from your programmers where things stand, and put in place measures which will vastly increase the long-term health of your software source code and programming organization.



Welcome to Coaching Comments!

Someone once said something along the lines of, "If bridges were built like software, you'd have to be foolish to cross them regularly."

Isn't it about time that the software development profession rose above its pedestrian level and became a real engineering discipline? Today there is a wealth of experience and a panoply of tested software development techniques that form the backbone of what could become true, professional software development. For example, design patterns provide a vocabulary of "building blocks" to transform amorphous piles of code into comprehensible collections of mechanisms. Extreme programming draws on human nature to structure the development process so it makes steady forward progress. Task planning yields believable schedules and ferrets out risks and wishful thinking.

The beauty of this particular point in time is that theories are no longer just theories -- bleeding-edge software developers have generated volumes of data about what works in practice and what doesn't. Now is the time for applying what works!

This column is for software developers and their managers to learn about and to reinforce their knowledge of these software practices. I'll try to be both a teacher and a coach. Your feedback will be greatly appreciated.

Here's to raising the software craft to a true engineering profession.