ASQ Software Division 10th ICSQ (2000) Notes

Scott Duncan
Mar 4, 2022

I used to attend Agile and quality related conferences years ago and have saved notes from many of them. While some topics have become overtaken by events since then, there still seem to be some useful ideas that I’d like to pass along.

Yourdon Keynote, “The Revolution for Just Enough Quality”

Ed Yourdon delivered the opening keynote on the subject of “good enough quality.” Yourdon began, however, by discussing “light vs heavy” software process, acknowledging that, for software where the consequences of failure are severe (loss of life, of money, etc.), more formal processes and higher quality concerns are appropriate. However, he feels such an approach is “unnecessary, impractical, and even undesirable in many high-pressure projects.” Yourdon said that “what users really want”, instead of defect-free software, “is software that’s cheap enough, fast enough, feature-rich enough, and available soon enough (i.e., “good enough”)” which is the environment which “Silicon Valley lives with … every day.” Yourdon feels achieving such quality requires “carefully defined processes that involve just enough investment of time, energy, and resources to achieve the desired result.”

My impression of the process debate in the software industry has been that the industry’s problem is not really light vs heavy process but of consistency in process. When pressure occurs in projects, established development (and business) practices are often bypassed largely just to meet the schedule, not because anyone consciously determines they are not needed. As to “good enough” quality, the industry seems to lack the (public) historical data to be able to judge what “good enough” means, i.e., where the appropriate thresholds exist for different quality expectations and what can be done to achieve, and assure we have achieved, those thresholds. The result is that “good enough” ends up meaning “in whatever shape it is when the ship date arrives.”

Yourdon says the software industry has failed to achieve “good enough” software for a broad variety of reasons:

§ We have a tendency to define quality only in terms of defects.

§ We assume that fewer defects equals better quality and that “‘mo’ better” quality is always preferred by the user.

§ We tend to define quality (defect) requirements/objectives once, at the beginning of the project and keep it fixed until the bitter end.

§ We’ve been told for such a long time that processes are crucial, that we often forget that processes are “neutral” — a fool with a “process-tool” is still a fool.

§ We pursue quality with a fixed process that we define once, at the beginning of the project (or, even worse, for all projects in the whole company).

§ We underestimate the non-linear tradeoffs between such key parameters as staff size, schedule, budget, and defects.

§ We ignore the dynamics of the processes: time-delays, feedback loops, etc.

§ We ignore the “soft factors” associated with the process like morale, adequacy of office space, etc.

In comparing light vs heavy processes, Yourdon stated that their differences are most apparent in their degree of documentation, frequency of reviews and approvals, and degree of decision-making authority. Choosing one over the other, he said, requires looking at aspects of four criteria: project cost, project duration, staff size, and risk assessment (consequences of failure). All of these, of course, are part of traditional project estimation and project management as a project progresses.

Yourdon then asked, “If all this stuff is obvious, why aren’t we doing it?”

Strategically, Yourdon claimed many systems development problems are caused by conflicts between short-term and long-term issues, which manifest themselves in terms of:

§ Budgets — e.g., who pays the penalty for lousy development, in terms of higher-than-necessary maintenance costs?

§ Financial rewards (e.g., bonuses, salary increases, stock options, etc.).

§ Promotions and transfers.

§ Corporate culture (How are “heroes” identified? What legends and myths are told to future generations over the corporate campfire?).

If these issues are not identified and resolved at the top of the organization, it’s unrealistic to assume a project manager or the end-user will resolve them with a budget for systems development/acquisition.

Tactically, Yourdon claimed four things get in our way. We do not:

§ have a clear way to define project success;

§ understand the tradeoffs between cost, schedule, quality, and risks;

§ resist the negotiating “games” when estimating; and

§ know when and how to break the rules.

When it comes to what “success” means, Yourdon said that “many projects succeed or fail at the very beginning, before any technical work is done” and have a problem identifying “who has the right to declare success.” Unfortunately, “success” can mean many conflicting things such as finishing on time, staying within budget, delivering the required functionality, and providing a “good enough” level of quality, the combination of which “may prove impossible to achieve.” Therefore, “success often depends on agreement as to which areas can be compromised or satisfied” and the risk then becomes a “lack of realistic triage at the beginning of a project” which is why Yourdon claims failure/success occurs so early.

Yourdon observed that discussion of tradeoffs between cost, schedule, and quality usually occurs too late in a project, after initial optimism fades, and that decisions are made under pressure and under the assumption that such tradeoffs are linear. (The famous example is that one can trade people for time, e.g., if you have 10 people on a project with ten weeks of effort left, you can get it done in 5 weeks by adding 10 more people.) Usually the only reasonable tradeoff late in a project is to reduce the functionality to be delivered by the scheduled date, assuming, of course, that there is no agreement to change the date. Yet other tradeoffs are often attempted without accepting the reality that much of the work may end up only partially finished and may be lost forever. Yourdon claimed that ordinary people cannot handle the relationships between cost, schedule, and quality (because they are really “non-linear, third-order polynomials”) without appropriate models of the software process and simulation tools. (I have proposed that the company I am working for should purchase one of the best tools for this: iThink from High Performance Systems. I have used this tool and have developed such models in the past for project estimation and in-process re-estimation. Yourdon recommended a book by James McCarthy entitled Dynamics of Software Development. I am not familiar with this book, as I have previously used one entitled Software Project Dynamics by Abdel-Hamid and Madnick, which discusses actual modeling of software development dynamics.)
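The linear fallacy in Yourdon’s example can be sketched with a toy model in which each pair of team members costs a small amount of communication overhead. The overhead factor here is invented purely for illustration; this is a sketch of the idea, not a calibrated model of any real project.

```python
# Toy illustration of the non-linear people/time tradeoff.
# All numbers are hypothetical; nothing here is calibrated to real data.

def naive_weeks(remaining_person_weeks, people):
    """The linear assumption: work divides evenly among people."""
    return remaining_person_weeks / people

def brooks_weeks(remaining_person_weeks, people, overhead=0.02):
    """Crude non-linear model: every pair of people consumes a fixed
    fraction of a person-week per week in coordination."""
    pairs = people * (people - 1) / 2
    effective_people = people - overhead * pairs
    if effective_people <= 0:
        return float("inf")  # the team drowns in coordination
    return remaining_person_weeks / effective_people

remaining = 100  # 10 people x 10 weeks of effort left
for team in (10, 20):
    print(team, "people:",
          round(naive_weeks(remaining, team), 1), "weeks (linear) vs",
          round(brooks_weeks(remaining, team), 1), "weeks (with overhead)")
```

Even this crude model shows the point: doubling the team from 10 to 20 buys far less than the “5 weeks” the linear assumption promises, because coordination cost grows with the square of team size.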

On the subject of negotiation “games,” Yourdon referred to an article by Rob Thomsett from the June 1996 edition of American Programmer (now Cutter IT Journal) entitled “Double Dummy Spit, and Other Estimating Games.” I have a summary of this material in my software estimation course materials. It uses a humorous approach to address the problem of more serious pressures applied in project costing and scheduling. Yourdon had several pieces of advice with regard to strategies for negotiating project estimates:

§ Don’t get tricked into making an “instant estimate” — ask for time to think about it (a week, a day, even an hour).

§ State the estimate in terms of confidence levels or ± ranges rather than single figures.

§ Make the customer, or other members of the organization, share some of the uncertainty.

§ As project manager, see what saying the following achieves — “I don’t know precisely when we’ll finish, but I’m more likely to be able to figure it out than anyone else in the organization. I promise that as soon as I have a more precise estimate, I’ll tell you right away.”

§ Do some reading and research to become better at negotiation:

o Bargaining for Advantage: Negotiating Strategies for Reasonable People, by G. Richard Shell (reissue edition, Penguin Books, June 2000)

o Getting Past No: Negotiating Your Way from Confrontation to Cooperation, by William Ury (Bantam Doubleday Dell, 1993)
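Yourdon’s second piece of advice above, stating estimates as confidence levels or ± ranges rather than single figures, can be sketched with the common three-point (PERT) formulas. The task durations below are hypothetical, chosen only to show the shape of the calculation.

```python
# Sketch of stating an estimate as a range rather than a single figure,
# using the classic three-point (PERT) beta approximation.
# The optimistic/likely/pessimistic figures are invented for illustration.

def pert_estimate(optimistic, likely, pessimistic):
    """Return (expected value, standard deviation) per the PERT formulas."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

expected, sd = pert_estimate(4, 6, 14)  # weeks
low, high = expected - 2 * sd, expected + 2 * sd
print(f"about {expected:.1f} weeks, "
      f"roughly {low:.1f} to {high:.1f} weeks at ~95% confidence")
```

Presenting the result as “about 7 weeks, somewhere between 3.7 and 10.3” makes the customer share the uncertainty, which is exactly the negotiating posture Yourdon recommends.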

Ultimately, when rational negotiation breaks down, Yourdon had the following “break the rules” suggestions:

§ Quit (the project or the company) [if things get too bad, vote with your feet].

§ Appeal to a higher authority [assuming there is one who cares].

§ Go see the movie Gladiator, and learn to say, like Russell Crowe, “We who are about to die salute you!” [or, in other words, just tough it out].

§ Decide which “rules” you’re going to break in order to achieve an “irrational” set of schedule/resource demands that have been imposed upon you.

On this latter point, remember that Yourdon advocated (negotiation of) reduction in functionality as the only normally acceptable approach to tradeoffs. Some sort of phased approach to delivery might be enough of a departure from “the rule” both to “break the rules” and to reach an acceptable solution.

Yourdon closed by discussing the movie (and book) Pay It Forward and how the idea behind them could be applied by the software industry. Basically, Yourdon said the idea was to “do something for 3 people they would have trouble doing for themselves and then, instead of asking them to pay you back for the favor, ask them to pay it forward to three more.”

From an IT perspective, the possible candidates would be co-workers, your immediate boss, subordinates (project team members who report to you), or (as Yourdon put it) “geeks on a project who really need your help.” Yourdon’s suggestions for what could be done in an IT context were:

§ Give the person/people you select a detailed, worked-out example of a high-quality requirement spec, or design, or test plan, etc.

§ Break the rules — and thereby take some political risks on their behalf — so that they’ll be able to devote more time and resources to doing quality work, if they have any inclination to do so.

§ Take home a low-quality piece of work produced by a co-worker, boss, or subordinate, and spend an evening or weekend re-doing it with higher quality — and then return it anonymously, with the suggestion that if they found it helpful, they should consider paying it forward.

In applying this to end-users, Yourdon suggested possible candidates were the end-user who will actually use the system, that person’s boss, or the head of that person’s department/division. Yourdon’s suggestions for what to do were:

§ Give them a detailed, worked-out example of a high-quality requirement spec. Who knows: maybe they’ll show it to 3 other end-users, and maybe someday it will lead to a higher-quality development effort.

§ Break the rules — and thereby take some political risks on their behalf — who knows, maybe they’ll be able to help you develop a higher-quality system. Offer to meet operational end-users off-site, on their own time, to find out more about their needs.

§ Offer to spend an afternoon, an evening, or a weekend doing their job, especially if they look so tired, frazzled, and overworked that they can’t pay attention when you’re interviewing them about their requirements for a new system.

And finally, from a general societal perspective, Yourdon suggested candidates would be family (e.g., spouse, kids, siblings, parents), friends and neighbors, and people who “support your infrastructure” (e.g., police, the guy at the newspaper stand, the waitress at the coffee shop who smiles, no matter how grumpy you are). For them, Yourdon suggested

§ Without being obnoxious or condescending about it, show them by example the difference between doing the best job you can possibly do, versus the “least you can get away with.”

§ Look for opportunities to do “pay it forward” favors for the people in your environment whom you normally ignore.

§ Take the initiative to do something nice/important for them that they normally wouldn’t even ask for, because they know that you’re too busy doing “important work” for your employer.

In conclusion, Yourdon emphasized that we, in the software industry, don’t get to decide what constitutes “just enough” quality; our customers do, and they don’t have the luxury of spending infinite time and $$ in order to get “perfect” technical quality. While Yourdon agreed that changing the current degree of quality requires changing processes and procedures, he felt, more importantly, that it involved “changing the politics and culture of the organization.” And, to him, this meant that “you can’t change the status quo if you’re determined to continue being part of the status quo.”

Bach Keynote, “Testing in the Next 10 Years”

James Bach, whose background is in software testing consulting, delivered another keynote talk. Bach’s most recent effort has been development of a testing approach for Microsoft, which he calls the General Functionality and Stability Test Procedure. It consists of a battery of 5 major testing tasks, done in any order, but which all must be done to consider testing to have been completed. Bach did not go into detail about this and I am not sure if it is proprietary to Microsoft or not, but I will pursue it with Bach. It may still belong to Bach on a for-profit basis, i.e., he will not just give it away.

Bach’s talk focused on a skill-based, rather than process-based, approach to software testing. Bach advocated an on-the-job apprenticeship model common in many arts and crafts professions, in conjunction with college and university programs that properly focus on testing skills. One such program under development is at Florida Tech, and another speaker, Charles Engle, noted the need for an overall strong software-engineering program in parallel with the computer science focus of most colleges and universities.

Bach suggested that organizational maturity, given the volatile community in which he consulted (Silicon Valley based, and other, high-tech startup and commercial product firms), was difficult to maintain due to staff turnover. Experience just tends to walk away, causing some to feel they must rely on a heavily documented process approach. Bach’s main answer to this, in the context of testing, at least, was greater reliance on development of individual skills and an acceptance of loss. Bach did not feel most people tended to view testing areas as where they would stay for ongoing career growth, so one could not assume that the problem of testing experience disappearing would be solved over time. That is, a decade from now, he felt there would be the same experience drain in testing areas.

Given his experience in commercial product development, he advocated encouraging the growth of “ad hoc testers” who combine test design and test execution to explore a product and are guided by inference, heuristics, and technological insight, not by specs, code, or analytical methods. Such people minimize test documentation because it slows them down, see themselves as risk-focused, and have always thrived in a market-driven software testing community. A number of people active in software testing research and consulting were trying to take ad hoc (heuristic) testing “professional,” i.e., develop skills to make such an approach a true discipline. Bach mentioned work by

§ Cem Kaner on specific methods of concise test documentation;

§ Johanna Rothman and Brian Lawrence on “soft systems” requirements analysis;

§ His and Kaner’s work on a teachable discipline of rapid, exploratory testing;

§ His and Whittaker’s [an instructor at Florida Tech] work on an approach to testing software using heuristic “attacks”; and,

§ Brian Marick’s ideas on better ways of relating to developers.

Bach stated that professional heuristic testing, as opposed to random, unfocused testing, should be based on a variety of existing areas of knowledge such as:

§ Cognitive Psychology (thinking and perception)

§ Family Psychology (small group behavior)

§ Economics (dynamics of software businesses)

§ Graphic Design (effective documentation)

§ Decision/Game Theory (rational decisions despite uncertainty)

§ Forensics (constructing persuasive arguments)

§ General Systems Theory (deriving reliable heuristics about dynamic open systems)

§ Technology (heuristics about specific systems)

Bach did say that a heightened appreciation for how people learn and how they use heuristics to drive their personal processes was important given market volatility. He mentioned a variety of books that discuss the subject of cognition and learning, such as The Social Life of Information, Tools of Critical Thinking, and Cognition in the Wild. I am not familiar with these books; however, I have, in the past, been associated with empirical studies of programmers and designers looking at the cognitive skills needed (and employed) by each. One of the references Bach uses notes a finding repeated in various studies: skilled experts in various fields cannot verbalize how they do what they do. When they are “watched,” as it were, and what they do is logged by someone else, they will agree that the resulting record is what they do, but asking them to document how they do their job will not produce the same, comprehensive description of the work.

Finally, Bach said he felt organizations needed to develop better ways to “propagate trust upwards rather than control downward.” Some might view this as saying management should not attempt to require any formal methodology or process, allowing people to do what they wish since, not doing this, suggests a lack of trust. However, I do not believe Bach was asking for implicit trust. I believe he was asking for trust based on voluntary demonstration of effectiveness and adherence to good practice. The audit aspect of a comprehensive software quality assurance system does this by checking “compliance” against agreed upon practice. Given some acceptable thresholds of compliance, management does not have to check individual projects, management, or staff activity. Addressing individual issues of process adherence can be left at the lowest appropriate levels. If, on the other hand, wholesale evidence of failure to meet minimum practice exists, management cannot have the “trust” required and more formal process usually results.

Bach predicted that, in 10 years, for 95% of the testing world, nothing much would change because

§ Testing practice will continue to be dominated by local influences.

§ Articles, books, and papers will say substantially the same things as they do today.

§ Test automation tools will get better, yet still be thinly deployed, poorly applied, and lag far behind the general technology curve.

§ Although certification programs will be firmly established, relatively few testers will participate.

§ And, in general, testers will be no more respected than they are today.

However, for the top 5%,

§ Testing sub-specialties and communities (based on industry segment or technology) will become self-aware and aware of each other.

§ Each community will have evolved its own rich lore of methods and heuristics.

§ Forums for test methodology dialog will be established.

§ Heuristic testing will have emerged as a discipline, incorporating wisdom from fields beyond computer science.

§ Professional certification in heuristic testing will be available.

§ More resources will be available to support self-education.

Other Talks

During those times when I was not delivering one of my own talks, I attended others that addressed:

§ the Software Engineering Body of Knowledge (SWEBOK) project (by Jim Moore of the IEEE Computer Society and one of the chairs of the project),

§ two talks on improving testing’s impact on the overall software lifecycle (by Howie Dow and Robin Goldsmith, both consultants in software testing), and

§ key impediments to improving software development effectiveness and quality (by Charles Engle, formerly of the SEI and now manager of development for a company producing software used in infantry and armor training simulators, including some used at Fort Benning).

The SWEBOK Project (Jim Moore)

The Software Engineering Body of Knowledge (SWEBOK) project formally began about 3 years ago though work in this direction had begun a year before that. It was initiated as a joint project of the IEEE Computer Society (IEEE-CS) and the Association for Computing Machinery (ACM) supported by the research resources of the University of Quebec at Montreal (UQAM). The project has the following objectives:

§ “Promote a consistent view of software engineering worldwide;

§ “Clarify the place of, and set the boundary of, software engineering with respect to other disciplines;

§ “Characterize the contents of the SWEBOK;

§ “Provide a topical access to the SWEBOK; and,

§ “Provide a foundation for curriculum development and individual certification and licensing material.”

I participated in reviewing the output of the first two rounds of this project, the strawman and stoneman versions, with the latter having recently been completed. The strawman version consisted of selected professionals writing the initial documents for each section of the SWEBOK and then a group of some 100 individuals around the world reviewing specific sections in their areas of expertise. The stoneman edition brought the reviewed and revised work of the strawman version together into one document that was then reviewed by approximately 200 reviewers. The next step, the ironman version (phase 1), will be to take the revised version of the stoneman SWEBOK, present it to industry, government, and academia, and ask that they attempt to use it to assess how well it:

§ Describes the professional categorizations and work duties of professionals now working for them;

§ Provides a basis for possible software engineering certification and licensing efforts which already exist (e.g., Texas and some provinces in Canada) and those anticipated; and,

§ Serves as a viable basis for development of curricula at the college and university level that can then be certified by appropriate academic boards overseeing engineering and computing education.

The high-level categories that now define the SWEBOK are:

§ Requirements

§ Design

§ Construction

§ Testing

§ Maintenance

§ Configuration Management

§ Quality

§ Engineering Tools & Methods

§ Engineering Process

§ Engineering Management

with 5–13 subcategories under each one.

SWEBOK has a website (www.swebok.org) where detailed information can be found about project sponsors, goals, and categories of knowledge.

Testing’s Impact on the Software Lifecycle (Howie Dow and Robin Goldsmith)

There were two talks together in one session on the topic of how testing organizations, by asking the right questions as they develop their test plans, can contribute to the overall improvement of project planning and product design. The first speaker, Howie Dow, manages various testing activities at the Compaq Enterprise Testing Lab. The second, Robin Goldsmith, is an independent consultant.

Dow’s environment means that he is often asked to test applications/system products with which he has no familiarity. Sometimes he is testing for Compaq and other times he is testing for an external client. Dow offered a set of questions he felt he needed to have answered in order to understand how to plan testing:

§ What does the product do?

§ What alternatives were explored and why were they rejected?

§ Who are the people on the project?

§ What is the development approach?

§ What is the architecture?

§ Who has the most to gain or lose with success or failure of this project?

§ Why was the development approach selected?

§ What is the target hardware platform and why was it selected?

§ What other software will be needed with this product?

§ What is the emphasis of the project, what is being included and excluded?

§ Who started the project and why?

§ Why is this project being developed?

§ What is the business environment and how does this project “fit in”?

Many people in the audience, of course, were from environments more like that at TSYS where they were only testing their own software development efforts. Many of the questions from Dow’s list were not relevant to them. Dow’s point, however, was that a testing organization should have questions like this, i.e., about the business and technical context of the project, which go beyond matters of specific features, numbers of transactions, etc.

It was pointed out that, for an internal testing organization, the context questions are so obvious to the staff that having such a list hardly seems relevant. On the other hand, some referred to Bach’s presentation, suggesting that people new to the organization (and to testing) could use a set of such questions to help them understand the context of the testing they would be doing. Perhaps, then, the answers to such questions should be an explicit part of training in internal testing organizations rather than a set of client-focused questions, as Dow required.

Goldsmith’s approach to software development process, be it testing or not, is “What’s in it for me?” That is, his approach to securing support and buy-in focuses on supplying reasons why various quality and process approaches could bring value to others in the organization. For example, he began by discussing the potential value of providing specific examples of data and display formats in requirements as a way to reduce verbiage and make developer understanding of what was desired more likely.

A more dramatic example had to do with measuring test effectiveness and then converting that into the cost of allowing defects to escape from one phase of the lifecycle and be found and repaired in later phases, including after the software is delivered to production (or to the customer). Goldsmith provided a pair of examples (A and B), noting, of course, that the numbers would be organization-specific. The two were compared to illustrate the impact of earlier defect detection in economic terms.

Goldsmith’s question was, “Who, in your organization, would feel there is ‘something in it for them’ to have this kind of information?” Who would like to know whether early detection of defects could save them approximately 46% of the cost of finding defects? Goldsmith’s point was that, if you do not collect this sort of information and cannot say what it is costing you to fix a defect at various points in your lifecycle, it is hard to conceive of asking the question. But without asking such questions, it is almost impossible to focus on the most critical places to improve or justify that improvement efforts actually resulted in anything worthwhile.
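The kind of A/B comparison Goldsmith showed can be sketched as follows. The per-phase repair costs and defect counts below are invented for illustration only; they are not Goldsmith’s figures, and, as he stressed, real numbers would be organization-specific.

```python
# Hypothetical sketch of an escape-cost comparison between two scenarios.
# All costs (dollars per defect) and defect counts are invented.

REPAIR_COST = {"requirements": 100, "test": 1000, "production": 5000}

def total_cost(defects_found):
    """defects_found maps phase -> number of defects repaired in that phase."""
    return sum(REPAIR_COST[phase] * n for phase, n in defects_found.items())

# Scenario A: most of 100 defects escape to test and production.
a = total_cost({"requirements": 10, "test": 60, "production": 30})
# Scenario B: the same 100 defects, but reviews catch more of them early.
b = total_cost({"requirements": 50, "test": 40, "production": 10})

savings = (a - b) / a
print(f"A: {a}, B: {b}, savings from earlier detection: {savings:.0%}")
```

The exact percentage depends entirely on an organization’s own cost data, which is Goldsmith’s point: without collecting per-phase repair costs, no one can even run this comparison.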

Impediments to Software Effectiveness and Quality (Charles Engle)

Engle brought up most of the common issues discussed in software process and quality. Given his background in government contracted DoD software, of course, there were some which are “common” in that environment, but not as much so in other software development situations. For example, Engle spent some time discussing the impact of the government preference for commercial off-the-shelf (COTS) software products compared to custom software and how the hidden costs of then enhancing that software to fit specific situations can exceed the cost of having built it to the specific specifications initially.

Engle’s main contribution, though, at least from my perspective, was his (too) brief discussion of “contributory,” “overhead,” and “detractive” aspects of software. I’ve stated his overall categorization below and commented on the significance of each more fully than Engle did himself.

§ Engle included the major deliverables of the software lifecycle (e.g., requirements, design, code, tests) in the “contributory” category and stated that, in general, they were to be maximized, i.e., the greatest effort should be focused on these things.

I would generally agree with Engle’s characterization of this category and the action/reaction to be taken with regard to it. I would point out, though, that most automation has been focused on this category (specifically code), perhaps naturally so, but also perhaps to the detriment, in many cases, of the “overhead” category where the tendency has been to avoid, instead of automating, much of it. (The title “overhead” itself is an indication of the view taken of things included within it.)

§ In the “overhead” category, Engle placed such things as configuration management, planning and tracking, and metrics which, he claimed, though important, should be automated to permit the least amount of manual effort possible devoted to them.

While configuration management (at least of code) is usually pursued with considerable automation, the other topics, and configuration management of the other “contributory” items, are rarely as thoroughly pursued. This, of course, reveals the strong emphasis code and code-related deliverables still receive in software development. Somewhat surprisingly to me, Engle did not discuss analysis of the information from “overhead” systems. I would argue that such analysis should not be so automated that adequate reflection on the data does not occur regularly.

§ Finally, Engle placed faults, scrap, and rework in the “detractive” category, emphasizing the need to eliminate these as fully as possible.

While much effort is devoted to this category (some published reports claim 20%–40% of a project’s effort), the use of information available from the “overhead” systems and processes is rarely employed to achieve the goal of eliminating such effort.

Final Thoughts

It is important to remember that achieving quality is not about people “doing their best.” If you believe you really have people who do not try to do their best, then they should not be retained. However, with so many opinions about what is “best,” a large organization without a shared vision of quality faces rework and lower quality despite people “doing their best.” A well-defined (though not at all necessarily “heavy”) process gives people a roadmap so that they can reasonably judge their own level of compliance. If reasonable consistency is demonstrated, management oversight can be based on informed, rather than blind, “trust.” If management cannot confidently claim they know what is going on and what behaviors their people support, there is no real trust. Under such circumstances, reports of software errors, evidence of customer dissatisfaction, and audit non-compliance data can shake any trust that may exist.
