SEPG 1999 Conference Notes
I used to attend Agile and quality related conferences years ago and have saved notes from many of them. While some topics have become overtaken by events since then, there still seem to be some useful ideas that I’d like to pass along.
[The Software Engineering Institute was established in 1984 as a federally funded center to advance the state of the art of software development, initially for the benefit of government projects. It was (and still is) located at Carnegie Mellon University in Pittsburgh, PA. One of its early efforts was creation of the Capability Maturity Model (CMM) used to assess the maturity of an organization’s software development process. A major event held for many years was the Software Engineering Process Group conferences to bring practitioners together to discuss current software engineering work and practices. I attended several of these. This post is the first of a couple about my attendance at these conferences.]
Monday through Thursday of last week, I attended the annual Software Engineering Institute (SEI) Software Engineering Process Group (SEPG) Conference in Atlanta. As I noted in my message the week before I went, an SEPG is typically the group of people at a company that is involved with setting direction for implementing and monitoring software process improvement (SPI) related activities, usually using the Software CMM as the model.
This Conference started about a decade ago with ~30 people in Pittsburgh where the SEI is located. It has grown steadily over the years and drew ~1800 attendees this time, bringing together management and staff from around the USA, Canada, Mexico, Europe, Asia, Australia, and South America who are concerned with process assessment and improvement. Both tutorials (half and full days) and presentations (45 minute sessions) are offered: the former, on Monday and Thursday; the latter, on Tuesday and Wednesday.
What follows below are highlights of the tutorials and sessions I attended, including an evening “Birds of a Feather” (BOF) session on working with middle management. I had sent out an email regarding conflicts in sessions and my plans for attendance, expecting the schedule not to change significantly, as it had not in the past. I also expected a proceedings volume (paper and/or CD-ROM), as had been the practice in the past. This time there were substantial rearrangements in the schedule and no proceedings — the latter will be sent “within a few weeks” on CD-ROM. However, there was usually no shortage of handouts at each session, so I was able to collect quite a large quantity, even from sessions I did not personally attend. (To get an idea of the logistics of the Conference, imagine 1800 people looking to fit into about 6 parallel sessions — an average of 300 people per room — with only a few minutes between sessions, spread across two ballroom/exhibit levels of the hotel.)
I must say that the week started with disappointment since I did not find the Monday tutorials or many of the Tuesday sessions to be as informative as those I had attended in the past. I had heard a lot of the material before; in past years, that could usually be avoided based on the abstracts. This time, the abstracts seemed not to be particularly representative of many of the sessions, at least not in terms of providing important context for the ideas being “advertised.” I think you will see what I mean as you read through the material below.
The official Conference began with the Monday tutorials, and I expected to be in one entitled Practical Software Measurement, thinking it would go into some detail regarding actual measurement. Unfortunately, after the first 90 minutes, they had not gotten into such material and it was clear this was just going to review the status of the (largely DoD and contractor) effort to put a “standard” into place for creating a measurement program. Having the DoD document set and knowing of the US TAG effort to adopt this for ISO standardization on this topic, I opted to move along to another session to see what might be there.
The next session I tried was a half-day one on the culture change surrounding attempts to implement the S/W CMM. It was better; however, much of the material was elicited from the audience at a rate I could not keep up with, and it will only be available on the CD-ROM. It was a collection of “reasons” SEPG members have been given why people in their organizations felt the CMM, or process change in general, would not succeed, i.e., what the presenters called “interfering assumptions.” Being able to address such objections, based on my work at other companies, is quite important, especially without senior management “urging” middle management to support such a program. Hopefully, in a few weeks, I will have this list. The presenters had a “Top 10 List” of their own to start off the audience feedback, which will give you an idea of what was being solicited:
· Discipline always interferes with creativity.
· The schedule is fixed, the resources are fixed, the feature content is fixed, the quality level is fixed, too. It’s whatever it happens to be on the delivery date.
· Managing requirements is a waste of time: the requirements change too often.
· We always hire qualified people, so we don’t need any training.
· CMM is just another quality initiative, and, if we ignore it, it’ll go away, too.
· Who’s paying for it? If it isn’t funded, we aren’t doing it.
· We have a defined process, so obviously everyone follows it.
· Our project is different. The process doesn’t apply to us.
· We’ve tried process improvement before, and it didn’t buy us anything.
· We don’t need engineering discipline; we’ve got engineering finesse!
I’m guessing you have heard at least a few of these.
Since I had decided to opt out of the whole-day measurement tutorial and the culture change one was a half-day, I had two choices for the afternoon. One was an “executive overview” of process improvement and the CMM which, since I have given these myself, I decided not to attend, though I did pick up the handouts. The other was entitled “Fast Track to High Maturity” with an abstract that discussed how an organization of some 20+ people grew to over 300 (and continues to grow). It promised a discussion of tools and techniques to achieve quick maturity growth.
The first 90 minutes or so discussed the company and why they chose the site they did to build this center (Barbados). It became clear that they had staffed this organization with carefully picked, experienced people who had a process focus and methodology background from their prior employers. Hence, they were a start-up company, in effect. But the thing that had me leave by the mid-point of the program was when they described how they paid for this effort. They had a large existing software conversion/reengineering business whose resources were used to get this new effort up and going, half of which was going to be process improvement consulting, not product development, anyway. As a tutorial, this was not what I had hoped or felt could be used at many other places. However, I did have a chance to talk to one of the presenters later in the week at another session and will comment on what they said about software quality assurance (SQA) and SEPG functions later.
Tuesday morning started with two keynote addresses. The first was by Rick Harder of BellSouth on lessons learned from beginning to take their IT operation to Level 3 of the CMM. The second, by Karl Wiegers formerly of Kodak and now a consultant, discussed applying existing S/W Engineering practices more diligently.
Harder began by discussing the impetus for change and the need, in most cases, for people to feel uncomfortable about not changing as the biggest motivator. This can come in many forms. For senior management, it was growing dissatisfaction by customers with IT service and threats of outsourcing by their clients (the BellSouth service centers and business units). Harder noted a theme that was echoed often during the week and which I had noted about a decade ago in industry quality and productivity studies: middle management is critical in initiating and sustaining change efforts. One often hears that getting upper management support is critical, and this is true enough, but implementation depends on middle management support. So what would motivate middle managers to change and support change in their organizations, especially when they had gotten to their positions largely based on the behaviors/practices they and their organizations now exhibited? Harder admitted that the improvement effort had to be a matter of company policy with “teeth” in it. He stated that prior “soft sell” approaches, no matter how much company PR and high-level talk accompanied them, had not worked. [This was the identical experience at another regional phone company when millions were spent on formal quality training, but no strong upper management policy directions were set. After several years, the entire effort had died.] Basically, being a part of the program was made “a condition of employment”; however, there were many ways for people to show their support and adapt the program goals to their own improvement needs. Throughout, though, measurements were taken on the impact of change and improvement efforts, running the improvement effort like a “real project” — another common theme throughout the week and one which has been used to great success in several major companies.
Wiegers’ talk addressed what he feels to be a disturbing trend: organizations simply fail to avail themselves of significant software engineering experience and knowledge. One example is the lack of training in techniques and methods that industry experience shows work — as contrasted with training in tools and company process guides. One area where he has noted this in particular is in many IT testing organizations. He asked whether people felt their test staff(s) were familiar with concepts of branch and bounds coverage, path coverage, equivalence partitioning, structural vs. functional testing, use case modeling, etc. Not that they had to use all of these, but were they even aware that such ideas/approaches existed, or were they simply unable to make a choice since they had a narrow range of information (perhaps limited experience) to draw upon? He repeated this theme for other lifecycle phases, noting what he felt was a drop in formal software design compared to a decade ago. When it came to improvement, his main question was whether development teams in organizations were using _any_ form of (self-) improvement approaches. When challenged at one company by someone who asked, in effect, if these ideas were so good why he hadn’t heard of them, Wiegers had to ask how hard the person had been looking for them. (In this case, it was use of design description notations and design review techniques.)
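One of the testing concepts Wiegers mentioned, equivalence partitioning, is easy to demonstrate. The sketch below shows representative-value and boundary-value selection for a hypothetical `validate_age` function; the function and its ranges are my own invention, not anything from the talk:

```python
# Equivalence partitioning sketch: instead of testing every possible input,
# pick one representative value from each class the spec treats the same,
# plus the boundary values between classes.
# The function under test (validate_age) and its ranges are hypothetical.

def validate_age(age: int) -> str:
    """Accept ages 18-65 inclusive; reject everything else."""
    if age < 18:
        return "too young"
    if age > 65:
        return "too old"
    return "ok"

# One representative per equivalence class, plus the boundaries,
# covers the spec with 7 cases instead of hundreds.
cases = {
    -1: "too young",   # invalid class: representative below range
    17: "too young",   # boundary: just below lower bound
    18: "ok",          # boundary: lower bound
    40: "ok",          # valid class: interior representative
    65: "ok",          # boundary: upper bound
    66: "too old",     # boundary: just above upper bound
    120: "too old",    # invalid class: representative above range
}

for age, expected in cases.items():
    assert validate_age(age) == expected, (age, expected)
print("all partitions covered")
```

The point is not the trivial function but the selection discipline: a tester who knows the technique chooses seven values deliberately rather than guessing.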
On the heels of Harder’s keynote, I went to a presentation by BellSouth on involvement of the customer in improvement. In their case, of course, it was the internal BellSouth business units who were the customer for the IT organization(s), not the end-users of telecommunications services. Nevertheless, the speakers emphasized the following things which they felt had to be shared with, and accepted by, their customers if they were to get out of the code-and-fix cycle of endless rework and requirements changes:
· Use of measurement data and an acceptance of its accuracy.
· Insistence on customer participation in project planning and requirements management.
· Agreed upon change management approaches.
· Support for effective software quality assurance (more than just testing).
· Agreed upon configuration management/release criteria for system changes.
The next session I attended was about the Top Ten Reasons Process Improvement Efforts Fail. The speaker, from TRW, had collected them into 3 broad categories: failure of planning and connecting improvements to business goals; the Ready-Fire-Aim syndrome, where no baseline of process capability (usually established through an assessment or audit) exists; and execution problems. I’ll mention the latter items since they reiterate a few ideas heard in the keynote talks.
Hefner, the TRW speaker, said ignoring middle management (hoping senior management exhortations would do the trick) was a major execution/implementation problem for improvement efforts, as middle management stands to lose the most and, therefore, resists change. Middle managers have succeeded by “dealing with chaos,” e.g., schedule pressures, late nights and weekends, last-minute changes, building their reputations for dedication and hard work in such an environment.
Another implementation problem has to do with managers and staff discounting people selected to work on process improvement if they are not among the most respected by their peers. Too often when improvement programs get started, managers look for more “expendable” people to attend the SEPG meetings, staying away themselves and avoiding making it a priority for some of their most effective people. One way around this, taken by another organization that spoke at this Conference (ITT), was to make middle managers very directly responsible for the improvement effort. They could not delegate this to others (who often had to go back to them for “permission” to proceed with actions anyway) nor discount decisions since they, as a group, would make them.
Finally, Hefner stated that failing to train people in the improvement model/approach selected, thinking that, as smart technical folks, they could figure it out on their own, was a barrier to improvement success. Here is where TRW (and other companies) have used consultants when they did not have people experienced in large improvement efforts within their own company, i.e., as trainers to jump-start the effort.
The next talk had to do with Process Improvement in Small Hi-Tech Companies, but it turned out to be another talk about a start-up effort and the efforts to “isolate” the development project from too many external interruptions, using formal change management, requirements management, and fundamental process definitions for various “engineering” activities like system testing, design standards, etc. The information was good; there was just nothing that described effective implementation lessons very clearly.
The subsequent talk was somewhat better, focusing on Process Improvement in Small Companies or Small Projects. The latter was defined as <20 people with “non-conventional lifecycles, limited personnel for performing many roles, short schedules, and combined management and technical roles.” One recommendation to maintain order but limit bureaucracy was the use of checklists of “best practice/ideas” and availability of templates for all required documentation. In particular, the former were strongly encouraged as a way to continually capture results from (design) reviews and (code) inspections which could be fed back into (design and coding) standards, continually improving and growing the knowledge in the organization which could be passed from person to person easily. Another use for such a checklist or best practice/idea approach was in project estimation, building an increasingly more sophisticated, bottom-up estimation approach. The use of measurement to track projects, with centralized support, was also offered as a technique that could be applied through milestone reporting and resource tracking by small efforts given the centralized database and analysis support.
Due to a scheduling change, the next time slot left me largely only one choice of interest, which was the applicability of statistical process control (SPC) in software. My goal was to see just what areas people were measuring and finding useful, hoping their findings might be applicable in guiding initial metrics program efforts. Of all the speakers, one from Boeing seemed to have some of the best advice about applying SPC to software. He said that the program should be aimed “at a level where decisions are made.” That is, at a management level (at least middle management seemed to be his point) where action could be taken expeditiously when the data indicated it. However, he cautioned that SPC was about “stable processes” and that perhaps not all processes in software development could/should be stabilized to the point where such control would be applied. Processes where considerable automation has been achieved might be appropriate (e.g., testing), or where the process and its execution have well-defined and understood tasks (e.g., reviews and inspections and defect tracking within them). Architecture and design tasks, unless supported by formal methods or formal design notations, were unlikely candidates for SPC.
During the last session of the afternoon, two presentations seemed as though they might be interesting: “Maintenance and the CMM” and “What Makes Indian Software Companies Tick?” As I had heard recently that the CMM was gaining great popularity in India and more companies were being assessed through Asia using it, I thought it might be interesting to hear what one quality improvement consulting firm (QAI) had to say about India’s industry. The maintenance topic sounded relevant, but I was also aware that this might have involved more formal CMM adherence than just process improvement guidance. So I decided to try to attend a bit of both to see what each might offer.
The maintenance subject, as I suspected, had a bit more formality associated with it, but did have a few ideas that might be more broadly applied even if formal CMM use was not being pursued. One was to apply measurement effectively to deal with the short-term visibility many projects provided to the organization, i.e., they were over and done with so quickly that anecdotal and experiential data was harder to capture than with projects lasting longer and having more formal post-mortems. Also, since much maintenance work was “reactive,” measures of effectiveness in response to customers were felt to be important and provided a way to introduce measurement to the organization. Use of a time management/tracking tool to highlight concurrent project time was also employed by the company giving the presentation (CIGNA).
To bring SQA more strongly into play, CIGNA emphasized use of periodic maintenance group (not individual project) “reviews” (i.e., audits) to assess general capability to perform the work. Another SQA expectation was formal peer review to avoid the “quick fix” that doesn’t seem to justify independent testing. The goal of this was, quite frankly, to extend accountability for the quality of the work beyond a single individual. And finally, as estimating such small projects was hard to justify using traditional approaches, they strongly encouraged capture of historical data to build up standard profiles of maintenance which would then be tailored, if necessary, by the development group assigned the task with the expectation that most projects, after a while, would have a standard model they could pick (from a “library” of models) that would provide a good basic estimate of time and effort.
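The “library of standard models” idea can be sketched quite simply: classify completed maintenance tasks, accumulate their actual hours, and estimate new work from the history, tailored by the assigned group. The categories, numbers, and helper names below are hypothetical, not CIGNA’s actual scheme:

```python
# Sketch of estimation from a library of standard maintenance profiles.
# Categories, hours, and function names are invented for illustration.

from collections import defaultdict
from statistics import median

history = defaultdict(list)   # task category -> list of actual hours

def record_actual(category: str, hours: float) -> None:
    """Capture historical data as each maintenance task completes."""
    history[category].append(hours)

def estimate(category: str, adjustment: float = 1.0) -> float:
    """Median of historical actuals, optionally tailored by the assigned group."""
    if not history[category]:
        raise ValueError(f"no historical data for {category!r}")
    return median(history[category]) * adjustment

# actuals from completed tasks build the "library" over time
for h in (6, 8, 7, 9, 8):
    record_actual("report-format-change", h)
for h in (30, 45, 38):
    record_actual("new-interface-field", h)

print(estimate("report-format-change"))        # standard model: 8 hours
print(estimate("new-interface-field", 1.2))    # tailored upward by 20%
```

The median keeps one unusually painful task from distorting the profile; after enough history accumulates, most new tasks should match a category closely enough that the standard figure is a good first estimate.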
The talk about Indian companies emphasized several things that were making CMM implementation more effective. By “effective” in this context, the speaker noted that 60% of the companies at CMM level 3 throughout the world were from India and these companies were making maturity advances faster than the world-wide average. The key factors in achieving this were:
· Strong existing ISO 9000 and quality system orientation. [Indeed some US companies have leveraged their ISO experience to quick CMM Level 3 ratings.]
· Acceptance of software as a key national industry with strong government support.
· Active professional activity among developers and use of lessons learned from other companies with strong quality/improvement histories from the USA and UK.
· Since most work is exported to foreign clients, strong process management has been deemed critical throughout the lifecycle.
· Need to demonstrate quality to foreign clients through association with standards and registration efforts.
· High employee turnover rates (15%-25%) coupled with strict contractual delivery dates and quality targets. [The Indian software industry is in great growth and experienced people are aggressively recruited.]
· Active senior management knowledge of and visibility in process improvement efforts.
· Regular self-assessments and formal (third-party) ones — at least every 6 months.
· Individual employee belief that knowledge of CMM-related process ideas are important to their professional growth and marketability.
The speaker noted that the software industry stuck out in this regard, given the very non-systems-focused view taken by the rest of Indian society. He also noted that little ROI data was available to show the actual impact of the process focus, though this was largely, he felt, because of reluctance to make such data public. Finally, he cited the Indian ability to “study for and pass examinations,” an inheritance from the British school system’s traditions, as an explanation for the ability to hone doing “just enough to pass” registration audits and assessments.
The evening session was an impromptu presentation on working with middle management in achieving process improvement. The presenter was a visiting scientist with SEI and a consultant who had a very nice presentation on this subject, but no handouts. I’ve asked that he send me a copy and will pursue this. Much of what was discussed revolved around forming an accurate picture of what middle management faces, what they believe about their position and duties in the company, and how they will react to their management’s leadership in promoting process improvement.
The next morning (Wednesday), there was a keynote talk on testing by Bob Poston and a panel discussing CMMI directions and goals. I found the CMMI panel largely covering material with which I was familiar, but did not know what to expect from the testing talk. I knew the speaker from his work at Bell Labs/Bellcore many years ago and knew of his experience in testing. I was not sure how he would relate this to process improvement. He did so by addressing time to market and development cycle time through improved testing productivity and rework reduction.
Basically, Poston advocated a testing model starting at requirements definition since he claimed 55% of all system level defects get traced back to requirements issues. The addition of “a little formality” to requirements definition, he claimed, would allow the results to be used to reduce additional work downstream in test case design and generation. He advocated this by using a system-level use case tool approach to requirements definition which could address all three of the goals of testing: showing proper implementation of requirements, finding defects, and showing that all code was exercised. (This latter point, he said, was important in addressing potential security and configuration management problems where failure or inability to do so can mean blocks of code that exist in a system but have not been linked back to known requirements: why are they there, then?)
Poston never named the tool, but demonstrated the method, highlighting the most common requirements defects issues (e.g., ambiguity/incompleteness, inability to test, undesirable design implications). The use case model involves defining the use case (an action-object pair), the model diagram (clients and agents of the action), and the “event trace” diagram (showing data flow and response). To this is added data and constraint (logic) specification, completing the information needed to produce a requirement from which tests can be generated. He went on to provide examples of tests that could be applied to the action, information, logic, events, and states implied by the requirement as defined by the use case model, including advice on data sampling.
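Although Poston never named the tool, the core idea of generating candidate test values mechanically from formally captured data constraints can be illustrated. The constraint format and function below are my own invention, not his method:

```python
# Sketch of "tests from requirements": once a requirement's data constraints
# are captured formally, boundary-value test inputs can be generated
# mechanically. The constraint format is invented for illustration.

def sample_points(lo: float, hi: float):
    """Boundary-value sampling for a numeric range constraint [lo, hi]."""
    return [lo - 1, lo, lo + 1, (lo + hi) / 2, hi - 1, hi, hi + 1]

# data constraints as they might be extracted from a use-case specification
constraints = {
    "withdrawal_amount": (20, 500),   # valid range in dollars
    "pin_attempts":      (1, 3),
}

for field, (lo, hi) in constraints.items():
    print(field, sample_points(lo, hi))
```

The payoff Poston claimed comes from the “little formality” up front: because the constraints are machine-readable, every requirements change can regenerate the affected test inputs instead of forcing manual rework downstream.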
It might be of interest to some testing organizations to see if such an approach would be practical. If so, and if experience grew in applying the approach, tool automation might eventually be justified on other projects, especially where development organizations are going to do their own testing. Since Poston worked for many years in the Unix environment, these tools might presuppose C/C++ development platforms. However, the approach might be applicable elsewhere, and other vendors/sources might exist to support it. It may be a possibility for the ISD Interact work, which is Unix/C++ based, or Olin Wise’s web development efforts.
The first regular session on Wednesday morning which I attended was given by Xerox and addressed “14 Important Lessons Learned” about process improvement in a “Rapidly Changing Commercial Environment.” First on their list was getting coaching on SPI since they did not have much staff experience in this area — as TRW noted, opting not to try to “do it themselves” just because they were “experienced in software development.” Next on their list was “using the best people on SPI,” but also those among them who felt motivated to make process changes and improvements. Surprisingly, compared to other “top ten, etc.” lists, Xerox emphasized improved meeting management skills, since a great deal of SPI effort is in decision-making, prioritization, etc., which takes place in meetings and meeting-like atmospheres. Managing SPI “like you would manage a project” was also on their list.
Among things they did not do so well and realized, in hindsight, should have been changed were: not keeping SPI relevant to business goals, keeping (middle) management involved in SPI, making SPI a part of the line-item budget of organizations, and keeping the staff informed of what was happening in SPI efforts. The latter was important since people needed to feel the effort was not another “flavor of the day” activity that would fade without observable results. And finally, they emphasized not giving up on the effort quickly in the face of resistance and difficulties.
The second session of the day really seemed not to have much of interest based on my original schedule, but I noted that one speaker was someone I had known a few years ago doing other work that might be relevant. I sat through part of his session at the end, after going for a while to a session on “Software Project Management ‘Flight Simulators’.”
The “Flight Simulator” topic was about using system dynamics modeling to create a “learning laboratory” for project managers to “experiment” with management practices related to software projects. As the speaker in the other session had been involved in this subject, as had I at Bellcore, I wanted to see where the technology had gone in the past 3 years while I was not actively engaged in developing such models. The “flight simulator” session was familiar material about providing an easy to comprehend (dials and gauges and sliders) interface to the modeling environment for providing management feedback from and input to the model. The latest version of the iThink modeling environment (a very powerful tool, applicable to all business process modeling and available at single-user licenses of ~$1200) allows direct creation of such an interface whereas prior versions had no non-technical interface available, making it harder to create models for end-user manipulation. The speaker at this session has a web site with a basic model already available, apparently for free if you have the latest iThink version to run it.
My former acquaintance (from Litton’s DoD contracting area) had been developing models of the software inspection process and using them at Litton to determine the effectiveness of various approaches to doing inspections. The model was “tuned” with Litton’s historical inspection data on time spent in inspections, defects found, effectiveness of finding defects, etc. Now that we have made contact once again, he may be a valuable source of information on dynamic modeling since he has maintained involvement with various people doing it in the process of writing a book on the subject. He is also on the faculty at USC and works with Barry Boehm on the COCOMO II (software estimation model) project, which is updating the older model to handle newer development methods and languages.
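To give a flavor of such dynamic models without iThink, here is a toy discrete-time simulation of defects flowing out of an artifact during inspection. All levels and rates are invented; a real model would be tuned with historical inspection data, as was done at Litton:

```python
# Toy system-dynamics model of the inspection process: defects flow from a
# "latent" level to a "found" level at a rate driven by inspection effort
# and effectiveness. All initial levels and rates are invented.

latent_defects = 100.0   # level: defects present in the artifact
found_defects = 0.0      # level: defects removed by inspection
effectiveness = 0.03     # fraction of latent defects found per inspection hour
hours_per_week = 10.0    # inspection effort applied each step
dt = 1.0                 # time step = one week

for week in range(1, 9):
    # flow: defects found this step, proportional to what remains latent
    find_rate = effectiveness * hours_per_week * latent_defects
    latent_defects -= find_rate * dt
    found_defects += find_rate * dt
    print(f"week {week}: found {found_defects:6.1f}, latent {latent_defects:6.1f}")
```

Even this crude model shows the characteristic diminishing return of inspection time (each week finds fewer defects than the last), which is the kind of behavior a manager can “experiment” with in a flight-simulator interface: change the effort slider and watch the curves respond.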
I had hopes that the third session of the day would be very good, being entitled “Software Process Improvement on a Shoestring.” It was, though not because it was actually about inexpensive process improvement approaches. The speaker, from ITT, admitted this was partly a ploy to get people to come to the session, but he did have ideas used at ITT which cost them little in formal budget terms. What made it work overall, though, was that the ITT division where he worked was led by a person who believed very strongly in improvement and mandated that the middle (functional) management delegate more technical responsibility to their immediate subordinates and personally take on the process improvement effort as “their job.” Within this context, they did a number of things on limited budgets.
First, they trained themselves by serious devotion to learning, in this case, the CMM using available materials. Then, they did a “thought experiment” assessment, going through the CMM, goal by goal, and honestly assessing whether they felt the organization did or did not meet each one. Where they felt it did, they documented how. Where it did not, they established action items, which they would pursue personally within the organization, to close the gap. Significant reductions in time to market, defects, and rework, as well as increased enthusiasm by the staff, led by management example, for continuing improvement, were all elements that made them feel the effort “paid for itself” within 18–24 months after starting.
At the same time as this talk was going on, one on developing synergy between SQA and SEPG groups was being given by the same organization that had established the center in Barbados. I got a copy of the handouts for this, and their suggestion for synergy was to have the roles performed by people under the SEPG organization — initially, even the same people — until the roles were defined formally enough and staff were trained to allow separation to occur, though keeping both groups under the SEPG management. The major benefits were:
· lower cost due to fewer people, overall, being needed (some companies have reported up to 10% of the staff in SQA and SEPG organizations; this company had about half that, but their total staff was only 300 people and experience suggests that the number needed to do adequate SQA and SEPG tasks does not rise strictly linearly with organizational size which may account for the 10% in larger companies);
· SQA group becomes associated with improvement, not just audit/compliance, functions and members are more welcomed into development areas once this dual function is made more clear.
The viewgraphs caution, though, that people experienced in SQA “may not fully understand the SEPG aspects of the job” and that the SQA job “may be neglected because of the SEPG load,” requiring that the SQA role be given “first priority” should that begin to occur. (At the end of the session I was attending, I caught up with the speaker from this one to verify these impressions, which seemed, essentially, accurate.)
The next talk was entitled “Team Excellence” and it attracted a large number of people, causing them to run out of handouts. When all was said and done, however, this talk was largely just about a different nomenclature and a slightly different organizational approach to an SEPG, emphasizing effective promotion (marketing) of the improvement program throughout the organization through very visible displays of goals and levels of achievement. This idea was noted by a couple other speakers in their explanations of how the improvement/metrics programs were handled, displaying charts and graphs of the various goals and performance each month. Since this data was at the product/group level, individuals were not being identified in this manner and responsibility for achievement was made more a management, than a technical staff, priority. (Generally, the technical staff become more enamored of improvement efforts since they begin to see how it is making their lives easier, i.e., more predictable.)
The third session of the afternoon had three topics of interest, so I made a quick stop at each to check out the handouts. The first, “Employing Process Mentors…” really focused on having each SEPG member take on a specific CMM Level 2 or 3 process (KPA) as their own responsibility, becoming an “educator,” “guide,” “consultant,” “mediator,” and “administrative support” person within the organization for that process. This is in contrast to everyone on the SEPG trying to cover the full range of the processes in any detail. The educational role was “identified as the primary task for process mentors,” spreading the knowledge of the KPAs throughout the organization in this fashion and creating a “virtual” SEPG membership larger than the actual staff which could be allocated to formal SEPG tasks. The viewgraphs do not suggest this, but in similar situations, I have seen this approach used to cross-train SEPG members as well.
The panel on “Why Organizations Have Assessments” addressed a wide variety of issues. I did not stay for the whole first part of this session, but did return during the next session as this topic was given two sequential sessions, given the large number of panelists (6). In general, focusing improvement attention on “what needs doing” was the main reason for having assessments, with a related side benefit of attracting management attention to improvement issues overall by giving them a “standard” against which to benchmark their organization(s). The main point of contention between the panelists seemed to be over the cost and formality of assessments. A representative from Israel Aircraft Industries suggested that mini-assessments did not tell them much, so they chose the formal assessment route, which they felt could only be justified, financially, every 2–3 years: it took 8 people about two weeks, costing them about $100,000 per assessment, including the time of the staff they had to interview and the write-ups and feedback sessions attended. Others reported much less cost, but the speaker from Israel claimed they were likely only counting the direct lead assessor and immediate assessment team costs, not the training and staff time. Some admitted this was true; others were silent on the matter. My experience leans more toward the Israeli speaker’s numbers, though 8 people for the assessment team is rather large: another talk’s viewgraphs which I got mentioned “hybrid” assessments where only two assessors are used rather than a whole team (the typical SEI-trained approach).
The final talk in this time slot was on a “Quantitative Approach to Quality System Audits.” It was given by someone I knew from Bellcore (whose name changed during the week to Telcordia Technologies) who had started as an internal auditor with the SQA group in which I worked for several years and moved to the development organization to take over the internal audit program as its manager. Bellcore’s internal program is a full-time job for several people: they have ISO 9001 TickIT surveillance audits every 6 months, have undergone two SEI assessments within the past 3 years, and have 140+ products in several business units to audit every 12–18 months (some more often). Their SQA and SEPG organizations are about 10% of the 3000-person development organization in Bellcore, but that includes audit, assessment, improvement, and metrics staffs. The latter, a couple of people, are tied closely to the internal audit program since data on audit results form a significant part of the measurement effort (and move to Level 4 of the CMM) being pursued at Bellcore. The important part of this whole audit/metrics effort is the public posting of organizational results compared to the company average. This catches middle management’s attention since everyone gets to see how everyone else is doing, encouraging a certain level of improvement “competition” in the organization — a deliberate intent of senior management, which holds organizations quite accountable for their improvement efforts.
This ended the two days of general sessions and I planned, at this point, to spend the next day at the Team Software Process tutorial. However, when I got to the room and took a look at the handouts, it was relatively clear that there was an assumption that the team had already been exposed to the Personal Software Process. The PSP is focused on what an individual developer can do to monitor and manage their own work using data they collect and study, applying as many CMM KPAs to the individual case as possible. I have some references to PSP material and there is a book which can be purchased as a “textbook” for pursuing the subject. Fundamentally, the TSP session seemed to be about taking developers who had experienced the PSP approach and making “self-directed” teams (2 to 20 members) out of them where they mutually supported one another’s efforts to self improve as well as to improve the overall process by which the team developed its software. The SEI is rolling out a more formal training effort in the TSP approach once they have piloted a few programs at companies who have supported the existing PSP program.
As the viewgraphs are relatively clear given an understanding of the PSP approach (which, while I have not formally taken the program, I am familiar with), I decided not to stay the day in this session and headed for the “Satir Sampler” session, which was about using Virginia Satir’s model of individual change and applying it to organizational situations, relating this to the work of the SEPG in process improvement efforts. In the afternoon, I thought I would look into the CMMI tutorial; however, during the lunch break, I learned it was really going to be a panel presentation which would cover administrative and review matters as much as any technical material. Since I work with one of the main “authors” of the CMMI on the US TAG, I decided to use him as a more direct source of the information that seemed relevant to my current work. I returned to the Satir program for the rest of the day.
The Satir session was quite good and quite relevant to the work an SEPG or any process improvement group might wish to pursue. Because this presentation was of great interest to me, I’d like to prepare a summary of it individually, rather than try to put it all into one paragraph; I think everyone would get a better idea of whether or not such a thing would be of interest if I did it this way. However, some of the ideas involve understanding the Satir model of change, which requires that people go through a period of “chaos” (discomfort over the change) until they get the sense of “what’s in it for them” (what Satir calls the “transforming idea”). One job of the SEPG, using middle management’s support, is to make this clear to the organization, especially by, as Deming called it, “driving out fear.” That is, creating an understanding of what “safety” means in an organization, i.e., what risk-taking means, and trying to eliminate key elements of risk aversion, leading, ultimately, to a better technical as well as cultural atmosphere. A second key idea is the development of personal “congruence,” which leads to organizational openness, allowing change to take place more easily. A final idea addressed a new way to handle staff (not necessarily technical project) meetings: the “temperature reading.” The agenda of such a meeting has on it items named “Appreciations,” “New Information,” “Puzzles,” “Complaints with Recommendations,” and “Hopes and Wishes.”
If this whole session sounds a bit “touchy-feely” for software development, realize that conference after conference, including the SEI’s SEPG one after many years, has begun to acknowledge the tremendous significance of corporate culture and “people” issues in the success of quality and process improvement efforts. Gerry Weinberg, many years ago, wrote a couple books called Understanding the Professional Programmer and The Psychology of Computer Programming — among many excellent books he has written over the years. Both emphasized the role of culture and “soft” factors in software development. Many feel these ideas led, in fact, to the development of the People CMM.
As I noted the other day, I wanted to provide a more complete summary of the Satir model tutorial I attended on the last day of the SEI SEPG Conference. Its formal title was “A Satir Sampler: Fostering Growth & Change in Organizations.” The “sampler” comes from the fact that the full Satir introductory course is a relatively intense week-long program from which this tutorial provided selected highlights. Virginia Satir pioneered the concept of family therapy, i.e., when an individual presents symptoms of social/psychological problems, it is necessary to look at the “system” in which they exist (e.g., the family). From this family approach the “system” idea has been expanded to organizational culture and individual “illness” as demonstrated through stress and difficulty dealing with change, i.e., inability to cope, appropriately, with problems in work and other social environments.
As I noted in my SEPG report, this may all sound a bit “touchy-feely” for software development; however, over the years greater acknowledgement of the significance of corporate culture and “people” issues in quality and process improvement success has been demonstrated. Gerry Weinberg’s books — in particular, Understanding the Professional Programmer and The Psychology of Computer Programming — emphasize culture and so-called “soft” factors in software development. Other books by Weinberg, such as Are Your Lights On? (about problem solving) and Exploring Requirements (about eliciting requirements effectively), include healthy doses of “people” issues and ways to deal with them in otherwise “technical” situations. Weinberg has helped promote Satir’s ideas in business organizational settings such as the Problem Solving Leadership workshops he and colleagues conduct in Albuquerque, NM. (The three presenters of this SEI tutorial were all students of Weinberg and one of Satir’s main students, Jean McLendon.)
What I think is one of the strong characteristics of this material is its focus on trying to extract the meaning and sense out of otherwise chaotic or rigid rules and situations. I have participated in presentations over the years that conduct similar kinds of workshop activities and similar “people-centered” approaches. However, they always seem to leave the impression of challenging the status quo, rather than trying to understand and transform it as constructively as one needs to do when taking on the role of “change agent,” i.e., when committing to remain in the situation and work for its improvement as opposed to “voting with one’s feet” and leaving. The Satir approach focuses on a transformational philosophy, i.e., changing what one finds uncomfortable or dysfunctional without rejecting what others may see as the value and benefit to them of the current situation.
To give us more of an idea of Satir’s “philosophy,” the presentation opened by reviewing some “Basic Beliefs” from Satir’s writings:
· Change is possible. Even if external change is limited, internal change is possible.
· We all have the internal resources to cope successfully and to grow.
· We have choices, especially in terms of responding to stress instead of reacting to situations.
· People connect on the basis of being similar, and grow on the basis of being different.
· The problem is not the problem; coping is the problem.
· People are basically good. To connect with and validate their own self-worth, they need to find their own inner treasures.
· We cannot change past events, only the effects they have on us.
· Most people choose familiarity over comfort, especially during times of stress.
· Human processes are universal, and therefore occur in different settings, cultures, and circumstances.
· Process is the avenue of change. Content forms the context in which change takes place.
· Healthy human relationships are built on equality of value.
· Hope is a significant component or ingredient for change.
There were some 200 people in the room at the start of the session. Given that it was the last day of the Conference, people’s travel schedules gradually diminished the size of the audience at each break and over lunch. However, about 40 people stayed to the very end, and I was encouraged to do so as well and to skip my other planned afternoon tutorial. When we engaged in group exercises, we were asked to break into “triads,” groups of three (or four, if needed), as the basic structural element of our interaction. Satir used groups of three as a basic family structure (mother, father, and child) in her work, and others retain this structure even for more “equal” group situations because of the “ever-shifting pattern of dialogs” which takes place, allowing one person always to be, even if momentarily, an observer of the interaction between the other two. Each person, therefore, “has many opportunities to be a giver, receiver, and watcher,” exploring the dynamics of interactions in this small, but representative, group situation.
The Satir Change Model, with its four stages and two transition points, is described below:
The Current (“Old”) Status Quo (stage 1) is the way things are, the familiar, for better or worse. The Foreign Element (transition 1) upsets the Status Quo, throwing the system out of balance, in large or small ways, for long or short periods of time, and for expected or unexpected reasons, usually as a result of something external to the system, e.g., addition or removal of a person from a team, a reorganization, a change in priorities. The Period of Chaos (stage 2) is the reaction to the Foreign Element, which is often an attempt to return to the Status Quo, the place of familiarity, by behaving the same way, hoping to make the situation the way it used to be, or by reacting overtly against the Foreign Element, as if to drive it away, leaving only the Status Quo. To move to some new equilibrium (the Next Status Quo) requires some Transforming Idea (transition 2), which is often a “what’s in it for me” realization, but the system must both be open to the idea and able to recognize it when it comes. Once acceptance of the change takes place, a period of Practice and Integration (stage 3) occurs when new knowledge and skills are incorporated into the system — failure to allow for this is one of the common mistakes when change efforts are initiated in organizations. Finally, the new “balance” is achieved in the system at the point of the Next Status Quo (stage 4).
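The four stages and two transitions lend themselves to a simple state-machine view. As a purely illustrative sketch (my own, not from the tutorial), the model might be encoded like this, with each transition naming the event that moves the system forward:

```python
# The Satir Change Model as a tiny state machine. Stage and event names
# follow the description above; the code structure is my own illustration.

TRANSITIONS = {
    "Old Status Quo": ("Foreign Element", "Chaos"),
    "Chaos": ("Transforming Idea", "Practice and Integration"),
    # Moving on from stage 3 takes time and practice, not a discrete event.
    "Practice and Integration": (None, "New Status Quo"),
}

def advance(stage):
    """Return (event_required, next_stage), or None once change is complete."""
    return TRANSITIONS.get(stage)

# Walk the model from beginning to end:
stage = "Old Status Quo"
while stage != "New Status Quo":
    event, next_stage = advance(stage)
    print(f"{stage} --[{event or 'practice/time'}]--> {next_stage}")
    stage = next_stage
```

The sketch makes the forward-only structure concrete: in the model, the only way out of Chaos is a Transforming Idea, not a return to the old Status Quo.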
Much of the rest of the presentation was about specific ways to address both initiating change effectively and dealing with it constructively, using the Satir model to guide understanding of the necessary stages and transitions to be addressed. How one manages change (“meta-change”) is the focus of much of the Satir material presented at this tutorial. One example of the Status Quo (be it in a family or a business situation) is the set of “rules” established for the system to function. Some are personally perceived rules; others are formally stated rules. The presenters offered an approach to transforming rules without doing so in a threatening way, i.e., without completely rejecting them and ignoring the possibly real conditions that may have caused them to come into being, and which may continue to support their existence, at least in the eyes of some in the organization. One such rule, which I’ll use as an example later from the tutorial, is the idea that “We will exceed customer expectations.” As a quality policy and goal, this is very common, but the tutorial exercises addressed the implications of this for its achievement as well as for improvement effort success.
An important element of group behavior, according to the presenters, is recognition of what “safety” within the group means. They used a scale from 1 (totally safe — I can do anything here and be accepted) to 10 (totally unsafe — I am afraid anything I do here may be unacceptable) and asked us all to close our eyes and raise our hands as they counted down from 10 to 1, selecting the point on the scale where we felt we were in that tutorial at that moment. At the end, when we opened our eyes, many of us were surprised to find at least 1 person each at 7 and 8, with several more at 6. Most of the room clustered around 2 and 3. The presenters indicated this was mostly an awareness exercise, but noted that those feeling unsafe usually have a sense that many more people feel much safer than they do, which, in one sense, heightens their own lack of safety, making them outcasts, of a sort, in the much larger group. What contributes to the sense of unease and lack of safety in organizations was an important element of how the Status Quo affects people: while it is not comfortable for many, it is familiar, so they learn to accept it and live within it, increasing the stress under which they function on a daily basis.
As I noted in the SEPG report, one job of the SEPG, using middle management’s support, is to “drive out fear,” creating an understanding of what “safety” means in an organization, i.e., what risk-taking means, and trying to eliminate key elements of risk aversion, leading, ultimately, to a better technical as well as cultural atmosphere. Recognizing that there are those who do not feel “safe” in a work environment can, by itself, be a Foreign Element that precipitates change, or at least the sense that change is required. As the tutorial suggested, it does not matter if “fears [regarding change] are well grounded in experience, or they are purely imaginary; the feeling is the same.” Without a sense of safety in the organization, the fears associated with change and the risks changes bring can bring on virtual paralysis, though it may manifest itself as anything from active opposition to laissez-faire “wait a while and it’ll go away” resistance.
The transformation of “rules” was addressed next to show how change can be structured in such a way that the Status Quo is not threatened so directly that it appears all historical sense is about to be lost and all prior sense of how to function and what value a person is to the organization is about to be reversed. Getting back to the example noted above, there are seven recommended steps to transforming a rule from a form that may leave people unable to truly embrace and cope with it, to something actionable and acceptable to most people. The presenters took the idea of “We will exceed customer expectations” to show what could be done with even so “good” sounding a rule.
The first step is to articulate the rule (“We will exceed customer expectations”) and realize that the kinds of rules we are discussing are those which may be considered “petrified,” either due to age or the compulsion/totality suggested by the rule. The next step is to identify consequences of not following the rule (e.g., customer dissatisfaction and loss of business, outsourcing of services to others, loss of jobs, relocation of business), seeking to see if the equivalent of “the organization will die if we don’t do this” is mentioned. (If not, the rule may not be as important as one imagines or may not be worded correctly.) The third step is for the people involved to “confirm that the rule has been governing their behavior. And then acknowledge that it has in some ways provided value.” This latter acknowledgement is, in my view, the step that distinguishes this approach from similar “challenge the rules” exercises in other organizational change approaches. It is important to identify the value that the rule has and what it is expected to do for the organization so that the transformation can maintain the value while removing the impediments to achieving, and improving, that value.
The fourth step is to change the rule from one of compulsion to choice (“We can exceed customer expectations”), then from certainty to possibility (“We can usually exceed customer expectations”). At this point, with this example, some folks reacted badly to the current form of the rule since it seemed to have eliminated the value. Indeed, even after this exercise, during a break, some people urged the presenters to avoid such an example because it would be rejected immediately in a business context, the phrase having taken on such iconic status as a quality theme. This, of course, was the purpose of the exercise: to examine such “do or die” Status Quo ideas to see what merit they have, and under what circumstances, so implementation can be effective rather than simply perfunctory, with people behaving as if the rule were being honored merely to preserve safety while the goals of the rule go unachieved.
The next step is to transform from totality to non-totality (“We can usually exceed most customer expectations” or “We can usually exceed the expectations of most customers”). This leaves the rule sounding very much weakened in the eyes of many; however, the next step is intended to restore some of the “teeth” that many people want in their rules. This is to change from the general to the particular (“We can usually exceed most customer expectations when:”) and identify instances and conditions when this is true, i.e., things that, if we are sure to do them, make it most likely that the rule can be followed. The list here got very long, and I can imagine it would be more instructive and enjoyable to do this step (with this or other rules) in actual practice than to read a list of what others suggested.
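For those who think in code, the wording steps (articulate, compulsion-to-choice, certainty-to-possibility, totality-to-non-totality, general-to-particular) can be mimicked with simple string substitutions. This is a hypothetical sketch of my own; steps two and three (consequences and acknowledging value) are discussion steps, so they appear only as comments:

```python
# A toy illustration of the rule-transformation steps from the tutorial.
# The function and its substitutions are my own approximation.

def transform_rule(rule, conditions):
    """Return each successive softening of a 'petrified' rule."""
    steps = [rule]                                         # 1. articulate the rule
    # 2. identify consequences of not following it (discussion, not code)
    # 3. confirm it governs behavior; acknowledge its value (discussion)
    steps.append(steps[-1].replace("will", "can"))         # 4. compulsion -> choice
    steps.append(steps[-1].replace("can", "can usually"))  # 5. certainty -> possibility
    steps.append(steps[-1].replace("exceed customer",
                                   "exceed most customer"))  # 6. totality -> non-totality
    steps.append(steps[-1] + " when: " + "; ".join(conditions))  # 7. general -> particular
    return steps

for step in transform_rule(
    "We will exceed customer expectations",
    ["requirements are well understood", "schedules are realistic"],
):
    print(step)
```

The substitutions are, of course, specific to this example rule; the seven steps themselves are the reusable part.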
The next step in the tutorial, after the lunch break, was to address some major “communications stances” people take in dealing with one another. Four were highlighted:
· The Blamer who reacts to threats of rejection by appearing strong and attacking others, but within this resides “the seeds of healthy assertiveness”;
· The Placater who wants to be liked more than anything, avoiding conflict and the word “No,” but within this resides “the seeds of genuine caring”;
· The Super-Reasonable who reacts by making relationships unimportant and detaching from the situation, often with cool, calm reasoning and “facts,” but within this resides “the seeds of wisdom”;
· The Irrelevant who may manipulate others through unstructured activity and behavior to get attention, but within this resides “the seeds of creativity.”
The point of this discussion is not that anyone is necessarily always one or the other of these, but that these are conditioned ways to address stress and resist change, and they require different ways of handling if beneficial value is to be gained from the individual(s) displaying such behavior. The goal is to reach a sense of personal “congruence,” for which the presenters used the example of much of martial arts training, where a great deal of the focus is on finding “one’s center,” i.e., an appreciation for what is true about one’s self and a confidence in being true to that, “driving out fear,” as it were.
The final major idea addressed after the afternoon break was a new way to handle staff (not necessarily technical project) meetings: the “temperature reading.” The agenda of such a meeting has on it items named:
· Appreciations — letting others in the group know you appreciate something they (or someone not present) did or said, explaining what it was and why it was appreciated. This starts the whole session out on a positive note and involves something surprisingly hard for people: giving, as well as accepting, praise from others.
· New Information — sharing something you know but believe others may not, but which would be helpful or important to them.
· Puzzles — things you do not know or understand, but hope someone else can help you with, either immediately, or off-line after the meeting.
· Complaints with Recommendations — raising issues that bother you, but also suggesting what can be done about it. (Having a proposed solution is important here, as opposed to the Puzzle section where you have no suggestion for the answer.)
· Hopes and Wishes — no matter how unlikely they may be, perhaps someone knows how to “grant” the wish or make the hope “come true” and this ends the session on a positive note, letting others know of your plans and desires.
The point of this was to suggest a way to transform typical group staff meetings into something more interactive, meaningful, but still informative, yet in a way which engenders team spirit.
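If a team wanted to try the Temperature Reading, the agenda is structured enough to capture in a small data structure. A minimal sketch of my own; the item names come from the tutorial, but the class and everything else here are hypothetical:

```python
# A Temperature Reading agenda as a simple ordered structure. The item
# names are from the tutorial; the code is my own illustration.

TEMPERATURE_READING_ITEMS = [
    "Appreciations",
    "New Information",
    "Puzzles",
    "Complaints with Recommendations",
    "Hopes and Wishes",
]

class TemperatureReading:
    def __init__(self):
        # Preserve the fixed order of agenda items.
        self.items = {name: [] for name in TEMPERATURE_READING_ITEMS}

    def add(self, item, note):
        if item not in self.items:
            raise ValueError(f"Not a Temperature Reading item: {item}")
        self.items[item].append(note)

    def agenda(self):
        return "\n".join(f"{name}: {len(notes)} note(s)"
                         for name, notes in self.items.items())

meeting = TemperatureReading()
meeting.add("Appreciations", "Thanks for last week's metrics charts")
meeting.add("Puzzles", "Why did the build break twice on Friday?")
print(meeting.agenda())
```

The fixed order matters: beginning with Appreciations and ending with Hopes and Wishes is what bookends the meeting on positive notes.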
With this, the tutorial was basically over; a few other items were discussed, including references for follow-up reading and training, but the essence of what went on is covered here. I think there are some ideas here that can be useful in process improvement efforts. Certainly eliciting the important organizational written and unwritten “rules” as they relate to development practices would be a place to start — like “We always use the TM-M methodology,” perhaps? The Temperature Reading is also a possibility, though the trust and honesty expected in it may not be an easy place for a lot of groups to start; it is not necessary that people bare their souls in any of this, just stretch them a bit, trying something not ordinarily experienced in work situations.
And as other SEPG sessions indicated, appreciating the role middle managers will play and how their traversing of the Satir Change Model can be eased will be important in process improvement as well.
As always, if any of this resonates with anyone, feel free to contact me about it.
SEPG 99 — Assumptions
This is a list of the assumptions provided by the participants of the tutorial Facilitating CMM Culture Change: Clarifying Communications, Expectations and Assumptions 8 March, 1999 presented by Kim Caputo and Michael Sturgeon.
We provided a list of assumptions that support CMM process improvement:
· Engineering discipline is required to build quality into products of large size and complexity
· One person can’t track all the details, and error detection is more probable when the work is examined by more than one person
· Our success is dependent on other groups and customers
· Process makes a difference in the quality of the activities and the quality of the products
· The organization uses Process Definition to transmit the culture’s quality values
· The projects use Process Definition to incorporate the culture’s quality values
· Surviving in a business world that is constantly changing requires constant adaptation and learning.
(Copyright 1998, CMM Implementation Guide: Choreographing Software Process Improvement by Kim Caputo, Addison-Wesley Longman, Inc.)
Then we asked for a response to the question:
“What’s it like in your organization? What assumptions in your organization interfere with these?”
The responses we heard are as follows:
Here are some ideas of how to respond when faced with these assumptions in your culture. Use the Positive Response Ideas below as a starting point for you to come up with your own ideas of what you can say to show people a different perspective about their assumptions.