Applications of Software Measurement and Software Management 2000 Conference Notes
I used to attend Agile and quality-related conferences years ago and have saved notes from many of them. While some topics have been overtaken by events since then, there still seem to be some useful ideas that I’d like to pass along.
In last year’s report, I mentioned that I hoped to be able to present either a tutorial or session (if not both) this year as well as, perhaps, become involved in organizing the event, i.e., running one of the technical tracks. Since I became involved as a national officer of the ASQ Software Division, I decided on the tutorial proposal/paper route and submitted one tutorial and two session proposals last Fall. While the tutorial was not accepted, both session proposals were. It turns out I was the only speaker at the Conference to have two session proposals accepted.
Attendance at the event, more than tutorials and sessions, is valuable due to the access to speakers and presenters. Unlike some conferences, ASM-SM requires speakers to stick around to the fullest extent possible and mingle with the audience at lunches, breaks, and during defined “Meet the Experts” sessions at the end of several days during the Conference. There is usually an “experts” panel of one sort or another where considerable audience interaction occurs. Thus, I always find it a very good opportunity to “consult,” as it were, with some of the key figures in measurement and metrics. Unfortunately, this year, a number of main speakers could not stay very long. Several of us urged that they try to prevent this from happening next year.
Keynote Address by Watts Humphrey (SEI) on “Changing the Software Culture”
Humphrey is the founder of the S/W Process Program at the SEI. The SEI is a DoD-funded organization hosted by Carnegie Mellon University in Pittsburgh. Its mission is to pursue best practices in a variety of subjects, one of which is software process improvement. Another area is software metrics analysis. Humphrey is now an SEI Fellow, which means he travels around the world speaking and works on less day-to-day issues associated with the SEI. This talk was not what I usually expect from him: it relied much less on just numbers and charts and was delivered more smoothly than many of his talks/tutorials that I have heard in the past.
The talk addressed two things: making a case for the importance of improving individual software development capability and then suggesting use of the SEI’s Personal Software Process approach as a way to do this. Watts’ rationale for the hierarchy of business, technology, quality, and people was very interesting material. The PSP is a very good, though rigorous, approach to making individual software developers aware of their estimation, scheduling, and quality practices through use of data from specific programs they are asked to write (in any language of their choice). I have heard PSP tutorials in the past and Humphrey has a book published which is used as the text for the program.
The first point, about making the case for paying attention to staff training, was the major topic of interest for me as this is not the way Humphrey usually presents the rationale for getting to the PSP. Business capability is what a lot of senior management are concerned with, but Humphrey says that can only be achieved, long-term in the software world, through effective technology products being deployed to the market. A substantial aspect of the effectiveness of such products, though, is the actual quality of the product. (Features and the like do count, but, in the end, they have to work for the product to retain market share.) This brings Humphrey to the point that focusing on having well-trained, motivated people is the key to the whole hierarchy.
He points out that he has had experience with many high maturity organizations. He notices that, though many management practices change with regard to planning, scheduling, monitoring, and measuring whole projects, the life and behavior of the average developer/tester stays much the same in terms of how they personally behave. They may have to produce new and different deliverables, etc., but largely they do their work as they always have. This is what led him to the PSP approach as a way to take the organizational principles from the S/W CMM and apply them, where reasonable, to how an individual would plan and conduct their own work.
Invited talk by Les Hatton (Oakwood Computing, UK) on “Why Linux is so Reliable”
This was somewhat less useful than Watts’ presentation, but still had some good information regarding the culture behind open software development. Lots of stuff has been written about this as some model for commercial software and I think this talk makes it clear how hard that would be. If you have real clients with real requirements and real expectations, rather than satisfying the desires of a few technology-oriented folks, this approach could be hard to maintain. But the talk made clear how one could leverage individuals’ personal quality commitments.
Hatton’s approach was to point out how much more reliable the Linux operating system environment (and Unix, in general) seemed to be than most desktop systems based on Windows (’98 being the least reliable with NT being the best and ’95 in between) or the Mac (between ’95 and NT). Indeed, it was 10–100 times more reliable if we measure reliability as Mean Time Between Failure, i.e., how long between crashes and reboots. He notes that Microsoft gets around this by speaking of system “availability” as 99.6%, or something like that, expressed in minutes of operational time. But this means a crash per day, on average!
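To see why a 99.6% availability figure still implies roughly a crash per day, here is the back-of-the-envelope arithmetic (the five-minute reboot time is my assumption for illustration, not a figure from the talk):

```python
# Back-of-the-envelope check: what does 99.6% availability imply?
# Assumption (mine, not from the talk): each crash costs ~5 minutes of downtime.
minutes_per_day = 24 * 60                                # 1440 minutes
availability = 0.996
downtime_per_day = minutes_per_day * (1 - availability)  # ~5.76 minutes/day
reboot_minutes = 5                                       # assumed recovery time per crash
crashes_per_day = downtime_per_day / reboot_minutes      # ~1.15 crashes/day

print(f"Downtime/day: {downtime_per_day:.2f} min -> ~{crashes_per_day:.1f} crashes/day")
```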
What makes Linux work, especially when it comes to addressing problems that are found, is the unique organization of the volunteers all around the world who are involved in its ongoing enhancement. There is no management, but there is a central core of 10–15 experts who have risen to that level through peer acceptance and observed performance skill. Around them is a group of 5–10 times as many folks who make many of the actual changes to the software under suggestions and guidance from the experts. And around them are another 5–10 times as many folks who formally use and test out the software, but make no changes. Beyond that are thousands of users who, of course, report problems as well, but there is no formal number to call or address to mail. Things get posted on the Net and circulated until one of the surrounding layer of folks takes up the bug, reproduces it, and reports her/his more detailed findings to those who look into fixing the issue.
This all works wonderfully, except for some things that would make commercial use of this approach hard. For one thing, the experts decide whether a bug will or will not be fixed based on their view of whether the system as a whole needs to support such things. There is no formal requirements process and no way for a user to really expect any issue should be fixed. The software is distributed for free, though some commercial sources exist which provide it with some application level enhancements for a small cost ($30–$50 for what amounts to thousands of dollars of software).
Many important systems (like many key hubs in the Internet) are based on this software and more and more seem to move to it each year. The lack of a central place to go and the ability to exercise some pressure when a user wants something done, though, makes commercial expectations for this model of development unlikely. However, the programming team model of experts surrounded by trusted developers who are further surrounded by diligent, concerned users/testers can’t be argued with.
Invited talk by Ram Chillarege (IBM, Watson Research) on “Managing Business Dynamics for Software Developers”
I found this about equal to Watts’ presentation at the outset since Ram discussed the impact of the software product lifecycle on development process/practice. Fundamentally, Ram highlighted the differences in development practice and quality concerns as a product goes from entry to growth to market stability to legacy status. The critical success factors for a product change along the way, moving from speed to market, to overall functionality (as competitors emerge), to dependability (return on investment beginning), to support (as less new features are provided and more software budget is involved in just responding to field issues from clients).
As the presentation got to the end, it struck me that it lost some of its focused flow, and it was hard to relate the visuals (which seemed to be reused from some other presentation) to the initial lifecycle/practice theme.
Invited Talk by Ann Miller (Univ. of Missouri-Rolla, formerly from Motorola) on “Design and Test of Large-Scale Systems”
I did not find this presented effectively or useful. At least it was not what I expected for a keynote. The title sounded appealing, but, as Ms. Miller originally came from a position as an assistant secretary in the Department of the Navy, I found the approach very much skewed toward that environment.
Keynote by Barry Boehm (USC, formerly DoD and TRW) on “Software Cost Management with COCOMO II”
Boehm is good in smaller groups, but I have not found him to be an effective speaker to large audiences over the years. He did provide a good overview of COCOMO II and its underpinnings, but I did not see how this really related to managing costs. That is, how one would use the COCOMO II output to effect changes in-process was not made clear. As a talk about the elements and derivation of COCOMO II, it was good. As a talk about managing software costs, it was not. So, though I found the former useful, my own knowledge of COCOMO over the years was not increased much by this presentation.
Michael Hovan (Bayer Corp.) on “Implementing Metrics in a Level 1 Organization”
This was not bad as a model for going about implementing metrics, but I’m not sure how the Level 1 aspect of it fit in. The speaker explained his meaning and, I guess, implied his organization was Level 1. However, he seemed to have a good bit of support for what he was doing, including $300K in budget, which suggests management, somewhere, was better than what I think most Level 1 places experience. He did, though, note points of resistance to what he was trying to do, but I did not get a sense of what the key points or issues were.
Lee Fischman (Galorath Inc.) on “Function Point Counting for Mere Mortals”
There seemed to be some underlying “current” here between the speaker (and the company he represented) and several folks in the audience, who seemed to represent the more traditional IFPUG approach to FPs. I was expecting a talk that would take regular FPs and explain how to do it more easily. Instead, the speaker just renamed some stuff, added a few things, and proposed that folks learn to count things that way, i.e., another form of the same thing. I have not checked the CD yet to see if there is a paper behind this talk. The VGs don’t make it enticing to try. The speaker did seem to have a few things that were “improvements” to traditional FP counting, etc., but they were just a few of the many things being proposed. I think the issues in this talk were too much for a 45 minute session.
Invited talk by Mark Paulk (SEI) on “Practices of High Maturity Organizations”
I always enjoy listening to Mark as he does a good job of delivering the material as well as making sense out of the data. This talk was no exception. For example, he pointed out that the way a company organizes itself (e.g., matrix, hierarchy) seemed to have no serious impact on the ability to reach a high level of software development capability. Most high maturity organizations, though, were also registered to ISO 9001 while many had active TQM programs throughout the company. Most of them also regularly measure customer and user satisfaction and try to “do the right thing” (in terms of adhering to their sense of quality and process) even if customers resist.
From a project management perspective, most high maturity organizations use some cost model to estimate their projects. Many use earned value and Delphi techniques to track and estimate work. From a quality and product assurance perspective, they almost all have independent SQA groups and embed the SQA function within the development process as well. That is, process assurance is done independently while product assurance (which goes well beyond testing) is part of how roles and responsibilities are defined within development organizations.
When it comes to how they make process information and models available, they almost all use an intranet with web-based materials. Many have extensive process automation and data collection. Some actively do dynamic modeling of their processes to assess change impacts. Few of them, though, use a very formal language or notation for expressing/defining processes.
Mark noted a “wide range of rigor” when it came to new employee training/acclimation to the companies. At one extreme were several companies in India that had 8-week training programs for new hires plus mandatory continuing education. At the other were companies with no such programs but very active mentoring, i.e., much more than “Here’s the bathrooms and there’s the cafeteria. Call me if you have any questions.”
Surprisingly, most high maturity organizations are using control charts (XmR versions). Many use prediction intervals and cost of quality measures as well.
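For the curious, an XmR (individuals and moving range) chart boils down to a simple pair of calculations. Here is a minimal sketch using made-up weekly defect counts and the standard XmR scaling constants (2.66 and 3.268); nothing here is from Mark’s data:

```python
# Minimal sketch of XmR (individuals and moving range) control limits,
# the kind of chart high maturity organizations were reported to use.
# Constants 2.66 and 3.268 are the standard XmR scaling factors.
def xmr_limits(data):
    n = len(data)
    x_bar = sum(data) / n
    # Moving range: absolute difference between consecutive observations.
    moving_ranges = [abs(data[i] - data[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {
        "center": x_bar,
        "upper_natural_limit": x_bar + 2.66 * mr_bar,
        "lower_natural_limit": x_bar - 2.66 * mr_bar,
        "mr_upper_limit": 3.268 * mr_bar,
    }

# Example: weekly defect counts (invented numbers for illustration).
limits = xmr_limits([12, 15, 11, 14, 13, 16, 12, 15])
print(limits)
```

Points falling outside the natural limits signal a process change worth investigating rather than routine variation.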
Mark noted that such organizations find their biggest challenge to be “protecting their process maturity during organizational restructuring — buyouts, mergers, acquisitions, rapid growth, etc.” These often result in supportive management moving on or being replaced by management which does not willingly support the position the software organization has achieved.
Keynote by Jeannette Horan (VP of Product Development, Lotus Development Corp) on “Using Software Measurement to Effect Change”
Not a bad session, but I was more interested in speaking to Ms. Horan after the talk since I am very much involved with folks using Lotus Notes for a variety of process workflow purposes. Ms. Horan described the Lotus Notes based issue tracking and metrics collection system put into their development environment. They have locations around the globe and use the same system. Interestingly, while they are the product developers, she admits they are system folks, not experienced in application development with their own product. They turned to the consulting side of Lotus to actually build the application and they expect them to maintain it for them.
I guess I got lost in the actual nature/design of the issue tracking/metrics DBs. I never did get the sense of what kind of change was really accomplished, though it was clear some had to occur just to get the DBs used at all. As I said, I was awaiting the chance to speak to her after the presentation.
When I got that chance, I asked about reporting from Notes databases and prototyping Notes applications without a full-blown Domino Designer license/environment. She emphasized that many third-party folks and Lotus partners work in those areas and encouraged looking into such sources rather than in-house development. I will be back in touch with her about some other issues related to our experiences with Notes, too.
Sandee Guidry (DoD DFAS) on “Estimating and Tracking Software Size Without Lines of Code or Function Points”
This was my favorite talk of all the presentations by non-invited speakers. It was very much in line with my own presentation that emphasized metrics which do not involve counting code or function points. Basically, DFAS was doing maintenance and enhancements on a large mainframe application base related to payroll for civilians who work for the Department of Defense. They have taken an approach to estimating project size based on complexity of individual configuration items, which ones are affected by individual new requirements, and a gross estimation of how much change to each item each new requirement will involve.
From the complexity perspective, they identified each kind of configuration item and, based on experience with them, assigned a value from 1 to 20 as its complexity index. The average of all of their items was about 6, so things were definitely lumped at the low end of the scale. This included a lot of documentation and text deliverables. Many code deliverables were of higher complexity, though very few were really above 10.
From a level of change perspective, they have values from 1 to 8 based on expert experience with change in the past. A “1” means very little change while an “8” means a rewrite (100% change).
When new requirements come in, each one is examined to determine what specific configuration items will have to be changed to meet/document the requirement. Depending on the type, the complexity is derived for each specific item. Then, for each specific item, the level of change is determined. These two values, for each item, are cross-indexed on a matrix DFAS has developed over the past few years. The intersection of the complexity value and the level of change value gives an effort estimate for the work. Adding up these effort values for every item to be changed for every requirement produces a total effort estimate for the whole group of requirements in the enhancement effort. From this, scheduling and staffing can be derived.
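The lookup-and-sum scheme above can be sketched as follows. The matrix values, item tuples, and requirement names are all invented for illustration; DFAS’s actual matrix was tuned over years of data:

```python
# Hypothetical sketch of a DFAS-style estimate: for each configuration item
# touched by a requirement, cross-index its complexity (1-20) and level of
# change (1-8) in an effort matrix, then sum over all items and requirements.
# All numbers below are invented placeholders, not DFAS's tuned values.

def item_effort(complexity, change_level, matrix):
    """Look up effort (person-hours) for one configuration item."""
    return matrix[(complexity, change_level)]

def total_effort(requirements, matrix):
    """Sum effort over every (complexity, change_level) pair each requirement touches."""
    return sum(
        item_effort(c, lvl, matrix)
        for items in requirements.values()
        for (c, lvl) in items
    )

# Toy matrix: effort grows with both complexity and amount of change.
matrix = {(c, lvl): c * lvl * 2 for c in range(1, 21) for lvl in range(1, 9)}

# Two new requirements, each touching a few configuration items,
# given as (complexity, change level) pairs.
requirements = {
    "REQ-1": [(6, 2), (9, 3)],
    "REQ-2": [(4, 1), (12, 5)],
}
print(total_effort(requirements, matrix), "person-hours estimated")
```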
Sandee reported that their estimates average about a 6% difference from the actual effort/schedule. This is 3 times better than the average estimation tool produces before being “tuned.” Of course, they spent a great deal of time “tuning” their matrix, trading staff time/effort for tool costs. (Interestingly though, they still apparently count Function Points as well.)
I will definitely be in touch with Sandee and try this with ISD since they expressed interest in this approach. I’m sure we’ll have to adjust things, but the idea of assigning configuration items along a complexity scale in combination with a scale for change impact related to given requirements is an appealing approach to explore.
Johanna Rothman on “What to Do When the Right Person Doesn’t Come Along”
I think the talk should have, perhaps, focused on a somewhat smaller set of options. But the visuals were very clear and I think referencing them again will help. Johanna’s talk was focused on whether to wait when the ideal candidate does not show up or try some other approach that might allow a less than ideal candidate to be hired.
Basically, she recommended two approaches:
Hire with the knowledge that some skills will have to be developed in-house, and do that in a very deliberate manner, not just hoping a person “catches on fast enough.”
Change something –
the work, by acquiring technology to replace human effort;
the person expected to do the work, by freeing up a more experienced person and having the new people, if hired, apprentice under them;
the project lifecycle, by a variety of testing and development methods;
job responsibilities, by having certain groups do more or less of something they were (or were not) doing before to take advantage of skills already in the organization (or those that could be developed quickly).
Johanna gave a lot of examples of specific instances of what she meant. Many were not on the presentation vugraphs, but one can probably get the idea of what she meant, in general, from what was present. Johanna’s talks are very much people management ones since she has been a project manager on a lot of small projects in smaller companies (both as an employee and as a consultant).
After Action Session by Robin Goldsmith (GoPro Mgt)
I usually stay for Robin’s session. Each year, after the regular Conference sessions, Robin does an “after action” session. I will say, though, that it is sometimes hard to be sure of its real goal. It starts out as though it’s going to be a recap of the week and a discussion of how people responded to ideas they heard, etc. Eventually, though, it becomes clear it’s intended to be a session on how to get attention for metrics, etc. That isn’t bad in and of itself, but I think the feedback from the audience should be pursued more. Holding this in a much smaller room, given the small attendance, would really help the interaction. People are usually spread out around half a ballroom of seating. (It’s given after lunch on the last day, so most people have already headed out for planes.)
What Robin tries to do is ask people what they heard during the week and what they thought about it, i.e., did it match their experiences or did they think they could apply it. He then begins to address some ways to “sell” metrics and measurement through financial expression of what it can mean. He uses earned value and cost of quality models to do this.
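As an illustration of the kind of financial framing earned value provides, project status reduces to a few ratios and variances. All the dollar figures below are invented for the example:

```python
# Illustrative earned value calculation (all dollar figures invented).
# PV = planned value of work scheduled to date,
# EV = budgeted value of work actually completed,
# AC = actual cost of that work.
planned_value = 100_000
earned_value = 80_000
actual_cost = 90_000

cpi = earned_value / actual_cost      # cost performance index (<1 = over budget)
spi = earned_value / planned_value    # schedule performance index (<1 = behind)
cost_variance = earned_value - actual_cost
schedule_variance = earned_value - planned_value

print(f"CPI={cpi:.2f} SPI={spi:.2f} CV=${cost_variance} SV=${schedule_variance}")
```

Expressing slippage as dollars of variance, rather than defect counts or process terms, is exactly the kind of translation that gets management’s attention.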
That about covers the conference. I have the Proceedings and would be glad to talk to anyone in further depth about any sessions as well as pursue speakers for more information and their web site addresses (many of which I know are buried in the Proceedings material).
IDEAS FOR NEXT YEAR
I think some panel presentations would be good. I would not restrict this just to the “experts,” either. Some tracks had very similar material from session to session. Perhaps taking some closely related talks (like the ODC and defect analysis stuff this time) and making a double-session panel would be better than hearing the same overview material 3–4 times and ending up with, perhaps, 15–20 minutes of specific information.
Some topics I’d like to see encouraged for panels or invited talks would be:
In-the-Large vs In-the-Small application of metrics, QA, process models — I’m tired of hearing complaints about how folks feel they “can’t do CMM” or apply metrics or whatever in-the-small when I know that size really has nothing to do with it. Sure, if you’re talking about formal assessment, registration, etc., that brings in a host of costs and effort, but to just take some of these practices and apply them is not a size issue. In fact, I find it easier in-the-small. At least, when it is going to be hard, you know up front what the issues are and corporate politics do not shield the real stuff.
Initiating Metrics and QA with Little or No Resources — When one gets to implementing (and not just in-the-small) some of the practices and methods, cost is always an issue. If you have a few willing people and no worse management posture than “benign neglect,” I think you can start to do quite a bit of useful stuff. I’d like to hear more about how folks do this, no matter what the size of the organization. Frankly, I think a lot of attempts which do have funding behind them go south because of the resource “glut” which precedes any real, in-the-trenches experience trying things. They get “top down support” and tend to try to implant some “program” on folks rather than find out what line staff and management need.
Project Estimation without Expensive Tools and Methods — Having used most of the major tools and methods myself, I know they can work, but the start-up (and annual licensing) costs can be prohibitive. I’d like to hear more from folks who have developed in-house estimation approaches that can be tried without such large commitments of time and money. I know this flies right in the face of half the vendors at the EXPO, but I think this is what many people need. At least I think it is a way to get started and to experience data collection and analysis so that application of greater resources will be more effective and successful later.
[I did not attend the next year. You can, however, find links to some presentations for these conferences using this Google search: https://www.google.com/search?q=applications+of+software+measurement+and+software+management+conference&oq=applications+of+software+measurement+and+software+management+conference&aqs=chrome..69i57j69i60j69i61j69i60.14332j0j15&sourceid=chrome&ie=UTF-8.]