SOFTWARE METRICS: BEST PRACTICES FOR SUCCESSFUL IT MANAGEMENT
by Paul Goodman
- - - - - - -
Published by Rothstein Associates Inc.
Available: July, 2004
- - - - - - -
ENDORSED BY THE EUROPEAN SOFTWARE PROCESS IMPROVEMENT FOUNDATION
The European Software Process Improvement (ESPI) Foundation is pleased to endorse this
book. We see it as a valuable and helpful contribution to the worldwide challenge to achieve
real and measurable process improvement within the software engineering industry. Only by
sharing experiences, both positive and negative, can we hope to accelerate the improvement
of development and support activities within our industry. The author is well known to us at
the Foundation, having spoken at our annual conference and other ESPI events on numerous
occasions, and we can genuinely state that this book positively contributes to the sharing of
experience that we exist to promote.
- Tony Elkington, Director
- - - - - - -
If you have any responsibility for applying measurement to IT Application Development,
Application Support or Outsourced Service supply in these areas, this is the book for you!
You may be the Sponsor of a Software Metrics program, responsible for implementing such
a program or part of a Benchmarking initiative. This book can help you avoid the pitfalls
inherent in such programs so that you make your program a success.
SOFTWARE METRICS: BEST PRACTICES FOR SUCCESSFUL IT MANAGEMENT will
give you a comprehensive introduction to the subject area. Beyond this, the book provides a
wealth of useful case study information and gives a wide range of useful, practical
measurement models, based on years of experience across many industry sectors, that you
can start to use today.
This book is unique in that, as well as giving the technical, subject background necessary to
make Software Metrics work, it presents a full lifecycle for measurement program
development and implementation. This lifecycle breaks the whole, complicated problem of
getting a measurement program up and running into manageable phases, each one defined
and described in detail in easy-to-follow terms.
Even Function Point Analysis, the breaking point for many a program, is demystified and
placed into the context of what it does and how it can work for you.
All of this is presented in an easily understood style that assumes no prior knowledge, while
the author's experience, gained over twenty years of measurement practice in the IT arena,
means that even experienced practitioners will gain from reading this book.
- - - - - - -
TABLE OF CONTENTS
SECTION 1 INTRODUCING SOFTWARE METRICS
1 SOFTWARE METRICS: WHAT AND WHY?
1.1 Definition of Software Metrics
1.2 Areas of Application
1.3 Principle Number One: Pragmatism and Compromise
1.4 Principle Number Two: Measuring People - Don't!
1.5 Principle Number Three: Modeling = Simplification
1.6 Principle Number Four: Ask Not for Whom the Bell Tolls Ask
Figure 1.1 Test Defects Report
Figure 1.2 Field Defects Report
1.7 Principle Number Five: "The Sum of the Whole Is Greater than the Parts"
1.8 Principle Number Six: Culture Shock!
2 AN OVERVIEW OF FUNCTION POINT ANALYSIS
2.1 So, What Is Function Point Analysis?
2.2 Using Function Point Analysis
2.2.1 Performing an FPA Exercise
2.2.2 Defining the System Boundary
Figure 2.1 FPA Example
2.3 Complexity Assessment
3 SOFTWARE METRICS: MANAGEMENT INFORMATION
3.1 What Is "Management Information?"
3.2 Why Do We Need Management Information?
3.3 Collecting the Data
3.4 Requirements for Information
3.5 Some Portable Models
3.6 What about Usability?
4 COST ESTIMATION
4.1 Cost Models and Beyond
4.2 Why Do We Estimate?
4.3 Some Basic Principles
4.4 Old Data: Look in the Bin
4.5 Models and Tools Revisited
4.7 Techniques for Estimation
4.8 A Strategic Template for Cost Estimation
4.9 Modified Delphi Technique
4.10 Bozoki's Ranking Technique and PERT
4.11 "Bottom Up" Estimating or Functional Decomposition
4.12 Estimation by Analogy
Figure 4.2 Role of the Support Group
Figure 4.3 Typical Process Diagram
Figure 4.4 Monitoring and Feedback (Actual vs. Estimates)
Figure 4.5 Monitoring and Feedback (Estimate vs. Effort)
Figure 4.6 Monitoring and Feedback (Variation Actual Against Initial
4.13 What about the Leading Edge, Big Projects?
5 APPLIED DESIGN METRICS
5.1 What Is Complexity?
5.2 McCabe Metrics
Figure 5.1 McCabe Metrics Background Theory - Example of
Figure 5.2 McCabe Metrics Background Theory - Example of
Figure 5.3 McCabe Metrics Background Theory - Example of Reduced Flowgraph (1)
Figure 5.4 McCabe Metrics Background Theory - Example of Reduced Flowgraph (2)
5.3 Information Flow Metric
Figure 5.5 Aspects of Complexity
6 PROJECT CONTROL
6.1 Feasibility Checking
6.2 Risk Management
6.3 Progress Monitors
SECTION 2 BUILDING AND IMPLEMENTING A SOFTWARE METRICS PROGRAM
7 A LIFECYCLE FOR METRICATION
7.1 The Lifecycle Model
Figure 7.1 Software Metrics Initiative Context Diagram
Figure 7.2 Level 1 Lifecycle Model
8 STAGE 1 - INITIATION
Figure 8.1 Initiation Stage of a Software Metrics Program
8.1 The Initial Management Decision
8.2 Assign Management Responsibility
8.4 "We Need a Plan!"
8.5 Subject Familiarization
8.6 Initial Market Research
8.7 Presenting the Results
8.8 Make It a Success!
9 STAGE 2 - REQUIREMENTS DEFINITION
9.1 Things to Remember
Figure 9.1 Relationships Between a Standard Lifecycle, Software
Metrics and Project Management
9.3 A Common Frame of Reference
Figure 9.4 Functional Linkage
Figure 9.5 Traditional Organization Hierarchy
Figure 9.6 Work, Task and Linkages Within the Requirements Stage
9.4 Initial Publicity Campaign
9.5 Customer Identification
9.6 Market Identification
9.7 Establish User Interface
9.8 Identify Potential Super Champions
9.9 Capture Information Requirements
9.10 Establish Initial Definitions
9.11 Identify Available Data Sources
9.12 Identify Storage, Analysis and Feedback Requirements
9.13 Consolidate Requirements
9.14 Specification Review
10 STAGE 3 - COMPONENT DESIGN
Figure 10.1 Tasks and Links Within Design Stage (Showing
Dependencies Between Streams)
10.1 Pilot Projects
10.2 Metrics Definition Stream
10.3 Model Definition or Goals, Questions, Metrics
10.4 Administration Design Stream
10.5 Map Base Metrics to Available Data
10.6 Establish Links to Data Administrators
10.7 Define Data Collection Mechanisms
10.8 Design Storage, Analysis and Feedback Mechanisms
10.9 Marketing and Business Planning Stream
10.10 Prepare a Business Plan
10.11 Prepare a Marketing Plan
10.12 Infrastructure Design Stream
10.12.1 Define the Infrastructure
Figure 10.2 Example Organization of Staff Support Infrastructure
10.12.2 Define Support Training
10.12.3 Drawing the Streams Together, or Consolidation
10.13 Moving the Design Forward
11 STAGE 4 - COMPONENT BUILD
Figure 11.1 Building the Components of a Software Metrics Program
11.1 Laying the Foundations
11.1.1 Select the Implementation Champion and Group
11.1.2 Launch Planning and Pre-launch Publicity
Figure 11.2 Targets
11.1.3 Build the Program Components
11.1.4 Document Techniques
11.1.5 Prepare Training Material
11.1.6 Build the Metrics Database
11.1.7 Build the Data Collection Mechanisms
11.2 Review Built Components
11.3 The Final Countdown
12 STAGE 5 - IMPLEMENTATION
12.1 A People-oriented Issue
12.2 The Launch
12.4 Summary: Closing the Circle
13 SECTION 2 - A SUMMARY
Figure 13.1 A Project-Based Approach
Figure 13.2 Topic Scope
Figure 13.3 Basic Strategy
Figure 13.4 Initiation Stage
13.2 Requirements Specification
Figure 13.5 Requirements Specification
Figure 13.6 Component Design 1
13.3 Component Design
Figure 13.7 Component Design 2
Figure 13.8 Component Design 3
13.4 Component Build
Figure 13.9 Component Build
Figure 13.10 Implementation
Figure 13.11 Implementation 2
13.6 A Recipe for Success
14 ALTERNATIVE APPROACHES TO METRICATION
14.1 Phasing or Scope Variation
14.2 In by the Back Door
14.3 Hitching a Ride
14.4 Hard and Fast
SECTION 3 GENERAL DISCUSSION
15 THE HOME STRETCH
15.1 SEI Assessment
Figure 15.1 The CMM Process Maturity Framework
15.2 Other Measurement-based Techniques
Figure 15.2 Performance
16 CLOSING THOUGHTS
APPENDIX A: USEFUL ORGANIZATIONS
ABOUT THE AUTHOR
- - - - - - -
EXCERPT FROM THE FOREWORD
by Andrew Hiles FBCI, MBCS
I have had the pleasure of working with Paul Goodman, on and off, for almost ten years. My
first contact with Paul was with a client in the Netherlands, working together on what was
Europe's biggest ISO 9000 project for Information Technology. Paul was helping the client
with Software Performance Metrics while I was developing Service Level Agreements
(SLAs) and groping for effective performance metrics for a software development SLA.
As a past developer myself, I had an idea but was not sure whether or not it would be viable.
With Paul's input (and a particularly brave Applications Development Manager!) the result
was highly successful.
It is this pragmatic approach, coupled with a huge depth of practical experience, that has
earned Paul the commanding heights of the science of software performance measurement.
The topic is important enough: IT software projects are high-risk activities.
For over ten years, the sad statistics on IT project failure have barely changed. Survey after
survey shows that over half of all IT projects fail (especially large projects) and most projects
are delivered late and over budget. Accurate time and cost forecasting, based on sound
performance measurement metrics, could reduce the number of project failures by helping to
create more realistic cost/benefit cases. And better software quality would help, too.
Hugh W. Ryan, in an article for Outlook Journal, summarized research that showed:
- Only 8 percent of applications projects costing between $6 million and $10 million succeed.
- Among all IT development projects, only 16% are delivered to acceptable cost,
time and quality.
- Cost overruns from 100% to 200% are common in software projects.
- Cost overruns for IT projects have been estimated at $59 billion in the United States.
- Another study puts the figure at $100 billion.
- IT workers spend more than 34% of their time just fixing software bugs.
The PMI Fact Book is even more pessimistic: it says the United States spends $2.3 trillion a
year on projects and that much of that money is wasted because a majority of projects fail.
- Standish Group International found that only 28% of information technology
projects are completed successfully.
- A survey of 1,027 projects by the British Computer Society, the Association of Project
Managers and the Institute of Management concluded that only 130 (12.7%) were successful.
- Only 2% of the successful projects were development projects, yet over 50% of the
projects reviewed were development projects.
- A PriceWaterhouseCoopers survey found that, in the UK alone, over $1 billion a
year was being wasted through poor software quality.
- Cap Gemini Ernst & Young reports that 70% of Customer Relationship
Management (CRM) strategies fail.
- META Group reports 90% of enterprises cannot show a positive return on CRM.
- Peppers & Rogers reports that nearly 80% of CRM projects fail to show a positive return.
- Gartner research predicts that over 50% of CRM strategies will continue to fail.
- Only around half of Enterprise Resource Planning systems are deemed successful.
- According to the Cranfield School of Management, the more ambitious the return
on investment for the project cited in the business case, the more lacking the project plan is
likely to be.
Imagine the results if less than half of all passenger aircraft flying arrived at their destination!
Badly estimated software projects can waste serious money: money that could otherwise be
invested in mission achievement, new product development and creating a competitive
edge. Poor software quality can lead to shoddy or even dangerous products. It can cause
public relations and marketing disasters, damaging brand reputation and market share.
While use of effective software metrics is not the total answer, it is certainly a crucial part of
getting to grips with software development.
Paul Goodman's book makes a valuable contribution to IT Development Project success. It is
comprehensive, lucid and packed with illustrations and practical examples. This makes it as
accessible to the non-specialist as it is to the software guru. I commend it, not just to
developers, but also to:
- IT Project Accountants
- IT Development Project Accountants
- IT Development Project Managers
- Risk Managers
- IT Service Delivery Managers and all those responsible for development,
implementation and management of Service Level Agreements
- Disaster Recovery and Business Continuity Managers
- Business managers playing a role in the initiation, authorization and development
of IT software projects.
Andrew Hiles, FBCI, MBCS, Director
Oxford, United Kingdom
- - - - - - -
EXCERPT FROM THE INTRODUCTION
More years ago than I care to remember, let us say twenty-five to thirty, Software Metrics was
a curiosity confined to a few university researchers and one or two industrial or commercial
organizations. Now it is a well-established discipline with a growing band of practitioners and
adherents. Indeed there is now a whole train of theory and practice that is called "Software
adherents. Indeed there is now a whole train of theory and practice that is called "Software
Metrics." But, I suggest, still the train moves too slowly. This book is an attempt to speed the
train up by helping you to add momentum. Having said that, things have progressed
enormously from that dim and distant past!
Today, it seems you cannot attend any software engineering conference or seminar without
coming across at least one speaker who features the subject of Software Metrics. We can find
active user groups specializing in particular applications within the general domain of
Software Metrics; there are research projects funded by the European Economic Community
involving some of the largest industrial organizations in the world; and, there are specialist
international conferences and workshops that attract large audiences on a regular basis.
However, despite the huge increase of interest in the use of Software Metrics within industry
(and I include all types of commercial and engineering applications in the term "industry"),
much of what we see is to do with the definition of particular techniques. One of the problems
facing the IT industry today is the application of these techniques in a business environment.
Little has been written about solving the very practical problems faced by organizations who
wish to introduce the use of Software Metrics, and another aim of this book is to go some
way towards correcting that.
The material in the book is based on many years of experience in implementing Software
Metrics initiatives, four of them as a direct employee in two large organizations. This has
been enhanced by experiences gained from acting in a consultancy role to a number of other
organizations engaged in implementing similar programs over many more years. The
suggestions and models presented in the text are the result of having to find pragmatic
solutions to very real, business related problems.
I do not claim that this book is the last word that will ever need to be said about the topic of
implementing Software Metrics initiatives; it is not. What it does contain are a set of
approaches and techniques, presented as a coherent whole, that have worked in those real
business situations together with discussions about specific aspects of Software Metrics
based on those same experiences.
Nor can I claim that all of the components that make up this book are my own. I have been
privileged to meet some very talented people in many organizations across the world who
have been willing to share ideas and concepts freely so that everyone benefits. For this I
thank them and I have provided references to their work, whenever I could, throughout the
body of this book.
This has not just been a sharing of successes, but also of failures and frustrations. I am able
to contribute very easily when it comes to failures! I believe that one of the things this work
has to recommend it is that it is based, in part, on learning from those failures and the hope
that the reader can avoid the mistakes that I and others have made in the past.
Interestingly, I have found that there are great similarities in the mistakes we have made and
also in the successes we have had. It is this that makes me feel that a book like this, which
attempts to illustrate a generic approach to the implementation of Software Metrics
programs, can work.
Turning from the background to the book itself, I hope that you find it to be readable. To help
in this, the material has been separated into three sections.
The first defines exactly what I mean by the term "Software Metrics" and introduces the
reader to the domain of Software Metrics by discussing the need for a measurement-based
approach to the management of software engineering. This first section then, for reasons
which will become obvious, looks at a particular measurement technique, Function Point
Analysis, before discussing specific areas of application for Software Metrics.
The second section is really the core of the book. This section describes an approach to the
development and implementation of Software Metrics initiatives. Essentially, the approach
centers around a model that breaks the work into a number of stages. This division of labor
into phases is, of course, nothing more than the way in which most successful projects are
handled; it is what makes up those stages that I hope will be found beneficial.
The third section is a collection of chapters that belong in this book but do not sit naturally in
either of the other two sections. Here we visit the topics that seem to be generating
discussion today, and we will also look at some topics that may be key issues in the near
future. Appendices and references are also provided.
- - - - - - -
ABOUT THE AUTHOR
PAUL GOODMAN has more than nineteen years' experience in the industry and particular
expertise in the support of Software Measurement and Software Process Improvement (SPI)
programs for clients.
He first became involved with Software Metrics while working at a major UK Government
department. The measurement program that was initiated by Paul and his colleagues is still
running today and that department is recognized as one of the leading UK institutions in the
field of Software Metrics.
Paul firmly believes that using data to enable better management is vital within IT. If you have
data you gain deeper understanding, and with understanding comes a greater chance of
improving the situation; that is, of solving the problems. On the other hand, if you don't have
data you don't have facts; all you have is opinion.
The other main area within Software Metrics that Paul focuses on is implementation. Still
today, too many measurement initiatives within IT organizations fail the implementation
hurdle. There are many reasons for these failures, but Paul believes that some, indeed many,
of them are avoidable.
Since leaving the Civil Service, Paul worked to implement a Software Metrics program within
a major telecommunications company in the UK and the United States. Since then, Paul has
worked as a consultant supporting many clients from all sectors of the IT industry. Today he
works out of the UK office of Meta Group Incorporated, an international IT consultancy.
Paul is a past Chairman of the UK Software Metrics Association (UKSMA, previously the UK
Function Point User Group), and was a founder member of the ISO WG12, the international
working party for Functional Sizing Metrics. Paul was also a member of the "Extended"
international IDEAL Enhancement Project team which looked at improving the IDEAL
Software Process Improvement Implementation Model developed by the Software
Engineering Institute of Carnegie Mellon University.
He has served on a number of international metrics committees and was also a founder
member of the European Software Process Improvement (ESPI) Foundation.
Paul is a regular presenter at international conferences.
- - - - - - -
August, 2004, 264 pages. Order #DR735
- - - - - - -
Rothstein Associates Inc.
4 Arapaho Rd.
Brookfield, CT 06804-3104
Telephone: 203.740.7444; 888.768.4783