Concepts in Measurement

Measurement is the process of quantifying project attributes—size, effort, quality, progress, and risk—to make objective decisions. It answers: “How do we know if we’re on track, improving, or in trouble?”

Without measurement, you manage by gut feeling. With measurement, you manage by evidence.

1. The Measurement Hierarchy: Metrics, Measures, and Indicators

It is important to distinguish between these three commonly confused terms:

  • Measure: A direct quantification of an attribute. (e.g., “The code has 5,000 lines” or “We found 12 bugs”).
  • Metric: A calculated or relative measurement that provides context. (e.g., “Defect Density” which is bugs per 1,000 lines of code).
  • Indicator: A metric or combination of metrics that provides insight into the project’s health, allowing a manager to make decisions.
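The relationship between the three terms can be sketched in a few lines of Python, using the numbers from the examples above (the threshold is an assumed organizational baseline, not a standard value):

```python
# Measures: direct counts taken from the project.
lines_of_code = 5_000
defects_found = 12

# Metric: defect density, expressed per 1,000 lines of code (KLOC).
defect_density = defects_found / (lines_of_code / 1_000)
print(f"Defect density: {defect_density:.1f} defects/KLOC")  # 2.4 defects/KLOC

# Indicator: the metric compared against a baseline to support a decision.
THRESHOLD = 5.0  # hypothetical organizational baseline
status = "within norms" if defect_density <= THRESHOLD else "needs review"
print(f"Quality indicator: {status}")
```

The raw counts alone tell you little; the metric adds context, and the indicator turns the metric into a decision aid.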

2. Types of Software Metrics

Measurement generally focuses on three distinct areas of the software lifecycle:

A. Product Metrics

These describe the characteristics of the software itself, regardless of how it was built.

  • Size: Measured in Lines of Code (LOC) or Function Points (FP).
  • Complexity: Often measured by Cyclomatic Complexity, which counts the number of linearly independent paths through the source code.
  • Quality: Measured by the number of defects found or the “Mean Time to Failure” (MTTF).
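Cyclomatic complexity has a standard formula, V(G) = E − N + 2P, where E is the number of edges and N the number of nodes in the control-flow graph, and P the number of connected components. A minimal sketch (the graph counts below are illustrative):

```python
def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """McCabe's cyclomatic complexity: V(G) = E - N + 2P."""
    return edges - nodes + 2 * components

# A control-flow graph with 9 edges and 7 nodes, e.g. a function
# containing two if/else branches and one loop.
print(cyclomatic_complexity(edges=9, nodes=7))  # 4
```

A value of 4 means there are four linearly independent paths, and therefore at least four test cases are needed for full branch coverage.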

B. Process Metrics

These measure the efficiency and effectiveness of the development process.

  • Cycle Time: The time it takes for a feature to go from “In Progress” to “Done.”
  • Velocity: In Agile, the amount of work (usually in story points) a team completes in a single sprint.
  • Defect Removal Efficiency (DRE): The percentage of total defects that were found and removed before release (pre-release defects divided by all defects, pre- and post-release).
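DRE is simple enough to compute directly. A short sketch, using hypothetical defect counts:

```python
def defect_removal_efficiency(pre_release: int, post_release: int) -> float:
    """DRE = defects removed before release / total defects, as a percentage."""
    total = pre_release + post_release
    return 100.0 * pre_release / total if total else 100.0

# 95 bugs caught in testing, 5 reported by customers after release.
print(f"DRE: {defect_removal_efficiency(95, 5):.1f}%")  # 95.0%
```

A DRE approaching 100% suggests the team's reviews and testing catch most defects before customers ever see them.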

C. Project Metrics

These track the overall health and “vitals” of the business undertaking.

  • Cost Variance: The difference between the value of the work actually completed and what it cost to complete; a negative variance means the project is over budget.
  • Schedule Variance: The difference between the work completed to date and the work planned to date; a negative variance means the project is behind schedule.
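One common formalization of these two variances comes from earned value management, where CV = EV − AC and SV = EV − PV. A sketch with hypothetical figures:

```python
# Earned value management terms (all figures hypothetical):
planned_value = 100_000.0  # PV: budgeted cost of work scheduled to date
earned_value = 80_000.0    # EV: budgeted cost of work actually performed
actual_cost = 90_000.0     # AC: what the performed work actually cost

cost_variance = earned_value - actual_cost        # negative -> over budget
schedule_variance = earned_value - planned_value  # negative -> behind schedule

print(f"CV: {cost_variance:,.0f}")  # CV: -10,000
print(f"SV: {schedule_variance:,.0f}")  # SV: -20,000
```

Here the project has delivered $80k worth of work for $90k spent (over budget) while $100k worth was planned (behind schedule).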

3. Key Measurement Frameworks

Goal-Question-Metric (GQM) Paradigm

GQM is a top-down approach that ensures you aren’t just measuring things for the sake of it.

  1. Goal: What do we want to achieve? (e.g., “Improve software reliability”).
  2. Question: What do we need to know to see if we met the goal? (e.g., “How many bugs are being reported by users?”).
  3. Metric: What specific data will answer the question? (e.g., “Customer-reported defects per month”).
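The top-down chain can be represented as a simple data structure, which makes the traceability explicit: every metric answers a question, and every question serves the goal. A minimal sketch using the reliability example above:

```python
# A minimal GQM breakdown: goal -> questions -> metrics.
gqm = {
    "goal": "Improve software reliability",
    "questions": {
        "How many bugs are being reported by users?": [
            "Customer-reported defects per month",
        ],
    },
}

# Walk the hierarchy top-down; any metric that cannot be printed this way
# has no question (and therefore no goal) justifying its collection.
for question, metrics in gqm["questions"].items():
    for metric in metrics:
        print(f"{gqm['goal']} -> {question} -> {metric}")
```

The point of the structure is the discipline: if a proposed metric cannot be attached to a question, it is being measured "for the sake of it."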

Function Point Analysis (FPA)

Since “Lines of Code” can vary wildly between languages (like C++ vs. Python), FPA measures the functionality provided to the user. It looks at:

  • External Inputs (forms, screens).
  • External Outputs (reports, graphs).
  • External Inquiries (online queries).
  • Internal Logical Files (database tables).
  • External Interface Files (files shared with other systems).
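Each of the five components is counted and multiplied by a complexity weight, then summed into an unadjusted function point (UFP) total. The sketch below uses the widely published IFPUG average-complexity weights; the component counts themselves are hypothetical:

```python
# IFPUG average-complexity weights for the five FPA component types.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

# Hypothetical counts for a small system.
counts = {"EI": 10, "EO": 6, "EQ": 4, "ILF": 5, "EIF": 2}

ufp = sum(counts[kind] * weight for kind, weight in WEIGHTS.items())
print(f"Unadjusted function points: {ufp}")  # 150
```

In full FPA, each component is individually rated simple, average, or complex (with different weights per rating), and the UFP total is then scaled by a value adjustment factor; the sketch above shows only the averaged core calculation.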

4. Characteristics of Good Metrics (SMART)

To be effective, a software measurement should be:

  • Specific: Targeted to a particular area.
  • Measurable: Quantifiable, not subjective.
  • Actionable: If the metric changes, you should know what action to take.
  • Relevant: Directly tied to the project’s success.
  • Timely: Available when needed to make decisions.

5. Common Pitfalls in Measurement

  • Measuring the Wrong Things: Focusing on “Lines of Code” can encourage developers to write bloated, inefficient code.
  • Using Metrics as a Weapon: Using productivity metrics to punish team members often leads to “gaming the system” (e.g., opening many easy bugs to look busy).
  • Analysis Paralysis: Collecting so much data that the team spends more time reporting than developing.