From Technical Debt to Software Development Analytics
by Israel Gat
When and how is it appropriate to apply technical debt as an analytical technique?
This article proposes moving on from elaborating on Ward Cunningham's classical Technical Debt metaphor to viewing technical debt as "analytics on code." From this perspective, technical debt is an instantiation of various analytical techniques that can be applied to software in general and to software development in particular. Based on this broader perspective, the article recommends judicious use of technical debt analytics, fitted to the characteristics of the project to which they are applied.
What Exactly Should Be Done About Technical Debt?
Since the term technical debt was coined by Ward Cunningham at the OOPSLA 1992 conference, it has established itself as one of those rare terms that facilitate intuitive understanding of a wicked problem: how "good" or "bad" is the code? By its very nature, debt is a concept that everyone is familiar with. Whether a project team member has coded furiously for twenty years or has never seen a line of code, a phrase such as "we accrued one million dollars in technical debt in the course of implementing application XYZ" is both meaningful and actionable.
It is meaningful because it is expressed in terms of a universal entity—dollars—without requiring an understanding of the subtleties of the code under examination. It is actionable because it enables establishing a clear goal, such as reducing the technical debt in application XYZ by 50 percent, and, thus, easily tracking progress towards attaining this goal. (You can run a technical debt analysis at any point during the application lifecycle.)
Being both meaningful and actionable, the term became an effective "bridge" between technical and nontechnical folks. For example, I carry out a technical debt assessment and valuation as a standard operating procedure in every technical due-diligence engagement I do for venture capitalists.
Underneath the intuitive appeal and easy assimilability of the term technical debt hides a fairly fundamental question of meaning. You could easily hypothesize that one million dollars in technical debt could be "paid back" by a team of three developers and two testers who focus for a year with singularity of purpose on technical debt reduction. ("Paid back" in this context means that the defects identified through the technical debt analysis have been rectified.) However, what technical debt actually means in terms of code craftsmanship and specific actions to be taken is not at all clear. (See, for example, the spectrum of opinions on the subject in the IEEE Software issue on technical debt.)
For example, suppose you decided to pay back most of the one million dollars in technical debt, making application XYZ almost debt-free. Would the users of the application really benefit from a greatly reduced amount of technical debt? If so, in which specific ways?
Three derivative metaphors for capturing the subtle nuances of the original technical debt metaphor are discussed in the next section. Each of these three metaphors highlights an important aspect of the technical debt concept. However, if we take Ward's original definition of technical debt together with these three metaphors, we still do not have a crisp answer to key questions such as "But what really is meant by technical debt?!" and "What should one do about it?" (The standard answer—"reduce technical debt"—is directionally correct, but does not capture the economic aspect of so doing.) Given the lack of crisp answers, this article shows how regarding technical debt as "analytics on code" enables treating technical debt as a pillar of process analytics.
Three Derivative Metaphors
With the following three sentences Ward Cunningham, for most practical purposes, defined a proxy for software quality that is broadly used nowadays:
"Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite... Every minute spent on not-quite-right code counts as interest."
While Ward's metaphor is highly expressive, tying technical debt to software craftsmanship and behavior in actionable terms proved a hard nut to crack. Over the past two decades, three metaphors for the original metaphor have been proposed by researchers and practitioners in this area.
The Rusty Car Metaphor
Even prior to the official coining of the term technical debt, various researchers realized that there is more to software than shipping it and forgetting about it. Based on their experience in the 1960s with the IBM OS/360 system, in Program Evolution: Processes of Software Change (APIC Studies in Data Processing), Belady and Lehman characterized software system behavior through the concept of entropy:
"The entropy of a system increases with time unless specific work is executed to maintain or reduce it."
Likewise, in Estimating Software Costs: Bringing Realism to Estimating, Capers Jones described technical debt as a form of decay:
"All known compound objects decay and become more complex with the passage of time unless effort is exerted to keep them repaired... Software is no exception... Indeed, the economic value of lagging applications is questionable after about three to five years."
Metaphorically speaking, Belady, Lehman, and Jones view the deterioration of a software system over time as the rusting of the body of an automobile.
The Toxic Code Metaphor
In the course of consulting for various companies, I have been exposed to numerous applications that decayed over time to the point that the cost of keeping them running exceeded the value they were generating. Some of these engagements took place at the time that Angelo Mozilo, ex-Chairman of the Board and CEO of Countrywide Financial, testified that
"[The 100 percent loan-to-value subprime loan is] the most dangerous product in existence and there can be nothing more toxic..."
Mozilo's testimony led me to coin the term toxic code—toxic code is software whose technical debt-to-value ratio is greater than 100 percent—and to develop a technique for identifying this kind of code. To this very day, I am often exposed to toxic code in many of my engagements.
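As a minimal sketch of this classification rule (the function names and dollar figures below are illustrative, not taken from any particular assessment tool):

```python
def debt_to_value_ratio(technical_debt_usd: float, annual_value_usd: float) -> float:
    """Ratio of estimated technical debt to the value the application generates."""
    if annual_value_usd <= 0:
        raise ValueError("annual value must be positive")
    return technical_debt_usd / annual_value_usd

def is_toxic(technical_debt_usd: float, annual_value_usd: float) -> bool:
    """Toxic code: technical debt exceeds 100 percent of the value generated."""
    return debt_to_value_ratio(technical_debt_usd, annual_value_usd) > 1.0

# An application carrying $1M of debt while generating $800K of value is toxic.
print(is_toxic(1_000_000, 800_000))  # True
```

The hard part in practice is not the arithmetic but producing credible estimates for both the debt and the value; the sketch assumes those estimates already exist.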
The Water Leak Metaphor
Olivier Gaudin, the cofounder and CEO of SonarSource, explored in what ways the technical debt metric is connected to the software realities underneath. In a private e-mail exchange I had with Olivier, he wrote the following:
"You wake up in the morning and find water on the kitchen floor. You do not want to start cleaning the water before the leak gets fixed, as if you do so, you will find water again the next day, the next week, or the next month. So, before you start tackling existing technical debt, you should make sure added/updated code is under control. Then, once it is under control, you might want to look at existing debt to mitigate risk, increase productivity on and longevity of applications."
These three "metaphors for a metaphor" highlight different aspects of technical debt: Belady, Lehman, and Jones focus on the loss of program structure that inevitably manifests itself as an application ages. I am concerned with throwing good money after bad: beyond a certain point in time, it might not be worth it to try to refactor an application in order to reduce technical debt. Gaudin views technical debt as a symptom, not as a cause: you need to find and fix the underlying problems (for example, in terms of improving inadequate technical practices) prior to addressing the symptom.
Viewing Technical Debt as a Form of Analytics
Rather than trying to add layers of interpretation to Ward's original metaphor (as I did in the previous section), we can try to clarify the metaphor by asking this seemingly simplistic question: "How should you use the technical debt metaphor?" The answer, in my humble opinion, is to view and use technical debt as analytics on code.
If you accept the premise of technical debt as analytics on code, the full spectrum and power of analytics techniques can be harnessed to further develop the ways in which the metaphor is being used. Specifically, technical debt can be used descriptively, predictively, and prescriptively. Intuitively speaking, you can think of descriptive analytics as insight into the past; predictive analytics as foresight; and prescriptive analytics as actionable insight into the future.
Technical Debt as a Form of Descriptive Analytics
Technical debt nowadays is routinely used as a form of descriptive analytics. For example, a typical statement in a technical debt assessment report might read something like the following: "In this application the cyclomatic complexity per module is 12.3; the level of duplication in the code is 11.4 percent," and so on. Conceptually speaking, this form of using technical debt is not really different from the bloodwork performed on a patient during an annual physical exam: "Mr. Gat, your blood sugar level is (say) 5.2."
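To make the descriptive flavor concrete, here is a minimal sketch that approximates McCabe-style cyclomatic complexity per function using Python's standard ast module. The set of node types counted is a simplifying assumption on my part, not the full metric definition used by commercial tools:

```python
import ast

# Node types that introduce an extra path through the code; counting them is a
# common approximation of cyclomatic complexity (number of decisions + 1).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(func: ast.FunctionDef) -> int:
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(func))

def describe(source: str) -> dict:
    """Descriptive analytics: complexity per function in a source string."""
    tree = ast.parse(source)
    return {node.name: cyclomatic_complexity(node)
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)}

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(describe(sample))  # {'classify': 3}
```

Like the blood-test numbers, this output merely describes the code as it is; it says nothing yet about risk or remedy.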
Assume for a minute that my blood sugar is indeed found to be too high. My physician has readily available predictive analytics indicating the risks associated with a blood sugar higher than 5.0. Moreover, he has actionable insights for me in the form of prescriptive analytics. For example, the physician might recommend a low-carb diet.
Technical Debt as a Form of Predictive and Prescriptive Analytics
The origins of using technical debt as a form of predictive and prescriptive analytics can actually be traced all the way back to McCabe's 1976 landmark paper on the cyclomatic complexity metric. In his paper, McCabe went beyond defining the cyclomatic complexity metric to making the recommendation to split modules with cyclomatic complexity higher than 10 into smaller modules with lower cyclomatic complexity. (Based on experience accumulated between 1976 and 1996, McCabe suggested relaxing this number in certain circumstances to allow modules with cyclomatic complexity as high as 15. See NIST Special Publication 500-235, "Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric.")
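A prescriptive layer on top of such a metric can be sketched as follows. The advice strings and the treatment of the 10-to-15 range are my own illustrative reading of the thresholds, not a quotation of the NIST methodology:

```python
def refactoring_advice(complexities: dict, limit: int = 10,
                       relaxed_limit: int = 15) -> dict:
    """Prescriptive analytics: map each module's cyclomatic complexity to a
    recommendation, following McCabe's limit of 10 (relaxable to 15)."""
    advice = {}
    for module, cc in complexities.items():
        if cc > relaxed_limit:
            advice[module] = "split into smaller modules"
        elif cc > limit:
            advice[module] = "acceptable only with written justification"
        else:
            advice[module] = "no action needed"
    return advice

print(refactoring_advice({"parser": 18, "billing": 12, "auth": 4}))
```

The same pattern generalizes to any figure of merit for which a defensible threshold exists: measure, compare against the threshold, and emit an action.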
McCabe's cyclomatic complexity metric has been found to correlate positively with defect density. For example, in his 2013 PhD dissertation, Daniel Sturtevant analyzes eight consecutive releases of an application by a successful software firm, concluding that "files with high McCabe scores are expected to have 2.1 times as many bug fixes as files with low McCabe scores." Daniel and his thesis advisor, Alan MacCormack, are currently working on publishing two related papers on the subject.
What Kind of Project Are You Working On?
The example given in the previous section of using cyclomatic complexity to determine whether refactoring a module is needed (and to what extent it should be done) is typical of the many ways in which modern analytics can be used in driving the software development process. Essentially, analytics can be applied to any figure of merit that is of interest to a developer as well as to the developer's superiors, stakeholders, and customers. In this section, let's examine one important aspect of the richness enabled by this ability to apply analytics to any meaningful figure of merit in the software development process.
Recent research on software development analytics by Murray Cantor and me highlighted the imperative need to fit different kinds of analytics (and practices) to different kinds of projects. In a nutshell, we argue that one size does not fit all. Specifically, we characterize three kinds of projects, and corresponding kinds of analytics, as illustrated in Figure 1.
Figure 1: Three Kinds of Projects (©2015 Cutter Consortium)
At first glance, it would seem that technical debt techniques, as a form of analytics on code, would be applicable to all three buckets depicted in Figure 1. However, on closer examination, the practical value of applying technical debt techniques to the third bucket (new platform) is doubtful. Projects in this bucket might go through five, ten, or fifty Minimum Viable Product (MVP) versions before it is determined which one will go to market. There is little point in conducting technical debt analysis on an MVP that could easily be discarded in its entirety the next day. It is better to wait until a promising release candidate emerges from the ashes of the various MVPs before conducting a rigorous technical debt analysis. In other words, analyzing whether the software was done right is not meaningful prior to determining whether it is the right software (from the perspectives of the customers, the market, and the stakeholders).
If you accept this premise, technical debt analysis can be viewed as one technique in the broader spectrum of software methods, practices, and tools. As shown in Figure 2, technical debt analysis is most meaningful when applied in the first bucket and somewhat meaningful in the second bucket. As pointed out above, it is not really worth your while to conduct technical debt analysis too early, let alone too often, for a project in the third bucket.
Figure 2: Landscape of Software Techniques (©2015 Cutter Consortium)
Viewing technical debt in isolation is not really appropriate in an era in which analytics are readily available to provide a quantified grasp on just about any aspect of a product under development. Rather, technical debt analysis should be viewed as one analytical technique out of many, and it should be applied selectively in accord with the nature of the project you are working on. In particular, technical debt analysis is not a natural fit in the early stages of projects embracing Lean Startup/MVP principles. In such projects, it is better to wait to perform the technical debt analysis until a credible release candidate emerges.
One of the fascinating aspects of applying modern analytics to software is the potential to establish causality in software development. The general-purpose causality techniques developed by Fenton and Neil are applicable for gaining insights into the deeper nature of software development. In particular, software development models based on Bayesian Networks can be effective in estimating, planning, tracking, and governing software projects. (See the workshop by Cantor and me at the recent Cutter Summit.) The application of tools based on Bayesian Networks as a standard operating procedure in the software development process is a topic that will be discussed in detail in a future article.
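As a minimal illustration of the kind of Bayesian reasoning involved (the network structure and all probabilities below are invented for illustration, not drawn from Fenton and Neil), consider a two-node model in which code quality influences observed defect density:

```python
# Hypothetical two-node Bayesian network: Quality -> DefectDensity.
# All probabilities are illustrative assumptions, not empirical values.
P_POOR_QUALITY = 0.3                          # prior P(quality = poor)
P_HIGH_DEFECTS = {"poor": 0.8, "good": 0.2}   # P(high defect density | quality)

def posterior_poor_given_high_defects() -> float:
    """Bayes' rule: P(quality = poor | defect density = high)."""
    p_high = (P_HIGH_DEFECTS["poor"] * P_POOR_QUALITY
              + P_HIGH_DEFECTS["good"] * (1 - P_POOR_QUALITY))
    return P_HIGH_DEFECTS["poor"] * P_POOR_QUALITY / p_high

print(round(posterior_poor_given_high_defects(), 3))  # 0.632
```

Observing high defect density roughly doubles the belief that the code quality is poor (from 0.3 to about 0.63); full-scale models of the kind Fenton and Neil describe chain many such nodes together.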
- Fenton, Norman and Martin Neil. Risk Assessment and Decision Analysis with Bayesian Networks. CRC Press, 2012.
- Sterling, Chris. Managing Software Debt: Building for Inevitable Change. Addison-Wesley Professional, 2010.
About the Author
Dr. Israel Gat is a Cutter Consortium Fellow and Director of the Agile Product & Project Management practice, and a Fellow of the Lean Systems Society. He is recognized as the architect of the Agile transformation at BMC Software where, under his leadership, Scrum users increased from zero to 1,000, resulting in nearly three times faster time to market than industry average and a 20 percent to 50 percent improvement in team productivity. Among other accolades for leading this transition, Dr. Gat was presented with an Innovator of the Year Award from Application Development Trends in 2006.
Dr. Gat's executive career spans top technology companies, including IBM, Microsoft, Digital, and EMC. He has led the development of products such as BMC Performance Manager and Microsoft Operations Manager, enabling the two companies to move toward next-generation system management technology. Dr. Gat is also well versed in growing smaller companies and is currently serving on the Trident Capital SaaS Advisory Board.