
What is Technical Debt?

Practical relevance

Types of technical debt:

• Code debt (untidy, redundant, or complex code).
• Architectural debt (inappropriate or outdated structures, tight coupling).
• Test debt (missing or unstable tests).
• Build/deploy debt (manual or fragile pipelines).
• Documentation debt (missing or outdated documentation).

 

Economic model:

• Principal (main debt): the effort required to implement the better solution now.
• Interest: recurring additional costs, e.g., longer development times and more defects.
• Interest rate: rises in areas of the code that change frequently ("hotspots").
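The economic model above can be made concrete in a few lines. This is a minimal sketch, not from the source; the function name and all figures (person-days, sprint counts) are illustrative assumptions:

```python
# Hedged sketch: comparing "repay now" vs. "keep paying interest" for one
# debt item. All figures are illustrative assumptions, not real project data.

def total_cost(principal: float, interest_per_sprint: float,
               sprints: int, hotspot_factor: float = 1.0) -> dict:
    """Cost of repaying now vs. carrying the debt for `sprints` sprints.

    interest_per_sprint: recurring extra effort (person-days) caused by the debt.
    hotspot_factor: multiplier > 1 for frequently changed code (the "interest rate").
    """
    effective_interest = interest_per_sprint * hotspot_factor
    return {
        "repay_now": principal,
        "carry_debt": effective_interest * sprints,
        "break_even_sprints": principal / effective_interest,
    }

costs = total_cost(principal=10, interest_per_sprint=1.5,
                   sprints=12, hotspot_factor=2.0)
# In a hotspot, carrying the debt (36 person-days over 12 sprints) quickly
# exceeds the one-off repayment (10 person-days).
```

The point of the sketch: the same principal can be cheap or expensive to carry depending on the hotspot factor, which is why interest, not principal, should drive prioritization.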

 

Intent & quality of the decision (Fowler quadrant):

• Deliberate vs. inadvertent × prudent vs. reckless → four types.
• "Deliberate-prudent" debt can be useful (a conscious market launch despite compromises).
• "Inadvertent-reckless" debt almost always leads to runaway interest costs.

 

Typical misunderstandings

❌ "Technical debt = bugs" – Bugs are defects; debt affects the maintainability and evolvability of the system.

❌ "Tools provide the whole truth" – SonarQube/SQALE primarily estimate maintainability debt (code smells), but often leave architecture, test, or build debt invisible.

❌ "Rewriting everything solves it" – Big-bang rewrites often merely shift the debt; incremental approaches (e.g., the strangler pattern) are less risky.

 

Relevance for organizations

Business impact:

• A large-scale study of 39 commercial codebases found that low-quality code contained up to 15 times more defects and that resolving issues took, on average, 124% longer.
• The authors emphasize correlation, not causation: the results apply to this sample, not universally.
• Other case studies report mixed or even no effects on throughput times, depending on context and measurement method.

Conclusion: Technical quality is business-relevant, but the effect is context-dependent – organizations must carry out their own measurements.

 

Practical example

A financial services provider identified a hotspot service (frequently changed, high complexity). The consequences: long throughput times and defects in releases. Hotspot analysis and an investigation of temporal coupling revealed hidden dependencies. The team extracted a smaller component and added tests and CI checks. The result: shorter cycle times and a lower defect rate in the affected value stream.
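A minimal version of the hotspot analysis mentioned above can be sketched as change frequency × complexity, in the spirit of Tornhill's approach. This is an assumption-laden illustration, not the provider's actual method; lines of code stand in as a crude complexity proxy:

```python
# Hedged sketch: hotspot score = change frequency x complexity proxy.
# Input shapes and file names are hypothetical.

from collections import Counter

def hotspot_scores(commits, loc):
    """commits: list of changed-file lists per commit; loc: lines of code per file."""
    change_freq = Counter(f for files in commits for f in files)
    scores = {f: change_freq[f] * loc.get(f, 0) for f in change_freq}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

commits = [["billing.py"], ["billing.py", "api.py"], ["billing.py"], ["util.py"]]
loc = {"billing.py": 800, "api.py": 120, "util.py": 50}
top = hotspot_scores(commits, loc)
# billing.py: 3 changes x 800 LOC = 2400 -> the hotspot to investigate first.
```

In practice the change frequency would come from the version-control log and the complexity proxy from a static-analysis tool rather than raw line counts.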

 

Strategies: Avoid, manage, reduce

A. Make visible & evaluate

• Debt register (items with principal/interest), heat maps for hotspots, architecture reviews.
• Use tools (SonarQube, SQALE) – but only as a partial indicator.
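A debt register can be as lightweight as a typed record per item. This is a sketch under assumptions; the field names are not a standard schema:

```python
# Hedged sketch: a minimal debt-register entry. Field names and numbers
# are illustrative assumptions, not a standard format.

from dataclasses import dataclass

@dataclass
class DebtItem:
    name: str
    principal_days: float             # effort to fix properly now
    interest_days_per_sprint: float   # recurring extra effort while unfixed
    hotspot_score: float              # change frequency x complexity
    deliberate: bool                  # from Fowler's quadrant

register = [
    DebtItem("billing hotspot", 10, 1.5, 2400, deliberate=False),
    DebtItem("manual deploy step", 3, 0.5, 0, deliberate=True),
]

# Surface items whose recurring interest is highest relative to their principal.
ranked = sorted(register,
                key=lambda d: d.interest_days_per_sprint / d.principal_days,
                reverse=True)
```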

 

B. Prioritize according to economics

• Prioritize debt items by cost of delay (CoD)/WSJF and hotspot score.
• Assign repayment plans to deliberately incurred debts.
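WSJF (Weighted Shortest Job First) divides cost of delay by job size, so small items with high recurring cost rise to the top. A sketch with illustrative numbers (the item names and figures are assumptions):

```python
# Hedged sketch: WSJF applied to debt items. All numbers are illustrative.

def wsjf(cost_of_delay: float, job_size: float) -> float:
    """WSJF = cost of delay / job size; higher means repay sooner."""
    return cost_of_delay / job_size

backlog = {
    "extract billing component": wsjf(cost_of_delay=8, job_size=5),    # 1.6
    "automate deploy step":      wsjf(cost_of_delay=3, job_size=1),    # 3.0
    "rewrite legacy module":     wsjf(cost_of_delay=13, job_size=20),  # 0.65
}
# The small, high-interest item ("automate deploy step") ranks first.
```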

 

C. Repayment in flow

• Continuous refactoring during ongoing operations.
• Explicitly plan architecture enablers.
• Use the strangler pattern for legacy systems.
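The strangler pattern routes migrated use cases to a new component while the rest continues to hit the legacy system, so debt is repaid incrementally instead of via a big-bang rewrite. A minimal sketch; the class names are hypothetical:

```python
# Hedged sketch of the strangler pattern as a routing facade.
# LegacyBilling / NewBilling are hypothetical stand-ins.

class LegacyBilling:
    def invoice(self, order_id: str) -> str:
        return f"legacy invoice for {order_id}"

class NewBilling:
    def invoice(self, order_id: str) -> str:
        return f"new invoice for {order_id}"

class BillingFacade:
    """Routes migrated use cases to the new component; the rest stays legacy."""
    def __init__(self, migrated: set):
        self.legacy = LegacyBilling()
        self.new = NewBilling()
        self.migrated = migrated

    def invoice(self, order_id: str, use_case: str) -> str:
        target = self.new if use_case in self.migrated else self.legacy
        return target.invoice(order_id)

facade = BillingFacade(migrated={"subscription"})
# "subscription" invoices take the new path; everything else still goes legacy.
```

Over time the `migrated` set grows until the legacy system can be retired.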

 

D. Quality routines & definition of done

• TDD, automated testing, CI/CD, code reviews.
• Explicitly extend the definition of done with refactoring/testing.

 

E. Measure impact

• Combine maintainability metrics (e.g., Sonar) with flow metrics (lead/cycle time, defect density, change failure rate).
• Important: ratings are indicators, not absolute judgments.
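Two of the flow metrics named above are straightforward to compute. The release data below is an assumed structure for illustration, not a standard API:

```python
# Hedged sketch: average cycle time and change failure rate over
# illustrative release data (assumed structure).

from statistics import mean

releases = [
    {"cycle_time_days": 6, "failed": False},
    {"cycle_time_days": 9, "failed": True},
    {"cycle_time_days": 4, "failed": False},
    {"cycle_time_days": 5, "failed": False},
]

avg_cycle_time = mean(r["cycle_time_days"] for r in releases)             # 6.0
change_failure_rate = sum(r["failed"] for r in releases) / len(releases)  # 0.25

# Track these alongside maintainability ratings: a falling cycle time after
# repaying a hotspot is outcome evidence; a better Sonar rating alone is not.
```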

 

How good coaches work

1. Create a baseline: debt register + hotspot map.
2. Prioritize economically: WSJF × hotspot score → top items.
3. Repay in cadence: fixed refactoring slots or enablers in each PI/sprint.
4. Manage architecture debt separately: cuts, dependencies, observability.
5. Measure impact: focus on outcomes (e.g., shorter cycle time, fewer defects), not just "hours invested."
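The economic prioritization step (WSJF × hotspot score) can be sketched as a combined ranking. The weighting scheme and all numbers are illustrative assumptions, not a prescribed formula:

```python
# Hedged sketch: ranking debt items by WSJF weighted with a normalized
# hotspot score. Weights and numbers are illustrative assumptions.

items = [
    {"name": "billing hotspot",   "wsjf": 1.6,  "hotspot": 2400},
    {"name": "deploy automation", "wsjf": 3.0,  "hotspot": 0},
    {"name": "legacy module",     "wsjf": 0.65, "hotspot": 900},
]

max_hotspot = max(i["hotspot"] for i in items) or 1
for i in items:
    # Add 1 so items outside any hotspot are not zeroed out entirely.
    i["priority"] = i["wsjf"] * (1 + i["hotspot"] / max_hotspot)

top = sorted(items, key=lambda i: i["priority"], reverse=True)
# The billing hotspot overtakes the higher-WSJF deploy item because its
# hotspot score signals a high interest rate.
```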

 

CALADE perspective

We see technical debt as a strategic risk in the portfolio. Our approach:

• Create visibility (debt register, hotspot analysis).
• Prioritize economically (CoD, WSJF).
• Secure binding capacity (refactoring/enabler slots).
• Repay and measure incrementally.

 

Where organizational policies increase debt (e.g., rigid approvals, lack of platform capabilities), we combine our work with Organizational Debt, Living Strategy® (prioritization, strategy sprints), and Living Transformation® (3-month cadence, Capa/Prio events). This results in holistic debt reduction that combines technology and organization.

 

Related terms & sources

• Ward Cunningham: the debt metaphor (1992).
• Martin Fowler: Technical Debt Quadrant.
• SEI/CMU: Managing Technical Debt (Kruchten, Nord, Ozkaya).
• Systematic reviews (Li, Avgeriou, et al., 2015).
• Hotspots & temporal coupling (Tornhill/CodeScene).
• SonarQube/SQALE: maintainability/debt ratio.
• Empirical business-impact studies: correlation between quality, defects & cycle time; consider context.

 
