Legacy Modernization Analysis Methodology
© Copyright 2012, Don Estes
Once you have established your business goals for modernizing your legacy
applications, there are 4 practical technical strategies to reach those
goals:
- Replace with a commercial off the shelf (COTS) package
- Replace with a redesigned and rewritten application
- Use automated tools to extract your business rules, and re-architect the rules
into a modern implementation framework (re-architecting)
- Use automated tools to modernize your existing code base (renovation)
Choosing the best strategy (or combination of strategies) is non-trivial, and
involves understanding the business case, schedule goals, resources
available, residual value in the legacy assets, technical details of the
application implementations and deployments, and appetite for assuming
IT risk. Then, if some or all of the project is to be outsourced,
or if a COTS package is under consideration, we add vendor risk to the analysis.
We argue that choosing the best strategy for your legacy
modernization initiative is primarily a business question, not a
technical question, except insofar as technology directly
impacts the business. Just like any other investment decision,
legacy modernization issues can be categorized under assets,
liabilities, cost, risk and business value.
There is no single path from where you are to where you want to go, if
indeed you have identified your destination platform. Instead,
there are often multiple alternative paths, and no single
alternative may constitute the obvious choice. Each path
will have advantages and disadvantages, and each must be weighed
appropriately in the context of your unique organization. There
is no one-size-fits-all answer.
The most typical destination platforms are:
- Java in a J2EE environment, frequently with WebSphere as
an application server
- C# in a .NET environment
Both will usually include a relational database management
system (RDBMS) as the primary data store, but in some cases an
object database may make more sense.
However, these are hardly the only destinations. Open source
alternatives to proprietary environments are becoming robust
enough to constitute significant competition for proprietary
vendors, though the majority of sites stick with proprietary
products for mission critical applications. Rules engines, or
business rule management systems (BRMS) as they prefer to be
known, are gaining traction in application architectures.
Business Process Management (BPM) platforms can have an adjunct
or a primary role as well.
Large mainframe sites are often not ready to abandon
the security of the mainframe for newer architectures, and are
running mixed CICS/COBOL and Java/WebSphere environments, either
on z/OS with zAAP processors for Java or on z/Linux using
inexpensive IFL engines. The substantial economic advantages of
using these specialized processors, with CPU & software costs a
tiny fraction of general purpose CPU & software costs, are not
always fully understood by staff who focus on technical issues.
Our legacy modernization consulting services are designed to
help you frame your decision on future architectures, and
provide you with clear, business based parameters for deciding
on the best path to transition from where you are now to your
chosen destination. Many times, vendor products and vendor
services can ease that transition, but in other cases a site is
perfectly capable of doing the job internally.
This essay on methodology focuses on how we derive the
decision parameters for the transition. There is usually not a
big decision regarding in-house versus outside vendor services.
Most (but hardly all) sites that are not under a deadline would
prefer to do the job themselves, perhaps with some judicious
assistance or tooling where appropriate. Others would prefer to
outsource the learning curve and the project risk.
When we discuss costs of each alternative transition
strategy, we utilize the preferred approach, whether it be
in-house, outsourced, or mixed. On request, we can cost both
approaches to the same transition strategy, given fully burdened
cost information and productivity information on in-house
resources. Although the methodology discussion that follows assumes
that all alternatives will be evaluated, if any path can be
eliminated a priori (e.g., there are no candidate COTS
packages), then that step in the methodology is bypassed.
Step 1 - Determining Residual Value in the Legacy Assets
A difficult task for many sites is to analyze their legacy
applications as financial assets and liabilities. As IT people,
we tend to think of our computer systems and the software
running on them as somehow different from buildings, equipment,
intellectual property, and other investments. Yet senior
management is obligated to manage the organization's assets for
the greatest good, and this requires objective analysis.
We meet many people who start off with a preferred strategy.
Some people want to just throw out the old software and start
fresh. And, indeed, sometimes this is the best strategy, but we
argue that it should not be the default strategy. As some
references show, for significant applications the combined rate of
outright failures and significant cost/delivery overruns is simply too
large, on the order of 70%, to blithely assume that all will be
well with a de novo strategy. If this is the best
way to go, it should be pursued in a way that minimizes the
considerable risks involved.
Other people have a longstanding emotional investment in the software
that they have spent a career building and expanding. These
people may be too close to the software to analyze its value
dispassionately. Sometimes updating and expanding the
existing software (renovation) proves to be the best strategy,
but again we don't think it should be the default strategy.
Step 1 of our analysis starts with separate structural and
functional analyses of the applications under consideration for
modernization. We separate them because an application design
may be trapped in old technology and yet fully serve the
business function. We recently saw an application written in the
1960s, consisting of over 10 million lines of assembler code,
which has a perfectly fine design that is serving the business
well. Excessive costs and inflexibility may not be a
characteristic of the application design per se, but of
its implementation. Sometimes fixing the implementation can
liberate a great deal of value. On the other hand, sometimes
the problem is the opposite: inherent functional design problems
are implemented in a reasonably modern and low cost manner.
The structural analysis focuses on how each application is
built and on how it does its job, but not so much on what it does.
We look at the language(s), the database(s), the hardware
platform(s), the operating system(s), the user interface(s), and
deeper structural issues such as data model normalization. Where
warranted, we compute or estimate formal complexity metrics to
guide the analysis. We group these results under
infrastructure issues. These issues can significantly affect
the cost and risk of each of the legacy modernization strategies.
The functional analysis reverses this point of view. We focus on what
each application does, and not at all on how
it does it. When we focus on functionality issues, we
completely ignore the language, database, platform and other
infrastructure issues. We also ignore functionality that is
related to the infrastructure. For example, upgrading a
legacy application from indexed files to a relational database
would automatically allow the use of an ad hoc reporting system.
Ad hoc reporting through a tool would not be counted as part of
the legacy system functionality, because it derives from the
infrastructure. We only consider functionality that
results from business logic expressed in the application
code.
If we were to construct a Venn
diagram of the functionality of the old, "As-Is" system and show
its intersection with the functionality of the desired new
"To-Be" system, we would see 4 distinct areas. The most
obvious is usually the Obsolete area, functionality that is no
longer needed at all. Perhaps the least obvious is the
Preserved area, where the functionality is just fine, however
uncomfortable some technologists may be with the infrastructure
and style of the implementation. The third area, Enhanced,
contains the existing functionality that requires functional
enhancements. And, finally, we have the fourth area which
consists of Wholly New functionality.
When we consider Wholly New
functionality, we have to carefully consider the possibility of
using a Business Process Management (BPM) platform to integrate
manual and automated work flow processes. If so, we need to
exclude from this analysis the part of Wholly New that would be
implemented with the BPM platform on top of the legacy
application, and focus on Wholly New functionality within the
legacy application itself.
Now we come to
the most important question of the functionality analysis:
what percentage of the business logic in the legacy programs is
to be assigned to each bucket:
Obsolete, To Be Preserved, and To Be Enhanced. We generally assign whole
program sources to each bucket and count the results, though
estimates by subject matter experts (SMEs) generally
suffice for a qualitative assessment, rounded to the nearest
10%. As an example, let's say that these three percentages were
10%, 50% and 40%, respectively, totaling 100%.
Once this is complete, we estimate
the quantity of code that would be required to implement the
Wholly New functionality, and express that as a percentage of
the existing code base. To extend our example, let's say
that this would be 30%, for a total of 130%.
These percentages drive much
of the ensuing analysis, as we consider ratios of these
percentages. A key ratio is (Preserved + Enhanced):(Wholly
New). Using our example, this ratio would be (50% + 40%):30%, or 3:1.
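For readers who want to see the bookkeeping laid out, here is a minimal sketch of that calculation in Python. It is illustrative only: the program names and line counts are invented, and in practice the bucket assignments come from SME judgment or whole-program classification.

    # Hypothetical sketch: tally functional buckets and compute the key ratio.
    # Program names, buckets, and line counts are invented for illustration.
    from collections import defaultdict

    programs = {
        # program: (bucket, lines of code) - each source is assigned whole to one bucket
        "AR0010": ("obsolete", 12_000),
        "AR0020": ("preserved", 30_000),
        "GL0100": ("preserved", 28_000),
        "GL0200": ("enhanced", 25_000),
        "IN0300": ("enhanced", 22_000),
    }

    totals = defaultdict(int)
    for bucket, loc in programs.values():
        totals[bucket] += loc

    existing = sum(totals.values())
    pct = {bucket: 100.0 * loc / existing for bucket, loc in totals.items()}

    wholly_new_pct = 30.0  # Wholly New, estimated as a percentage of the existing code base

    key_ratio = (pct["preserved"] + pct["enhanced"]) / wholly_new_pct
    print(f"Obsolete {pct['obsolete']:.0f}%, Preserved {pct['preserved']:.0f}%, "
          f"Enhanced {pct['enhanced']:.0f}%, Wholly New {wholly_new_pct:.0f}%")
    print(f"Key ratio (Preserved + Enhanced):(Wholly New) = {key_ratio:.1f}:1")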
A similar question is:
How well does the legacy
functionality support the business process?
To answer this question, we ask to
what extent has the underlying business process changed since
the system was designed and implemented, and how much of the
current business process is being supported by the system. This
might seem odd to some people who could reasonably expect the
answer to be 100% by definition, but we have seen systems where
fields in the database have been re-used for new purposes,
output reports imported into spreadsheets and re-analyzed
according to different business rules, and server based
databases containing overlapping data operating in parallel.
Clearly, these are cases where the business process has diverged
from that of the original design, so that the Preserved area
will tend to be relatively small while Enhanced will tend to
dominate. These will tend toward re-architecting if a
replacement strategy is not ultimately selected, as implementing
a renovation strategy will be complex.
More typical is the situation where the legacy system has
more or less kept up with the business process changes, but
there is a backlog of changes to the business process that
cannot be implemented until the software is modified to support
those changes. For these, renovation can be considered without
the negative bias we apply in the case where the business
process has diverged.
Why do we consider re-architecting and renovation at all?
Isn't it better to just redesign and rewrite in all cases? There
are those who say that it will cost you 80% of the cost of a
rewrite to change more than 20% of an existing system, so why
not just bite the bullet and rewrite it? This is a
seductive argument, and for smaller scale applications, we would
sign on to it. But for larger applications, consider the
very high failure and cost/delivery overrun rates found by
Capers Jones and others.
This simplistic "80-20 rule" argument does not scale to large
applications. Furthermore, a rewrite suffers from diseconomies
of scale while both renovation and re-architecting approaches,
being tool based, enjoy positive economies of scale. They become
more efficient with larger projects on both a cost and a risk basis.
Clearly, if an application were 0% Preserved and 10%
Enhanced, then the legacy assets have so little residual value
as to be mostly a liability. In this case, a replacement
of some kind, either a COTS package or a de novo
redesign/rewrite, is the only realistic option. If this
is your situation, then the rest of this essay will be of no
further interest to you.
If our findings are at the other end of the spectrum
– that 90+% of an
application's functionality should be preserved
– it would make sense to
at least consider some type of renovation and/or re-architecting
strategy. (Remember that we are not considering the
infrastructure, the implementation, external tools such as ad
hoc reporting, external integrated applications such as BPM-based
workflow, or maintenance cost issues
at this time.) Of course, life is seldom so neat.
The typical case is somewhere in between, so we are dealing with
shades of grey rather than a black or white result.
Our rough and ready rule is that if our key ratio is 1:1 or
greater, then we are biased toward a renovation or re-architecting strategy that will
extract residual value out of the legacy code base, but at less
than 1:1 our bias turns toward a replacement strategy (though
the replacement strategy might still include automated business
rule extraction). As
we will see below, this 1:1 boundary will move up or down
depending on the other parameters of the analysis.
The residual value in the application is expressed
qualitatively for an initial assessment analysis. If the initial
assessment analysis points to significant business decisions
that may be based on the results, then it may be worth the investment
of time and money to derive a quantitative value.
For our qualitative assessment of residual value, we use the
Preserved and Enhanced percentages offset against the effort of
modernizing the infrastructure. A high key ratio value
can be offset if the application is written in an obscure
language on an obsolete platform using a non-standard database
or file system. Conversely, a lower key ratio can be
offset if implemented in a standard language on a standard
platform with a standard database system.
Step 2 - IT Risk Tolerance
The second step of our analysis addresses business issues, including
both financial issues and business culture issues. Cost
and business value are clearly financial issues, but risk can be
more of a business cultural issue than a financial issue. These
business issues are mostly "how" issues, relating to the
nuts and bolts of getting the job done.
Discussions of risk tend to make technology people
uncomfortable. Technologists deal with finite state
machines, so there should be no place for probabilistic
assessments of risk. Everything we do as technology
professionals is underlain by the assumption that these systems
we build and maintain are fully deterministic.
However, the findings of complexity theory tell a different story. Modern
software systems are fiendishly complex. Once we get beyond the
level at which we can simultaneously hold all elements of an
application design in
our minds, we must begin to deal with probabilities, so we talk
about the probability of an error occurring. Complex
deterministic systems acquire inferential and probabilistic
characteristics in practice, if arguably not in theory.
There is a fundamental error in the way that we control IT risk - testing.
We always ask, "is it tested?" We never ask the correct
question, "how much was it tested?" The reality is that
non-trivial IT systems are never
100% tested, because such testing is not affordable. Programs
may be 99% tested, or 90%, or 50%, or even less, and the higher
the complexity the lower the percentage of testing is likely to
be - simply because of cost. So, we have risk in our
systems every day, risk that we largely ignore, just as we ignore
the highway risk as we plan our commutes to work. We infer that
the residual risk is trivial, and act accordingly.
And it is rare
for senior management to understand this key fact, and take
proper precautions for the business. In 1996, an Ariane 5 rocket
was destroyed because of a software error that had not been
found in testing. To save money, there was no launch insurance
on that rocket. It's a safe bet that the executive who decided
to forego the insurance did not know that the software testing
was less than 100%.
Because all non-trivial systems contain latent risks of incorrect functioning or
outright failure, we need to ascertain the level at which people
are willing to tolerate risk in their systems. Obviously,
if the consequence of an error is that you lose a $20 sale,
that's very different from losing a $400 million rocket or a $20
million aircraft with 100 people on board.
As part of the
second step of our analysis, our interviews with senior
management can generally provide a good grasp on sensitivity to
cost issues (available resources) and on the business value
issues (quality of service and business agility goals), but
discussions of risk can expose communication problems. If
you simply ask someone what their tolerance for risk is, you
will usually get an unhelpful answer: "none." This is patently
false, of course. Did they let their children out of their
sight? Did they drive to work this morning? Are their systems
set up for real-time redundancies so that no single event can
take them offline?
The reality that must be confronted head-on in our analysis is that not only
are risk and cost in mutual opposition, but that risk, cost, and
business value (quality of IT service with consequent business
agility) are all intertwined.
It's the old consultant's analysis
in a new form:
- minimum risk,
- minimum cost,
- maximum business value (quality+agility).
Pick any two.
And why is this? We can speak of complexity all we want, but
that is an abstraction. The practical explanation is that, for a
replacement strategy, we only think that we can express
the business rules succinctly enough for an actual software
implementation. It is not unusual to think that 500 business
rules will define a system, only to discover well into the
project that 500 has become 800, or even 2,000. The project risk
derives directly from a human inability to fully specify all
implementation level detail in advance of a project. It is
simply too complex a task for the human mind.
Tool assisted projects using an automated re-architecting or renovation
strategy have lower risk profiles, but these are not zero risk
either, due to the nature of software analysis and
transformation tools. We do consider re-architecting or
renovation project designs to be lower risk and therefore
constituting a more conservative technical strategy. Even
though there are different optimization scenarios, there is no
escaping the relationships among risk, cost and business value.
As business value (quality and agility) requirements rise, so do costs, so
there is a temptation to skimp on risk mitigation, e.g.,
testing, or to believe vendor assertions about a "silver bullet"
solution. Conversely, as resources are utilized to control
risks, business value will decrease, because resources are
diverted from improvements to quality and agility. This can be
seen in terms of features implemented, the robustness and
flexibility of the implementation, the ease of use of the
system, and other attributes. Similarly, there may be
compromises to save money in the short term that increase costs
in the long term.
A common fallacy is that risk can be contained by a fixed price bid from
an outside vendor. However, only financial
risk can ever be constrained in this manner, not opportunity
costs, and even the ability to constrain financial risk is not
100%. We'll return to this point below as we discuss vendor risk.
The only way to
measure risk tolerance in a meaningful way is to ask, "how much
will you pay to minimize risk on this project?" In other
words, how much will you pay as an insurance premium and how
much will you self-insure (i.e., how big a policy and with what
deductible?) You have to put it in dollars and cents. Do they
have armed guards for their children? Are they willing to pay
for a giant SUV and the corresponding gasoline costs in order to
minimize their commuting risk? Are they willing to pay for
geographically dispersed data centers in order to ensure
business continuity?
This is an
exceedingly difficult question to put to management, and
frequently the question will be passed off to IT in the form of,
"how much testing should we do?" But this is a
business question masquerading as a technical question. The
relationship between risk mitigation achieved and cost of doing
so forms a
diminishing returns curve.
At what point on this curve can you pick a level of risk mitigation
that is clearly superior to any other? All too often IT
will attempt to provide an answer. But what is the basis for
this answer? The proper response is, "we'll never find
all the bugs, we just work diligently to find the ones that
are likely to bite us. We'll test as much as you want, and after
that we just have to assume the operational risk that something
we missed will be revealed in production." There is no technical answer to this question, and
management is ill served by any attempt at a technical answer to
what is truly a business question.
What IT can do
and do well is to optimize the results of testing for a given
testing budget, just as IT maximizes business value for the
budget available. There are a variety of strategies, from
minimum risk project designs to test automation, regression
testing, and SME testing, as we will discuss below.
As we ask this
question, we accompany it with this explanation to ensure that
it is considered in the proper context. We point out that
there are residual undetected faults in the software running
today, and that maintenance programming can inadvertently trip
one or more of these faults, or create new ones. This is
not a reason to adopt the "do nothing" strategy we will discuss
in a moment.
Once we have
established a meaningful measure of risk tolerance for each
application under consideration, we apply that measurement to
the boundary line between tending toward a replacement strategy
or tending toward a re-architecting/renovation strategy.
The lower the risk tolerance, the higher the boundary, and the
more we are biased towards a conservative technical strategy,
i.e., a renovation or re-architecting strategy. Conversely, the
higher the risk tolerance, the lower the boundary, and the more
we are biased toward a complete replacement strategy. Cost
sensitivity has a similar effect, in that a high sensitivity to
cost will bias toward a renovation or re-architecting
strategy, but a preference for business value can result in a
bias toward replacement (either COTS or re-design/re-write).
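To make the direction of these adjustments concrete, the toy function below encodes them as a simple qualitative score. The weights and scoring scheme are our own illustrative assumptions, not calibrated parameters of the methodology; in practice this weighing remains a judgment call.

    # Illustrative only: a qualitative encoding of the biases described above.
    def strategy_bias(key_ratio: float,
                      risk_tolerance: float,     # 0 = very low, 1 = very high
                      cost_sensitivity: float,   # 0 = low, 1 = high
                      value_preference: float    # 0 = low, 1 = strong preference
                      ) -> str:
        score = 0.0
        score += 1.0 if key_ratio >= 1.0 else -1.0  # step 1 rule of thumb: the 1:1 boundary
        score += 1.0 - risk_tolerance               # low risk tolerance favors the conservative strategy
        score += cost_sensitivity                   # high cost sensitivity favors the conservative strategy
        score -= value_preference                   # a business value preference favors replacement
        if score > 0:
            return "bias toward renovation / re-architecting"
        return "bias toward replacement (COTS or re-design/rewrite)"

    # Example: key ratio 3:1, low risk tolerance, moderate cost sensitivity.
    print(strategy_bias(3.0, risk_tolerance=0.2, cost_sensitivity=0.5, value_preference=0.3))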
So, as we
conclude step 2, we recognize the relative importance of the
risk, cost and business value issues for each unique
organization, and use this to guide the subsequent analysis.
Step 3 - Business Case
The third step is understanding the business case, the why of
pursuing legacy modernization.
Is this trip really necessary? Is "do nothing" an option?
Despite protestations to the contrary, "do nothing" is always
an option in legacy modernization analyses, because the systems
in question are functioning in daily production. Indeed, a not
infrequent occurrence of an issued RFP for legacy modernization is a
decision to reject all proposals, resulting in "do nothing" being the
"winning" strategy. Therefore, the business case analysis must
objectively analyze the pros and cons of continuing as is, because the
costs of transitioning to the proposed new system could exceed return on
investment requirements. As part of this, we ask about the direct costs
of maintenance and indirect costs, such as business opportunities missed
or endangered. Most important of all, what payback period is required
to fund the project? Typically, we find that projects are required to
have a 24 month or shorter financial payback period, 36 months at most,
for internally funded projects. Then, once we know the period of
analysis, we can ask, "what is the payoff for the business if we succeed?"
Although we will discuss cost reductions below, it is almost never true
that legacy modernization can be justified for the savings in hardware
and software within a 24-36 month period. The most professional of
the hardware and software vendors have business analysts too - and they
constantly update their pricing to ensure that they never push their
customers to the point where they have a big payback for leaving their
existing environment. This is a modern variant on Jean-Baptiste Colbert's
17th century adage, "the art of taxation consists in so plucking the
goose as to obtain the largest amount of feathers with the least
possible amount of hissing."
The business case justification for
legacy modernization will be found (or not found) in what it means for
the business - the additional business income opportunities enabled or
the operational cost savings outside of IT resulting from business
process improvements. Going into detail on this topic is beyond
the scope of this essay, but reductions in business operational costs on
the order of 50% are not unusual for Business Process Management (BPM)
implementations when applied to appropriate problems. See the
separate discussion of BPM for an overview of what it can mean.
However, a proper business case analysis will also ask, "what's
the downside for the business if we fail?" And let's remember
that a significant percentage do fail outright, as well as
experiencing cost/delivery overruns and functionality shortfalls
among the projects that do deliver. The assessed risk times the
estimated costs of a failure must be added to the liability side
of the analysis. Similarly, an estimated probability of cost
overruns (including the financial impact of delivery overruns
and the financial impact of functionality shortfalls) should be
derived and compared to the failure impact. The larger of the
two figures should be used for risk provisions.
This risk provision is too frequently overlooked in IT business
case analyses, usually for reasons of excess optimism but
sometimes because of a business culture that looks on risk
provisions as being unduly negative. Team players should not be
negative, so there can be a perceived career risk from looking too
closely at the possibility of failure.
However, we argue that this risk provision is
important not only for the financial accuracy of the analysis,
but also because it shows the importance of a proper risk
assessment of the chosen technical and risk mitigation
strategies. When projects do go off the rails, it can be argued
that a failure of analysis at this point was the beginning of
the problems that eventually led to the unpleasant result.
Conversely, a proper analysis can point to risk mitigation
strategies that can immunize a project against negative results.
Prudence should not be interpreted as being negative, but as
sound risk management.
At the conclusion of step 3, we will understand the business case for
and against legacy modernization. If "do nothing" is the clear
winner, our analysis may shut down at this point, but more
typically the analysis will amplify the results of step 2.
However, if the results of the step 3 analysis contradict step 2
findings, we resolve the contradictions before moving on.
Step 4 - IT Cost Savings
Part of the business case analysis is realistically assessing the extent of
IT cost savings resulting from modernizing the technology. However, this
step is usually relevant only if the current platform is a mainframe.
Although we argue
that IT cost savings can never justify a legacy modernization project on
their own, at least within a typical 24-36 month payback period, there
are significant savings that can be achieved and these need to be
considered. On the other hand, if a project is going to be financed,
say, over a 7 or 10 year period, then it could well show a significant
ROI on this factor alone, using external rather than internal funding.
One of the most highly
guarded secrets at IBM is the relative cost and performance of its
mainframe platforms against its own iSeries (formerly AS400) and pSeries
(formerly RS/6000) platforms, and particularly against commodity
Intel-based servers. In 2004 and 2005, two important benchmarks established a
TPC-C rating for mainframes which allowed, for the first time, direct
comparisons of mainframe capacities to both Intel/Windows and
Intel/Linux platforms:
- Intel+Windows = 1347 MIPS
- 2x2.8 GHz Intel+Linux = 875 MIPS
As a practical matter,
we use variance error bars with a range of 200-600 MIPS equivalent for
2x3 GHz Intel systems, to stay conservative across dissimilar
architectures and applications.
Total costs for mainframes run on the order of $2,000 to $5,000 per MIPS per
year, including the cost of the hardware, maintenance, and software
charges. Using the lower figure of $2,000/MIPS/year, the annual mainframe cost
for capacity equivalent to a 2x3 GHz Dell server costing $5,000-$10,000 one time
is $1-3 million per year, or more than 100 times greater. This
disparity is so large that many IT executives with long-term mainframe
experience cannot accept it until they duplicate the results themselves.
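A back-of-the-envelope version of that comparison, using the benchmark MIPS equivalents and the lower cost-per-MIPS figure quoted above; the three-year server refresh cycle is our own illustrative assumption, not a figure from this essay.

    # Rough comparison built from the figures quoted above.
    cost_per_mips_year = 2_000          # lower bound of mainframe cost, $/MIPS/year
    mips_equivalents = [875, 1_347]     # benchmark MIPS figures for two-way Intel servers
    server_price = 10_000               # upper bound one-time cost of the commodity server
    server_life_years = 3               # assumed refresh cycle (illustrative)

    server_annual = server_price / server_life_years
    for mips in mips_equivalents:
        mainframe_annual = mips * cost_per_mips_year
        print(f"{mips} MIPS: mainframe ~${mainframe_annual:,.0f}/year "
              f"vs server ~${server_annual:,.0f}/year (~{mainframe_annual / server_annual:,.0f}x)")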
Of course, as a
technical matter, comparing otherwise identical applications running on
platforms with dissimilar architectures, infrastructures, and
implementations must necessarily show some significant performance
variance. And as soon as one raises a discussion about these relative
benchmarks, someone jumps to their feet and talks about the problems
associated with cross-platform benchmarks.
These remarks are generally both absolutely true and totally
irrelevant. Typically cross-platform variance is on the order of
+/- 50%, and pathological examples can show variances on the
order of a factor of two or three, as you move an application from platform to
platform. But when we compare platforms in which the difference
is a factor of 10 or 100, technical factors are simply
irrelevant. Only business factors matter: cost, stability and
security are issues that must be satisfied in order to even consider
capturing the cost benefits of moving from a mainframe to RISC or Intel
platforms. It is absolutely true that proprietary platforms and
mainframe platforms in general are as rock solid and secure as humanly
possible. But the most relevant question is not “What is the best?” but
rather “What is needed?” and “What can be afforded?” Today, Windows and
Linux on Intel, properly managed, can deliver stability that exceeds the
business requirements of most applications. Mean time between failure (MTBF)
ratings for recent releases of Windows exceed one year.
On the other hand,
some applications can absolutely justify the high cost of mainframe
technology. We recently completed a system design for a new application
for a major US metropolitan police department in which the use of
mainframe technology made sense, and we recommended an IBM mainframe
running both mainframe Linux and z/OS, splitting the workload between
the two environments. For another example, we have reviewed
high-transaction-rate financial applications that have had a downtime
cost in excess of $5 million per hour. From this point of view, $20
million on a mainframe is not a difficult decision to make. There are
also cases where the central administration advantages of a mainframe
outweigh the platform cost disadvantages. And, to be sure, IBM continues
to improve the price/performance of its mainframes.
However, performing a benchmark or a complex return on investment
calculation misses the point. The best way to find out what
hardware and software platforms are really needed is to ask the
question, "if you were
implementing a replacement application today, and if you had your choice of
any platform, would you or would you not choose a mainframe?"
Far too often we
find the justification for a mainframe to boil down to the fact that it
is currently running on a mainframe, a circular argument that we find
wanting. But where there is a solid justification for a mainframe, we
are very comfortable supporting that recommendation.
The conclusion of our
step 4 analysis is compared to the business case analysis to see whether
this factor increases the bias one way or the other, or whether it
proves the platform question to be irrelevant.
Step 5 - Schedule Goals
When would you like to have the system? When
must you have the system? What are the consequences if the
delivery is late? What is your Plan B? Step 5, like step 4, is a
check on the business case analysis.
Clearly, an aggressive schedule is going to have a negative impact on
the cost, quality and risk parameters. It is also easy to lose
sight of the fact that, although one woman can make a baby in 9 months,
it is not possible to hire 9 women and induce them to produce a baby in
one month.
At the conclusion of the schedule
analysis, a finding of an aggressive schedule sharply biases the
analysis toward a conservative technical strategy, in some cases
overriding all other considerations. The less you have to
change, the less you are likely to break in the process, and the
sooner you will be up and running.
Step 6 - Resource Analysis
What resources do we have available? How elastic are those
resources? Step 6 serves as a check on the step 2
analysis, but also allows us to establish as early in the
analysis as possible whether or not an organization has a
realistic match between goals and resources. Too
frequently we find a case of champagne tastes on a beer budget,
if not a lemonade budget.
But resources include more than just
money, though money is very, very important. Almost as important is
management commitment and the ready availability of subject matter
experts (SMEs). One of the best encouragements towards a successful
project is the dedication of key SMEs, without interruptions. On the
other hand, interrupted availability of SMEs, particularly interruptions
of unpredictable timing and duration, is one of the best ways to ensure
late delivery, cost overruns, and quality shortfalls. Promising
availability of SMEs and then not providing them consistently is also
one of the best ways to end up in court with your vendor over the
resulting delays and overruns.
The step 6 analysis goes beyond just checking on the anticipated
budgets. In step 6, we begin to construct straw man project
plans for renovation and re-architecting, at least to the level
of detail whereby we can derive a preliminary budget. If the
budget is not there for these least cost projects, creating a
straw man re-design/rewrite proposal will be a waste of time. If
it is reasonable to proceed, a risk and benefits estimate will
also be prepared.
Step 7 - Specifications for a Re-Design/Rewrite
The cost of a replacement application of course varies
considerably from one application to another, which adds
difficulty to a straightforward analysis of the relative
pros and cons of renovation or re-architecting against
replacement. But let's take a minute and explore the cost
basis of redesigning and rewriting an application, which is
the default strategy for most legacy modernization projects.
This is also the baseline cost against which the costs for
other strategies must be measured. We need at least a straw
man project design with cost estimates to compare to the
results of step 6.
Years ago, the cost of programming dominated the cost of
writing a new application. While the cost of programming
remains an important component of the overall cost, it has
been eclipsed by the cost of writing complete and correct
specifications. Indeed, getting the specifications right is
the major source of the risk that projects will run over on
cost and delivery, while failing to deliver all desired
functionality.
Recall that the discussion on failure statistics referred
only to major projects, 10,000 function points or greater for
the Capers Jones study. This can be roughly equated to
1,000,000 lines of COBOL or C, or 1,000 programs. Only about
10% of projects of this size or greater deliver on time and
on budget. Although the study does not
specifically state it, we can be assured that almost all of
the sample projects were legacy modernization projects, for
the simple reason that virtually all projects today are
replacing existing applications.
Our recommendations for large scale projects can be very
different from our recommendations for otherwise similar
small scale projects. The reason is very simple: complexity.
As projects increase in size linearly, their complexity
increases geometrically. A system of 1,000,000 lines
of code can easily be 100 times as complex as a system of
100,000 lines of code. This has direct bearing on the
relative risk of different sized projects.
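If we take the (illustrative, not measured) assumption that complexity grows roughly with the square of size, the arithmetic behind that claim is one line:

    # Illustrative assumption: complexity grows roughly with the square of program size.
    def relative_complexity(loc, baseline_loc=100_000, exponent=2):
        return (loc / baseline_loc) ** exponent

    print(relative_complexity(1_000_000))  # ~100x the complexity of a 100,000 line system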
As a result, we are very careful in discussing a rewrite of
a project of this order of magnitude, because of the risk.
By contrast, we were recently asked about automated
translation of 50 programs written in an obscure language
into a standard language, and our recommendation was to
rewrite them (though not to re-design them!)
This is such a small amount of code that our risk concerns
were minimal for a rewrite.
As a real world example, we recently analyzed a library of
COBOL code for a prospective modernization project. A full
re-design/rewrite effort was estimated at about $8.5 million
by our client, a major system integrator, based on the
published specifications. Our code analysis revealed
106,000 independent logical pathways through the various
programs, in 1.1 million lines of code, making this old
legacy application very complex indeed. We estimated
that the published specifications included perhaps half of
the existing business rules, none of which were obsolete.
It is not hard to see how this project would unfold without
providing for this complexity. The vendor winning the
project would start by conducting JAD sessions to add
details to the published specifications, and then would
begin to implement those specifications. Once implemented,
they could not be put into production, because of the
insufficiently detailed logic that would be revealed in
testing. So a cycle would begin of adding additional
specifications, increasing the cost through change orders,
then more testing, then adding more specifications and more
re-work to the cost, and so on. A cycle like this could go
on for years, and frequently does. The application owner
failed to understand the complexity of his own application
and take suitable provision. Sadly, this is not unusual.
Getting complete and correct specifications is the
critical issue for writing a replacement application. If we
actually had truly complete and correct specifications, the
cost of modern programming would result in, relatively
speaking, affordable replacement applications, and there
would be no question of alternative strategies. The reality
on the ground is driving the interest in renovation and
re-architecting alternatives.
One of my colleagues at the Cutter Consortium,
Jim Highsmith, captured the essence of the problem very
succinctly at one Consortium Summit:
"At the beginning of any project, your specifications are
70% complete ... and 50% correct."
And of course, once you get to 95% complete and correct,
getting that last 5% is where most of the rework occurs and
most of the cost escalation originates.
Jim's solution to this problem is agile programming, a
solution that we agree with, up to a point. Agile is a
brilliant technical and project management strategy for
those smaller scale, low to moderate complexity
applications. However, we recommend against attempting to
scale it up to the level of the 1,000,000 line of code
project. Before you reach that point, the cost of
re-factoring will overwhelm the productivity advantages
because the cost of re-factoring increases geometrically
with the increase in size of the legacy code base. Only if
you can partition the project into 10 100,000 line projects
will agile work as expected, and this is much harder to do
than enthusiasts are ready to admit. Indeed, Capers Jones in
his 2004 study was unable to find any agile projects of the
appropriate scale to include in the study.
The conclusion of step 7 is a rough order of magnitude
estimate of costs, risks and benefits for a straw man
re-design and rewrite project, for comparison to the similar
estimates for renovation and re-architecting.
Rough Order of Magnitude Cost Comparisons
Before we go on to step 8, it may be useful at this point to provide a rough
understanding of the sort of differences we are talking about among
these 3 different approaches. Let's review the example of the 1.1
million lines of code project referred to above, $8.5 million for the
published specifications, but probably $12-15 million once all the
cycles of change orders finally finish.
When we analyzed the As-Is versus To-Be systems during step 1 of our
methodology, we found that Obsolete was minimal, Preserved was about 85%
of the existing code base, and Enhanced was about 15% of the existing
code base. Wholly New code was estimated at roughly 25% of the existing
code base.
The straw man estimate for an automated re-architecting project was about $4 million, or
roughly half of the initial re-design/rewrite estimate. This was about
$3 million for the Preserved and Enhanced code, and another $1 million
for the Wholly New code.
Moreover, this approach is unlikely to have significant cost overruns because the
re-architecting methodology starts by extracting the existing business
rules from the existing code base. The $4 million estimate consisted of
$3 million for reproducing and testing the original (Preserved +
Enhanced) functionality in the new infrastructure, and $1 million to
implement the enhancements + the Wholly New logic. To this we
added a fairly conservative estimate of a 100% provision for cost overruns
in the Wholly New logic due to specifications that might have been
missed, or $1 million, for a total recommended provision of $5 million.
The straw man estimate for the renovation method was about $1 million for the
Preserved + Enhanced functionality, 1/3 the cost of automated
re-architecting, plus another $1 million for the
enhancements + Wholly New code. To this we again added a fairly
conservative estimate of a 100% provision for cost overruns in the
Wholly New logic due to specifications that might have been missed, or
$1 million, for a total recommended provision of $3 million.
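The arithmetic behind these two straw man figures is simple enough to lay out explicitly. The sketch below just reproduces the example's round numbers (in millions of dollars), applying the 100% overrun provision to the enhancements plus Wholly New portion; nothing in it is an independent estimate.

    # Reproduces the straw man arithmetic for the 1.1 million line example above.
    def straw_man(preserved_enhanced, new_and_enhancements, overrun_rate=1.00):
        """Base estimate plus a provision for overruns in the Wholly New logic."""
        base = preserved_enhanced + new_and_enhancements
        provision = overrun_rate * new_and_enhancements
        return base, base + provision

    for name, pe, new in [("re-architecting", 3.0, 1.0), ("renovation", 1.0, 1.0)]:
        base, with_provision = straw_man(pe, new)
        print(f"{name:16s} base ${base:.0f}M, recommended provision ${with_provision:.0f}M")
    # For comparison, the re-design/rewrite estimate was $8.5M against the published
    # specifications, with a likely $12-15M outcome after change orders.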
This is only one example, and various technical factors cause each of the straw
man estimates to vary from one project to another. You can be assured
that your mileage will vary when conducting a similar
analysis with your systems.
A very rough generalization is that if a redesign/rewrite were
estimated at $10 million, in round numbers, the risk is 70% that
the project will exceed $13.5 million. We recommend
financial provision of between 35% and 100% of the initial
estimates for the risk of cost overruns. We will return to
this point as we discuss vendor risk.
A similarly rough generalization is that, for the same project, a
re-architecting project will be $3-5 million, depending on the
effort for implementing the enhancements and wholly new code. We
expect re-architecting to be up to 50% of the cost of a
re-design/rewrite, with minimal risk of cost overruns.
A similar rough generalization for a renovation project is
$0.5 million to $2 million, so that we estimate renovation at up
to 20% of a re-design/rewrite, again with minimal risk of cost
overruns.
In summary, for a project estimated at $10 million as a re-design/rewrite:
- Re-design/rewrite: 70% chance of the final cost exceeding $13.5 million
- Automated re-architecting: $3-5 million
- Renovation: $500,000 - $2 million
It is clear why we tend to prefer automated re-architecting and renovation over a
re-design/rewrite, all other things being equal. But all other
things are not equal. The business value is often greater (or at
least perceived to be greater) in a re-architecting or re-design/rewrite
approach, though the risk is greater as well as the cost. And, of
course, if the residual value in the legacy application is very low,
neither the re-architecting nor the renovation strategy will have much
to offer.
However, we have one more strategy that we have to
review – a commercial off the
shelf (COTS) solution – before we can consider hybrid solutions.
Step 8 - Straw Man Estimate for COTS
If we could license a COTS package and
plug it in unmodified, it would be the best solution from a
risk point of view. We could travel to other sites and see
the exact software that we would be using in daily
production, and learn from the experiences of those sites.
Without trivializing the effort of converting legacy data
into a COTS package, which is not without some risks, this
is by far the safest solution
– provided the COTS
package is used without modification to the programs.
Unfortunately, this is
rarely the case. Usually it is a choice of modifying the
package or changing the underlying business process to match
the package. Although there may, arguably, be benefits to
adopting a new business process, doing so is not without
cost. Management may feel that it is less expensive and/or
less risky to change the package than to change the business
process.
So, we are back to analyzing the existing system, though this time
the analysis is to determine the specifications governing
the required modifications that will allow adoption of the package (or
packages) under consideration.
If the business process is not going to change, the only fully
complete and correct specification of the rules that govern the
business process is the legacy source code itself. However, we
only have to apply the functionality analysis from step 1 to the
COTS straw man project. The infrastructure of the legacy system
is irrelevant here.
We will still have Obsolete functionality that we can ignore.
We have functionality that needs to be Preserved in moving to
the COTS package, and we have functionality that needs to be
both preserved and Enhanced during the move to the package.
Finally, there is the Wholly New functionality that must be
implemented if not already supported by the package.
Assuming that any modifications that are to be made will be outsourced to the
COTS vendor, the COTS project must provide that vendor with the full
list of specifications required to go into production with the new
system. Unless the modifications are trivial, we recommend against
attempting to modify a system with which you have no experience.
But as we saw in the re-design/rewrite analysis in step 7, doing this
precisely and completely is the most difficult part of the project. It
is equally so with implementing a COTS solution.
As a result, a COTS project can be subject to the same problems in
delayed delivery, cost overruns, and outright failure as a
re-design/rewrite project, in exact proportion to the degree of
modifications required. The dual tombstone in the graphic provides the
most important lessons learned from COTS projects in trouble.
Step 8 concludes with a
straw man estimate of the costs, risks and benefits of pursuing each
COTS solution under consideration. This estimate is appropriate for
comparison with the other 3 estimates derived in previous steps.
The 1.1 million lines
of code example project referred to above also included a COTS
alternative. Even though it is a single data point, it is illustrative
of what can occur. The gap analysis was significant, with only limited
overlap with the Preserved, Enhanced and Wholly New functionality identified
in the legacy system analysis. Based on the published specifications, which we knew to
be incomplete, the estimate was $11 million (including license fees for
the system) and three years elapsed time. In other words, it was more
expensive and just as time consuming as a complete re-design/rewrite.
However, the risk analysis, though presumed to be lower than the
re-design/rewrite case, remained significant, because we knew that the
published specifications were incomplete. Therefore, we felt it was
likely that the overall cost could run to $12-$15 million before it was
complete.
Using our example
project, we can summarize our results from steps 1-8:
- Re-design/rewrite: $8.5 million against the published specifications, likely $12-15 million after change orders
- Automated re-architecting: $4 million, with a recommended provision* of $5 million
- Renovation: $2 million, with a recommended provision* of $3 million
- COTS: $11 million including license fees, likely $12-15 million, and roughly three years elapsed time
* The provision
for the re-architecting and renovation straw man projects was for
potential expansion of specifications in the Enhanced and Wholly New
areas. We felt that no provision was necessary for the Preserved
code, nor for the stated specifications for Enhanced and Wholly New.
Note that in this example, "do
nothing" was not considered because a replacement system had been
identified as a business necessity.
Step 9 - Other Issues
Other issues may or may not affect the conclusions from
steps 1 - 8. Step 9 looks at these residual issues.
Testing is frequently cited as the largest single expense in
any IT project, though that may be true only when testing is
done to very high standards. The ugly truth that we faced up
to in step 2 was that there are always residual undetected
faults in non-trivial systems, so that we ask that the
testing budget be based on business criteria. Scientifically
thorough testing is not affordable in virtually all
commercial IT projects. Indeed, even the measurement of
testing thoroughness, known as test code coverage analysis,
is rare in commercial testing.
There are two general approaches to testing: determine if a
program is operating correctly ("validation testing"), or
determine if a program is operating the same as another
program ("regression testing"), presumably one that it is
replacing. Though it may not be obvious at first glance, it
is both more accurate and significantly less expensive to
adopt a regression strategy. However, regression testing is
primarily applicable to renovation and re-architecting
projects, and only minimally applicable to others.
Why is this? For validation testing, the tester must
determine what is valid functioning and what is not. That
set of criteria must be documented, and data found or
created that will appropriately exercise the code. Then
the test must be performed, multiple times if problems are
found. Validation testing must prove the program to be
correct, and determining what "correct" means can take a
very long time, in direct proportion to the complexity of
the program code. It's not the execution of the test that is
expensive, but the creation of the test case to execute that
drives the cost.
By contrast, with regression testing, we only have to
execute the old program, and compare its results to the new
program. Properly speaking, the tester should be using test
code coverage analysis to ensure that the data being used
tests the program thoroughly enough, but the creation of the
test case is much simpler. The tester does not have to learn
the program and what it is supposed to be doing, only how to
run it. Proving that it is the same as another program is
therefore significantly easier.
There is a further issue specifically with legacy
modernization projects that makes testing more expensive than
maintenance testing. Since all of the legacy modernization
strategies impact the whole system, all aspects of the
system must be tested, down to the tiniest detail. By
contrast, during normal program maintenance, we only have to
test the changes to a program or small group of programs.
The cost of such broad brush testing is a direct function of
the complexity of the system. A renovation project may be
significantly less expensive than a re-design/rewrite
project, but the testing budget should be the same. So, we
could end up with a $5 million rewrite effort with a $3
million validation testing budget, which does not seem
unreasonable at first glance, but a $500,000 renovation
effort with a $3 million testing budget will set off
financial alarm bells.
Consider the example system cited above, with 106,000
logical pathways through the program code. Devising tests to
exercise and validate each of these pathways is a daunting
technical effort, and the cost exceeds any budget likely to
be allocated. So human judgment is invoked to triage
pathways that don't need to be tested, in the opinion of the
technician making that judgment. When human judgment is
involved, errors will occur, and thus we will have residual
undetected errors because we never tested those logical
pathways.
However, in attempting to optimize the greatest business
risk reduction for the testing budget we are given, aspects
of testing may influence the project design and technical
strategy. The lower cost of regression testing has
implications primarily for renovation and re-architecting
strategies, as regression rarely has a significant component
in either a re-design/rewrite project or a COTS project.
Consider the example above. The rewrite approach requires
the $3 million validation testing, because regression is not
applicable, but a regression approach to testing a
renovation project could cut the cost of testing (to the
same degree of accuracy) to perhaps $1 million.
The difference in cost between validation and regression
testing (at the same level of accuracy) can be so
significant that it makes financial sense to break the
project into two parts, if the Preserved percentage is
large. First, prove that the re-architected or renovated
system duplicates the functioning of the legacy system
exactly via regression testing. Second, perform the
enhancements. These have to be validated, but we only have
to validate the changes, not the whole system, just as in
normal maintenance testing. Third, implement the Wholly New
functionality. This new functionality must be validated as
well, but since it is disjoint from the Preserved and
Enhanced functionality there is no more effort to do
so than would have been the case using solely a validation
strategy.
The above discussion on validation versus regression testing
presumes that testing will be outsourced or else conducted
by staff without an in-depth knowledge of the system. When
significant system knowledge is available to be leveraged in
testing, this subject matter expert (SME) testing can prove
to be less expensive still than regression testing, provided
that SMEs are consistently available to the project. This is
not because SMEs are inherently more efficient than any
other testers, but they can much more effectively triage
what does not get tested thoroughly than someone with
no intimate knowledge of the system. Thus SME testing is
less accurate than outsourced testing, but if the SMEs
really know the system the business risk reduction should be
fairly close to testing triaged by people without their
depth of knowledge.
If a regression or SME testing strategy will have a
significantly positive impact on the straw man analyses for
re-architecting and renovation, step 9 will update them
accordingly.
Note that, in the example above, we compared a $3 million
validation budget to a $1 million regression budget, to
test to the same level of accuracy. In other words, we
would test the same subset of logical pathways through the
program code via each method. But we argued above that a
testing budget should be based on business criteria, not
technical criteria. If $3 million is the correct
business criteria testing budget, then we should spend $3
million on regression testing as well as validation testing.
SMEs can help direct that testing to where accuracy is the
most important, resulting in the greatest business risk
reduction for the budget. The differences between
validation, regression and SME testing are in the degree of
business risk reduction obtained for the budget spent.
Data cleansing is a related issue in legacy
modernization that must be considered in designing and
costing a project. However, data cleansing generally has the
same costs across all legacy modernization strategies, so
it will not affect a decision directly.
We usually recommend cleansing the data on the original
platform, so that data issues do not create false positives
in testing the modernized application. A full investigation
of data quality issues is properly speaking a separate
project, so that our methodology asks subject matter experts
for their opinion on budgeting for data cleansing. This can
be expanded if requested.
Merging Variant Systems
Merging two or more similar application systems is one of the most
difficult and risk prone activities one can undertake. Yet
it is a business necessity in many cases, particularly as a
result of an acquisition.
Like data cleansing, an assessment of the requirements of a
merger project is properly speaking a separate project.
However, unlike data cleansing, the requirement for a merger
can impact the selection of a modernization strategy. This
can bias the analysis of a project towards a re-design and
rewrite strategy or towards a re-architecting strategy.
Pulling requirements from two systems during re-design would
appear to be pretty straightforward. However, this can
suffer from the same specification problems as any
re-design project, and we consider it to be unduly risky.
Nevertheless, this may be a necessary approach in some cases.
If merging variant systems is a requirement, it will be
included in the assessment under our methodology. The
costing impact will be assessed based on the degree of
functional overlap, which will be investigated by
discussions with subject matter experts.
Source Code Risk Factor
When any legacy modernization strategy plans to utilize the legacy source code, it is necessary to assure that all of the source code is indeed present and, furthermore, that it is the correct version. If the source code is not tightly controlled with a change management system, assuring that the project has the correct versions can increase the costs of re-architecting and renovating the system.
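Where that control is in doubt, even a simple inventory audit can expose the gap before it becomes a project cost. The sketch below, offered only as an illustration, compares the source members actually on hand against a manifest exported from the change management system; the manifest format, directory layout and use of SHA-256 checksums are assumptions for the example.

# Illustrative sketch: confirm that every source member listed in a change
# management manifest is present on disk and matches its recorded checksum.
# The manifest format (member name, SHA-256 hash) and paths are hypothetical.
import csv
import hashlib
from pathlib import Path

def sha256_of(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def audit(source_dir, manifest_csv):
    source_root = Path(source_dir)
    missing, mismatched = [], []
    with open(manifest_csv, newline="") as f:
        for member, expected_hash in csv.reader(f):
            candidate = source_root / member
            if not candidate.exists():
                missing.append(member)
            elif sha256_of(candidate) != expected_hash:
                mismatched.append(member)
    return missing, mismatched

if __name__ == "__main__":
    missing, mismatched = audit("legacy_source", "cm_manifest.csv")
    print(len(missing), "members missing;", len(mismatched), "members at the wrong version")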
Vendor Risk Factor
If some or all of a project is going to be outsourced, particularly in the case of an application replacement strategy, then our analysis will consider vendor risk. We do not consider a fixed price bid approach to provide adequate protection for a client's interests in all cases.
Vendor risk comes in a variety of forms. It includes competence issues, such as failure to execute or failure to execute correctly, and ethical issues, such as what some call "playing the change order game" to maximize their profit.
It is essential to understand the source of vendor risk. All IT projects have, with apologies to Donald Rumsfeld, both known unknowns and unknown unknowns. These factors are only partly predictable, and can have unforeseeable consequences. The client generally seeks to minimize their risk by shifting it to the vendor. Conversely, the vendor will seek to minimize their own risk by shifting it to the client. Therein ensues a struggle that is sometimes the basis of upfront negotiations, but often is ignored during bidding and contracting, only to reappear once work begins.
This struggle can leave the client open to the change order
game, which can be subtle. A vendor will respond to an RFP
giving stated specifications with a low-ball bid that will
indeed implement the specifications as stated, but knowing
all the while that the specifications are incomplete and/or
incorrect to some extent. When the inevitable changes to the
specifications occur, change orders come at inflated prices
so that the vendor can make a profit on the project.
Unfortunately, this is a strategy that works very well, so
that the unscrupulous vendor will prosper while an honest
vendor will lose out. This is one of the problems with a
client relying on a fixed price bid in order to shift risk
to the vendor. The vendor's tactics will defeat the intent of a fixed price bid by manipulating the refinement of specifications, and the client may not get the most favorable outcome. We deal with problems like this through game theory. For example, the change order game can be defeated by cost plus fixed fee bids, provided actual cost can be established objectively. Risk shifting is best dealt with through a formal recognition that risk will be assumed by one party, by the other party, or shared.
There is another way in which fixed price bids are defeated, though this tactic carries some risks for the vendor. We have seen vendors change the rules of the game part way through a contract, essentially by bullying the client: "Yes, it's true that we are 50% over budget and it's our fault, but unless you find the money to pay us anyway we are going to stop work and you'll have to sue us." Because the client needs the project completed ASAP, and because it would be personally and professionally embarrassing to have a project go to litigation on their watch, client management will sometimes capitulate and pay up. This is a difficult problem to handle, and it takes careful planning on the part of the client before letting the contract and starting the work.
There is also
an economic reality that needs to be understood. Small vendors
have shallow pockets. If you choose to do business with a small
vendor, you cannot treat them as if they were one of the giants
of the consulting world. Typically, you will get much better
value for money with a small vendor, properly managed, but to do
so means assuming most of the project risk. If you choose not to
accept any project risk, then you must be prepared to pay the
prices charged by the major consulting firms.
In general, we approach
vendor risk in several ways. In all cases, the personality and
methodology of the project manager is key to controlling vendor risk.
Frequent intrusive observations of the daily work process by a
knowledgeable project architect, both on vendor premises and client
premises, can reveal problems while they are still manageable. Even
weekly project status meetings will not necessarily reveal all problems.
We also recommend specifically including project risk in any RFP, and establishing upfront who owns the risk. If the risk is going to be pushed onto the vendor, we recommend a cost plus fixed fee bid rather than a simple fixed price bid. If the client is going to assume the risk, we recommend either agile development methods with deliveries of working code at intervals no longer than one month, or a minimum risk project design such as the one discussed below. If a replacement strategy is taken but the project is too large for agile methodologies, or if the Enhanced and Wholly New portions of a project are similarly too large for agile approaches, then a unique approach needs to be crafted. Remember that all waterfall project specifications are incomplete and contain errors within the stated specifications; be vigilant, and be ready for inevitable problems.
Step 10 - Hybrid Project Design
From the analysis in steps 1-9, we derived a straw man
project for each relevant modernization methodology, in
preparation for a comparison and decision by management.
However, in our example analysis summarized above, we felt a
hybrid renovation/re-architecting approach would provide a
superior project of the lowest possible risk, due to improved
technical and operational risk reduction from regression
testing. Step 10 analysis derives and estimates any viable
hybrid project design.
Risk reduction in a major project is based on the fundamental principle underlying agile development methodologies: many small deliveries of working code are less risky than a few big deliveries. This hybrid design took the principle a step further, by allowing the legacy and replacement systems to operate in parallel against the same data. A single, big bang cutover to a replacement system is the situation in which we see the high overrun and failure rates reported by Capers Jones and Warren Reid. Reversing this principle, if we have as many small steps as possible, the corresponding overrun rates should be driven close to zero, and the failure rate will be zero, for the simple reason that we always have a working system under this design.
Here is a summary of the steps for a minimum risk legacy modernization project designed for one client:
- Using renovation technical strategies, move the application to the target hardware and operating system environment and place it into production.
- Using renovation
technical strategies, replace the non-relational database
with an RDBMS (Oracle in this case), also running in
production. (Note, these first two steps may be reversed.)
- Using re-architecting
technical strategies, extract the existing business rules
from the legacy code and populate a rules engine or a
componentized design with an appropriate technical framework
to handle the user interface. In parallel, the Wholly New
development begins using agile programming.
- Using regression testing, prove the business rules extraction by direct comparison with the legacy transaction, side by side, retiring each legacy transaction as the new transaction is proven equivalent to the old and placed into production, one module at a time (a comparison harness along these lines is sketched after this list).
- Prove Wholly New components using conventional validation testing as they are ready to be added into production.
- Retire the old system when the last transactions, reports and batch programs have been replaced with proven new technology equivalents.
- Implement the Enhanced functionality
in the designated system modules, using conventional
maintenance programming and validation of the changes.
As new specifications evolve
and are added to the project for Enhanced and Wholly New
modules, they are integrated into the project just like any
maintenance change. In fact, the transition from software
development mode to software maintenance mode will be gradual
and a matter of definition.
- Use of the rules engine in this case
provided both the ability for business analysts to modify
processing rules in the future, and an easy path into new
technology for the legacy programming staff. However,
this hybrid project methodology could have just as easily
created a C#/.NET or Java/J2EE application.
- Because the legacy and new versions
of the transactions ran side by side, the new could be
phased into production with initially a few users, then a
whole office, then the full user base. In addition, if any
problem is missed in testing and found in production use,
the users can drop back to the legacy transaction screen
while the problem is fixed. This removes the largest risk of
the project – moving
the new code into production in a big bang.
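As a sketch of the side-by-side proving step referred to in the list above, the harness below replays recorded inputs through both the legacy transaction and its re-architected replacement and reports any divergence. The two implementations are represented as plain callables and the normalization rules are assumptions for illustration; in practice the harness would sit in front of the actual legacy and replacement transaction interfaces.

# Illustrative regression harness: replay the same recorded inputs through the
# legacy transaction and its re-architected replacement, then diff the results.
# The callables, recorded inputs and normalization rules are hypothetical stand-ins.

def normalize(response):
    # Ignore fields that legitimately differ between implementations (timestamps, trace ids).
    return {k: v for k, v in response.items() if k not in {"timestamp", "trace_id"}}

def run_regression(recorded_inputs, legacy_txn, new_txn):
    mismatches = []
    for request in recorded_inputs:
        old = normalize(legacy_txn(request))
        new = normalize(new_txn(request))
        if old != new:
            mismatches.append((request, old, new))
    return mismatches

if __name__ == "__main__":
    # Stand-in implementations; real ones would invoke the CICS/COBOL transaction
    # and the new service, respectively.
    legacy_txn = lambda req: {"balance": req["amount"] * 2, "timestamp": "legacy"}
    new_txn = lambda req: {"balance": req["amount"] * 2, "timestamp": "new"}
    inputs = [{"account": "A-1001", "amount": 125.50}]
    diffs = run_regression(inputs, legacy_txn, new_txn)
    print("equivalent" if not diffs else "%d mismatches found" % len(diffs))

A legacy transaction would be retired from the comparison only after it has run clean for an agreed period, which matches the one-module-at-a-time retirement described above.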
The step 10 analysis
derived this hybrid design based entirely on minimizing risk,
and then estimated its cost like the other straw man designs.
For our example project, this yielded a $4.5 million estimate,
plus a $1 million provision (again, only for expansion of
requirements). In effect, we are doing both the renovation and the re-architecting designs, in order to capture the benefit of insuring against failure. Such a project design cannot fail, because we always have a fully functioning system. Plus, benefits begin to accrue early in the project along with the invoices, instead of only invoices during the project and all benefits arriving only at the end.
But this is not the only possible hybrid design. COTS
modification could be combined with business rule extraction,
for example, to feed into JAD sessions. Similarly, business rule
extraction could be combined with a re-design, or renovation
might be combined with any other strategy just to get onto the
new platform as soon as possible. Step 10 looks at all viable combinations.
Summary - How to Get to Success
Analyzing an organization's legacy assets is more like software archeology than modern software design and implementation, because we look for ways to extract residual value from those assets whenever possible.
We have set out a 10 step methodology to get from legacy assets to a successful modernized application.
In step 1, we analyze the infrastructure of the application without any regard to business functionality. We then analyze the program logic that supports the business functionality, and assign the associated program source into one of three buckets: (1) Obsolete, (2) To Be Preserved and (3) To Be Enhanced. We estimate the amount of new code that will be required for Wholly New functionality, and express this as a percentage of the sum of the other three buckets. We ask, of the code that is categorized as Preserved and as Enhanced, "how well does the program logic support the current business process?" If not 100%, we ask, "to what extent has the underlying business process changed since the system was designed and implemented?" and "how much of that change is being supported by the system?"
In step 2, we ask how to optimize among minimum risk, minimum cost, and maximum business value. We ask, "how much will you pay to minimize risk on this project?"
In step 3, we ask, "what payback period is required to fund the project internally?" and "what is the downside for the business if we fail?"
In step 4, if the
current application runs on a mainframe, we ask: "if you were
implementing a replacement application today and had your choice
of any platform, would you or would you not choose a mainframe?"
In step 5, we ask if there are any deadlines that must be met, and what the Plan B is if those dates are missed.
In step 6, we
derive straw man budgets for re-architecting & renovation
projects, and perform a sanity check against available financial
and personnel resources.
In step 7, we derive a rough order of magnitude estimate of costs, risks and benefits for a straw man re-design and rewrite project.
In step 8, we derive a rough order of magnitude estimate of costs, risks and benefits for a straw man COTS implementation and modification project.
In step 9, we ask whether regression testing or SME testing could have an impact on the straw man analyses; whether change management is enforced, and what the integrity of the source code library is; whether data cleansing will significantly impact the costing of the alternative strategies and, if so, to what extent; and whether merging variant systems is a requirement and, if so, what the degree of functional overlap between the variants is. We also discuss potential areas where vendor risk could be a problem, and how to mitigate that risk as services are proposed.
In step 10, we derive viable hybrid project designs, if any, and provide a rough order of magnitude estimate of any attractive candidates.
Using a real example, this methodology yielded the following results:
[Table: rough order of magnitude estimate and provision* for each straw man project strategy, including the modified COTS package and the hybrid (.NET/rules engine/Oracle) design.]
* The provision for the
re-architecting and renovation straw man projects was for
potential expansion of specifications in the Enhanced and
Wholly New areas. We felt that no provision was necessary
for the Preserved code, nor for the stated specifications
for Enhanced and Wholly New.
We assert that, if there is any significant
residual value in the legacy application, it may make sense to
utilize technologies and methodologies for extracting that
value. Doing so saves both money and time, and reduces