Trapped by Stranded Investments
There is a great
deal of advice available on strategies for implementing new technology
and new project management methodologies, but effective advice
is relatively rare for the many sites with stranded investments in
legacy IT systems and their respective support organizations.
Many businesses, government agencies, and other organizations are totally
dependent on applications that may be 20, 30, or even 40 years old. For example, we
are currently working with a state government that has dozens of
applications stretching back to the late 1970s implemented on, it seems,
every hardware and software platform ever built, from mainframes to
business process management (BPM). Reviewing the organization’s problems
is more like software archeology than modern software design and
implementation. But our review shows that, for each of its high-cost
applications, there is an effective strategy to escape the cost trap it
is in.
The investments in these applications have paid off over the years and
continue to pay off today because even aging software will still serve
the business purpose for which it was implemented. However, the problem
comes from the high annual cost of operating and maintaining the old
application, which can easily be 10 times or more what the same
application would cost to operate and maintain if re-implemented using
modern technology. We consider these investments to be stranded because this
high built-in cost drains the budget of funds that should be used to
improve productivity both inside and outside of IT, including the
modernization and/or replacement of those legacy assets.
Every year there
will be enough in the budget to keep these legacy systems going, but not
enough to replace them wholesale. Given their high fixed costs, this
state government can afford to rewrite or buy COTS replacements at the
rate of one system every year or two. A bit of arithmetic will show that
this state government will always have applications that are 20-40 years
old. It is trapped by its legacy.
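The arithmetic behind this trap is simple to sketch. The portfolio size below is an assumption (the text says only "dozens" of applications), and the replacement rate reflects "one system every year or two":

```python
# Back-of-envelope arithmetic for the legacy cost trap.
# Portfolio size is an illustrative assumption ("dozens" of applications);
# the replacement rate reflects "one system every year or two."
portfolio_size = 40            # legacy applications (assumed)
replacements_per_year = 0.75   # roughly one every year or two

years_to_cycle = portfolio_size / replacements_per_year
print(f"Years to replace the whole portfolio once: {years_to_cycle:.0f}")
# By the time the last system is replaced, the first replacements are
# themselves decades old -- the portfolio never stops containing
# 20- to 40-year-old applications.
```

Any plausible numbers in these ranges yield a replacement cycle measured in decades, which is the point.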
Organizations that are primarily focused on just keeping the lights on tend also to have
concomitant issues, which can be grouped into two categories: human
issues and infrastructure issues.
Under human issues,
change management — particularly managing changes to established human
processes — can be a huge issue, arguably the largest issue of all.
Since government organizations change their systems very slowly, change
is inhibited at every turn. Entrenched procedures, typically narrow
technological experience bases, and a lack of focus on productivity at
all levels correlate with passive-aggressive or even actively resistant
behavior by key long-term staff when they are presented with
opportunities to significantly improve the situation. This phenomenon
occurs even when the change would create a more positive work experience
for the very staff exhibiting the loudest resistance.
Engaging key staff
in fostering change requires far more than providing new programming and
operational infrastructure and organizing technology training classes.
Indeed, we see cases where some key staff will attempt to hold the
organization hostage because they feel that their unique knowledge is
required on a day-to-day basis and they cannot be forced to change.
There is no
one-size-fits-all tactical solution to these human factors, but there is
a strategic solution for resource-limited organizations. These
organizations are typically very heavy on staff — particularly staff
nearing retirement age — and very light on productivity technology.
Small investments in productivity technology can allow management to
offer early retirement packages to selected staff, thereby freeing more
of the budget for new productivity investments. A relentless focus on
productivity at all levels within the organization, from management to
the operations floor and the back office, will provide the investment
funds needed to fuel an overhaul of the whole IT organization. This is a
multiyear commitment, but the organization did not get to where it is
overnight and it won’t escape overnight either.
The opportunities are frequently impressive. Just moving to an effective
interactive development environment alone can improve programmer
productivity by a factor of two or three, and even more in some cases.
The implementation of BPM technologies as an overlay on top of the
legacy applications can show impressive near-term productivity gains in
the back office, with a 50% improvement not at all unusual.
However – and this
is a crucial point – escaping the stranded investment trap will require
IT management to resist the temptation to do a complete rewrite of
applications and thereby postpone the benefits far into the future. The
focus must be on the shortest path to productivity improvement. With
results will come the budget to then rewrite or buy replacement
technology. We will expand on this point below.
One of the most
highly guarded secrets at IBM is the relative cost and performance of
its mainframe platforms against its own iSeries (formerly AS400) and
pSeries (formerly RS/6000) platforms, and particularly against commodity
Intel-based servers. In 2004, an important benchmark by
Micro Focus established a TPC-C
rating for mainframes and demonstrated that an 8x3 GHz Unisys Intel
server performed at the equivalent rate of 1347 MIPS using an untuned
TPC-C test, or about 336 MIPS for a 2x3 GHz Intel server.
Operating costs for mainframes run on the order of $7,500 to $15,000 per MIPS per
year, so that the mainframe equivalent capacity to a 2x3 GHz Dell server
costing $5,000-$10,000 one time is over $3 million per year, or
more than 100 times greater. This disparity is so large that many IT
executives with long-term mainframe experience cannot accept it until
they duplicate the results themselves.
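The disparity is easy to reproduce from the figures cited above. A minimal sketch using the benchmark's 336 MIPS equivalence and the quoted per-MIPS cost range; comparing against a one-time server price ignores Intel-side operating costs, so the ratio shown is an upper bound:

```python
# Reproducing the cost arithmetic from the figures above.
cost_per_mips_year = (7_500, 15_000)   # mainframe, USD per MIPS per year
intel_server_mips = 336                # TPC-C equivalence, 2x3 GHz server
intel_server_price = 10_000            # one-time purchase, high end, USD

low = cost_per_mips_year[0] * intel_server_mips    # 2,520,000
high = cost_per_mips_year[1] * intel_server_mips   # 5,040,000
print(f"Mainframe cost for equivalent capacity: ${low:,}-${high:,}/year")
print(f"Ratio vs. one-time server price: {low // intel_server_price}x+")
```

Even at the low end of the range, the annual mainframe cost exceeds the server's purchase price by a factor of more than 250.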
Of course, as a
technical matter, comparing otherwise identical applications running on
platforms with dissimilar architectures, infrastructures, and
implementations must necessarily show some significant variance.
However, typically this variance is on the order of +/- 50%, and all but
pathological examples will compare to within a factor of two or three as
you move from platform to platform. When we compare platforms in which
the difference is a factor of 10 or 100, technical factors are simply
irrelevant. Only business factors matter: cost, stability, and security.
Stability and security are issues that must be satisfied in order to even consider
capturing the cost benefits. It is absolutely true that proprietary
platforms and mainframe platforms in general are as rock solid and
secure as humanly possible. But the most relevant question is not “What
is the best?” but rather “What is needed?” and “What can be afforded?”
Today, Windows and Linux on Intel, properly managed, can deliver
stability that exceeds the business requirements of most applications.
Mean time between failure (MTBF) ratings for recent releases of Windows
exceed one year.
The best way to ask
the question “What is needed?” is to ask whether, if you were
implementing the application today and had your choice of any platform,
you would choose a mainframe. Some applications can absolutely
justify mainframe technology. For example, we have reviewed
high-transaction-rate financial applications that have had a downtime
cost of $5 million+ per hour. From this point of view, $20 million on a
mainframe is not a difficult decision to make.
Infrastructure issues can also relate to human issues. When there are many platforms,
including mainframes, mid-range, Unix, Linux, and Windows (and, on rare
occasions, proprietary legacy hardware from companies like Bull, HP, ICL,
Compaq, Unisys, and others), each will have its own support staff. This
multiplies support costs and may necessitate redundant staffing in
specialized disciplines, provided an organization is able to hire the
staff it needs. Such circumstances cry out for consolidation.
The proliferation of
platforms frequently creates another problem: balkanization of
organizational assets. Interoperation of applications across dissimilar
platforms is a significant and growing problem, as demands grow for an
enterprise view of organizational transactional and data assets.
Isolated islands of
technology are a frequent issue with government processing, as different
agencies have gone their own ways over the years. Moving to shared
infrastructure not only saves significantly on hardware and software,
but eliminates redundant system support requirements. Those staff can be
redeployed into more productive areas that will benefit the state and its citizens.
In other cases,
introducing a highly effective project management methodology such as
agile software development can not only reduce costs but substantially
reduce project risk as well.
Governments also are
faced with a retirement issue that goes beyond being financially unable
to replace all of the state workers who retire. There is a knowledge
management issue as well, as many retiring workers carry in their heads
the details of infrequently required processes that may be undocumented,
partially documented, or partially out of date. Government must capture
this knowledge while the people are still readily available.
We refer to our
overall strategic approach as “embrace and shrink.” The approach
provides an alternative for management who may see the only strategy as
replacement, which is sometimes referred to as “rip and replace.”
Embrace and shrink is a specifically evolutionary strategy, with some
occasional technological support, and stands in contrast to the
revolutionary strategy of replacement.
Replacement of specific applications must of course have a part in any overall
implementation plan, but replacement must be rifle shot rather than
scatter shot. Leaving aside significant risk issues, organizations in a
stranded investment cost trap simply can’t afford to replace everything wholesale.
Some people have
difficulty accepting these financial realities. If management is willing
to endorse “embrace and shrink,” opposition will arise from those who
see any modernization of legacy assets as “lipstick on a pig.” Escape
requires accepting the reality of limited resources and recognizing that
legacy assets are just that — assets, which should be managed
dispassionately by the numbers just like any other organizational asset.
What the “lipstick
on a pig” point of view fails to appreciate is that legacy assets are
fulfilling the business purpose, and the problem is frequently not the
internal logic of the programs themselves but more typically the
infrastructure of the application. A whole rainbow of alternative
technological fixes, which are usually very cost-effective and low risk,
can address infrastructure issues.
For example, a green
screen application working against indexed files or nonrelational
databases can have a serious beauty treatment that goes beyond mere
lipstick. This is the “embrace” part of the strategy. The green screens
can be replaced by portal server technology serving up Web pages linked
to the legacy transactions. The isolated islands of infrastructure can
be linked in a services-oriented architecture. BPM overlays for
personnel-intensive back-office applications can be implemented quickly,
utilizing the SOA for integration. The nonrelational data store can be
made relational, freeing it to integrate with new technology and
allowing its immediate use for data mining and other opportunities to
improve the organization. What is surprising to many is that they can
usually do all this at far less cost than management might suppose,
using software systems to modify programs in bulk.
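As a concrete illustration of the "embrace" step, the sketch below wraps an untouched legacy transaction behind a modern service function. `legacy_inquiry` and its field names are hypothetical stand-ins for whatever screen-scraping or transaction-gateway API is actually in place:

```python
def legacy_inquiry(screen_fields):
    # Hypothetical stand-in for driving the existing green-screen
    # transaction; in practice this would call a screen or transaction
    # gateway, leaving the legacy program itself unchanged.
    return {"CUSTNAME": "ACME CORP", "BALANCE": "0001250.00"}

def customer_balance_service(customer_id):
    """Modern facade over the unchanged legacy transaction."""
    raw = legacy_inquiry({"CUSTNO": customer_id})
    return {"customer": raw["CUSTNAME"].title(),
            "balance": float(raw["BALANCE"])}

print(customer_balance_service("000123"))
# {'customer': 'Acme Corp', 'balance': 1250.0}
```

The legacy code keeps doing the work; only the presentation and integration layers change, which is what keeps the risk and cost low.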
Once the organization begins to hum along, with the new productivity gains
contributing to improved cost structures, then the “shrink” strategy
starts to kick in. Many legacy programs existed merely to extract data
from non-SQL data platforms. Once the data have become relational, an
organization can replace most of these programs with a reporting tool.
The library may shrink by half or more right there.
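To make the point concrete: once the data store is relational, an entire legacy extract-and-summarize program often collapses into a single query. A minimal sketch using SQLite in place of the production database, with hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (region TEXT, amount REAL)")
conn.executemany("INSERT INTO claims VALUES (?, ?)",
                 [("north", 120.0), ("north", 80.0), ("south", 50.0)])

# The whole legacy extract program becomes one reporting query:
for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM claims "
        "GROUP BY region ORDER BY region"):
    print(region, total)
# north 200.0
# south 50.0
```

Every purpose-built extract program retired this way is one less program to maintain.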
Similarly, once the
data are relational, new transactions can be implemented in J2EE or .NET
platforms that operate side by side with the old transactions against
the same database. Maintenance costs, already reduced by automated
program restructuring of legacy code, fall still further with the
introduction of object-oriented programming, accelerating a virtuous cycle.
An organization can
usually do all this at 5%-20% of the cost of replacement, at
substantially lower risk, and with time frames that accelerate the
benefits rather than the costs. Of course, mileage may vary and the
exact cost will depend on technical factors and the technical strategy
chosen, but this is a good rule of thumb.
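The side-by-side pattern described above can be sketched simply: new code operates on the same relational tables the restructured legacy transactions use, so nothing must be replaced wholesale. Python and SQLite stand in here for the J2EE/.NET tiers and the production database; the schema is hypothetical:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
db.execute("INSERT INTO accounts VALUES (1, 100.0)")  # row legacy code also uses

def post_payment(conn, account_id, amount):
    """A new transaction written against the shared database."""
    with conn:  # commits on success, rolls back on error
        conn.execute(
            "UPDATE accounts SET balance = balance + ? WHERE id = ?",
            (amount, account_id))

post_payment(db, 1, 25.0)
print(db.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])
# 125.0
```

Because old and new transactions share one database, each application can migrate transaction by transaction as budget allows.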
The irony of this
approach is that gradually our beautified pig turns into a princess.
Only it happens as a result of hard work over time that was self-funding
and not in a “poof” of fairy dust when we wave a magic wand. There is no
magic wand, but there is a practical and pragmatic escape route that
addresses the realities that many organizations must live with — day in
and day out.
We see some or all
of the following issues again and again in organizations caught in a
stranded investment cost trap:
Aging software and hardware with
excessive maintenance costs
Being locked into high-cost mainframe
technologies without the business justification for mainframes,
which starves other areas of needed investment
Change management, particularly the
human factors in introducing process changes
Dislocated islands of infrastructure
that could benefit from consolidation
Retraining programmers into new
technologies and with new infrastructure and tools to increase their
effectiveness and productivity
Integrating automated workflow (BPM)
technologies with back-office applications
We recommend some or all of the following tactics to address these issues:
A relentless focus on productivity,
initially inside but also outside of IT
Interactive development environments for programmers who don’t have them
Automated workflow providing
non-IT productivity improvements on the order of 50%, typically
for back-office applications
Introduction of relational
databases and object-oriented programming for new programs and transactions
Training for technical staff in
new, more efficient programming and project management
(especially agile) methodologies
Selective standardization and
modernization of legacy assets through low-cost bulk program
modification technology, with particular emphasis on:
Introduction of a
services-oriented architecture to link islands of infrastructure
The introduction of a relational
database to replace nonrelational data stores
Introduction of XML document
exchange for low-cost application integration
Replacement of obscure languages
with Java, C, or COBOL
Restructuring of over-maintained
programs to reduce maintenance cost
Re-platforming of mainframe
applications to Intel (Linux or Windows), mainframe Linux, or Unix
Replacement of mainframe hardware with
Intel-based servers to take advantage of the 100-fold difference
in cost per unit of computing for applications that do not
require the stability of mainframe computing
Mainframe Linux (zLinux) as an
alternative target platform can be a useful transitional step
Typical mainframe applications (COBOL, CICS/batch, VSAM/DB2) can be ported easily
to Unix, Linux (including zLinux), or Windows; existing staff
can usually do this inexpensively with a bit of training and tooling
The challenge facing all organizations with a substantial portfolio of
legacy assets is how to do more with less. IT needs to improve its own
productivity, but IT is also uniquely capable of improving the
productivity of the rest of the organization. As a
result, IT must look outside as well as inside for opportunities to
introduce productivity-improving innovations in both technology and