Business Rule Extraction in Application Modernization Projects

An essay on business rules and the extraction of business rules (and business requirements) from legacy application artifacts in large to very large scale projects. © Copyright 2015, Don Estes.

A webinar recorded May 7, 2015 for ITMPI (link here) covers part of the material in this essay.

 

Summary

We assert that the fundamental axiom of application modernization is that business rules controlling the processing of update transactions are invariant across the modernization process. Business requirements, other than those which specify that the business rules be obeyed in the new implementation, are not affected and are thereby free to change as desired to optimize the use of the new system and provide increased value to the organization, unconstrained by the business rule invariance. This is equally true whether the modernization will be implemented in custom developed code or through use of a commercial off-the-shelf software package.

This axiom is of critical importance to projects that seek to add substantial business value to the modernized system but want simultaneously to ensure the success of the project. It is almost universally assumed that to add business value one must start from a blank sheet of paper and redesign the system from the ground up, accepting the costs and business risks involved because of the value of the goal. The axiom guides us towards mitigation of the increased costs and risks to the business from a top to bottom redesign without sacrificing the goal of significant business value added.

We further assert, as a corollary to the axiom, that the defects and shortfalls in functionality that occur in modernization projects are a direct result of a failure to extract a 100% complete and correct set of active business rules from subject matter experts and legacy artifacts. As errors and omissions in the set of business rules are exposed over the course of the project, the date at which the new application can go into production and replace the legacy application recedes into the future while present-day costs continue to mount. Alternatively, the less-than-perfect new application goes into production and discrepancies occur in daily processing which must then be fixed in both the software and the data, at the most expensive stage of the software development life cycle. In addition to the direct costs incurred, there are the business risks from defective processing, with an unknown potential for impacts on operations and reputation.

For most projects, these future costs originating from errors and omissions cannot be predicted by analysts at the time that the project is proposed and return on investment calculations are made. This is because, at the outset, the analysts don’t know what they don’t know. As a result the analysis is based on what they do know, using the best available information at the time which is always significantly incomplete in large to very large systems.

Therefore, assembling that 100% complete and correct set of active business rules before the actual implementation begins will underpin a more realistic assessment of the time frames and return on investment calculations, and will in most cases justify its own cost by eliminating the impact of unknown requirements, over and above the improvement in business risk. It is a predictable cost incurred to avoid future unpredictable (and usually substantial) cost increases and to avoid the business consequences of incorrect processing. In the body of the essay, we discuss a process of dynamic business rule extraction to complement business analysis and static business rule extraction.

We further assert that, for a large or very large application, neither traditional business analysis nor static business rule extraction together or alone will lead to a 100% complete and correct set of the active business rules – only the process of dynamic business rule extraction will do so. Dynamic business rule extraction can achieve this result with a practical level of cost within an established range of accuracy that derives from the business case.

We further assert that testing alone will not provide a sufficient protection to the organization from errors and omissions in a modernized application because existing testing methodologies are fundamentally flawed when applied to application modernization. Standard testing methodologies test against requirements (which include specifying the business rules that must be obeyed). By definition, this means that testing against requirements will never expose errors and omissions in the requirements upon which the testing is based. You can’t get out more than you put in.

Only some form of external comparison against the executing legacy application can do so, because only execution will eliminate the ambiguities inherent in human analysis. The dynamic business rule extraction process described herein creates and validates a set of test cases as the mechanism to expose those errors and omissions. Then the resulting unambiguous test cases can be used to validate the business rules in the modernized system more effectively than testing based on incomplete, erroneous, and ambiguous requirements. Although this is an analytically intensive process, it is cost-effective because the reduction in testing costs and the avoided cost of rework from defects found late in the software development life cycle provide a positive ROI, without even considering the mitigation of business risks from undetected defects going into production use. Our assertion that challenged and failing modernization projects occur because of a failure to expose those errors and omissions in business rules derives directly from our experience in applying this methodology.

Assembling a 100% complete and correct set of active business rules is not a common practice. The experience of two projects, at the Federal Reserve Bank of New York and the US Patent and Trademark Office, completed our understanding of what is required and the basic economics of doing so and of not doing so. We describe the process, beginning with standard business analysis augmented by static business rule extraction tools.

We discuss a risk assessment process that can guide a decision as to whether or not 100% is truly required in any given project by focusing on the likelihood and cost of an error in production processing. When that cost and risk are acceptable to senior management, then in that case 100% is not necessary and dynamic business rule extraction need not be a part of the project. For such a project either business analysis + static business rule extraction, or business analysis alone, will be sufficient.

1 Introduction

In his 1987 essay “No Silver Bullet,” Fred Brooks, author of The Mythical Man-Month, wrote that,

“The hardest part of the software task is arriving at a complete and consistent specification, and much of the essence of building a program is in fact the debugging of the specification.”

We can extend this astute observation to application modernization with only a slight rewording: “much of the essence of building a replacement program is in fact the discovery and debugging of the specifications that are implicit or explicit in the existing program.” In other words, we must discover the business rules contained within these specifications in order to ensure success in any modernization effort.

1.1 Risk

Thus, before we start discussing business rules and their extraction from legacy artifacts, let us first discuss the critical factor of risk: the risk and especially the business consequences of getting a rule wrong or (more likely) failing to discover a rule. If your new application processes something in error, it is one thing if you have to write an apologetic letter to a customer, but it is something else again if a $1 billion bomber aircraft and crew are lost or a $10 billion funds transfer is misplaced. This business risk assessment should drive the establishment of the necessary accuracy of the business rule extraction efforts. In practice, this is seldom if ever done, and thus we have the plethora of anecdotes about failing, late, and over-budget projects.

It is our opinion that project failures should not be the primary focus of our risk concerns, since outright failures occur less often than lurid headlines would lead you to believe. We believe that the more serious problem is in the projects that “succeed” only to experience months and years of ongoing problems, some of which can have serious consequences. It can be a case of “mission accomplished” followed by 10 more years of war.

But is this really a problem? Shouldn’t all these problems be caught in testing? There is a fundamental problem in modernization testing, because standard testing methods are based on the requirements. By definition, requirements based testing will never find defects in the requirements upon which they are based. Testing by itself is not an answer to the problem of managing risk in application modernization.

Scale matters in modernization risk analysis. In this essay, we are primarily considering applications of large to very large scale, typically on the order of 1 million lines of legacy source code and up. This is because business rules interact with each other, so that the formal complexity of a set of business rules increases exponentially with the raw size of the application source library. Of course, your experience may vary depending on a number of technical factors, but in general we don’t worry much about systems of 100,000 lines or less, and only sometimes for systems sized between 100,000 and 1 million lines. But if your project is contending with a library of 1 to 10 million lines or more, we urge you to read this essay very carefully.

1.2 100% Top To Bottom Re-Design Is Not Necessary

It is assumed in almost all modernization projects that, in order to secure the benefits of optimized business processes, the new application must be re-designed from top to bottom. As a result, analysts start with a blank sheet of paper and create a new design from requirements elicited from subject matter experts and legacy artifacts.

There is a crucial error that occurs at this point: a failure to distinguish between requirements and business rules (more on that point in a moment). To achieve the benefits of an improved business process, it will be necessary to create a new set of requirements but not a new set of business rules, because the business rules only change when the business itself changes. We will, however, require a complete set of the active business rules in order to protect the business during the modernization.

[Figure: As-Is versus To-Be functionality]

Consider this Venn diagram representing the functionality of an As-Is system being modernized into a To-Be system. The inner circle represents the As-Is system. Some of the functionality is obsolete and will not be carried forward into the new system, represented in light grey. The red and grey overlap area represents functionality that is going to be preserved. The bright red area represents enhanced functionality that will be built on top of the existing functionality. Finally, the blue area represents wholly new functionality which is completely disjoint from any existing functionality.

  • The obsolete functionality will simply be ignored, once identified as such.
  • Ensuring that the preserved functionality is carried forward without errors or omissions is the focus of this essay. The preserved functionality utilizes the business rules, and business rules are invariant across the modernization. These business rules are paired with requirements that require that the rules be obeyed.
  • The enhanced area represents new requirements that are not tied to business rules, and as such are free to vary as desired.
  • And finally the wholly new functionality has no relationship to the existing application’s business rules and is treated essentially as greenfield development occurring in parallel with the work on the preserved and enhanced functionality. The wholly new functionality is not addressed in this essay.

Once this is accepted, significant changes can occur in the design and implementation of the project that will control risks to the business from both unpredictable errors in processing and unpredictable cost increases and delivery delays. By correcting this error, we can remove the cause of the project failures and shortfalls that make organizations leery of attempting a major modernization of a mission critical application.

In other words, you can achieve safety in modernization and allow any changes to the business requirements as desired except those requirements that relate to business rules.

Note that this does not limit any wholly new functionality planned for the modernized system which involves new business rules to control updates to new data in an expanded data model. Wholly new functionality, whether automating processes currently done manually or supporting totally new processes, does not have existing operational dependencies that would be disrupted by a failure to bring forward currently active business rules.

1.3 Requirements Versus Business Rules

Requirements are not business rules, and business rules are not requirements. Requirements are what we need to do, and the business rules define how we are supposed to do it. Instead of managing requirements and business rules as if the terms were interchangeable, we can use the differences to refine our management of both. We can also recognize that we will have requirements that relate to business rules, and requirements that don’t relate to business rules.

1.3.1 Non-Functional Versus Functional Requirements

Non-functional requirements have no relationship with business rules. They specify technical characteristics of the system (performance, high availability, auditability, backup, etc.), technical standards (code quality, user interface, etc.), security, and the like. They are expressed as “the system shall be <requirement>”.

Functional requirements do relate to business rules – the “what” also requires the “how”. Functions may be queries or updates to persistent data, and the functional requirements are expressed as “the system shall do <requirement>”.

This distinction can be made clearer by using the technical definition of a business rule from page 5 of the GUIDE Business Rules Project,

… a business rule expresses specific constraints on the creation, updating, and removal of persistent data in an information system.

Therefore, a business rule affects only update transactions in an information system, and the related requirements constrain only the update portions of any business process. Thus, the update functionality will be represented in the preserved part of the diagram above, though of course selected query processing can be part of the preserved, enhanced or wholly new parts of the diagram as desired.

However, as we will discuss in section 1.4 below, business rules must also reference terms and facts to have meaning, so we will also distinguish between the transactional business rules that govern these update transactions and the conceptual business rules that the transactional business rules are based on.

Optimized business processes focus on re-organizing information and the work process in a fundamentally different way, and therefore they affect primarily query transactions. These are reflected in the enhanced functionality part of the diagram in section 1.2. They affect update transactions only as to when and where in a process an update transaction may be submitted, not the business rules that govern the update transaction. Query transactions are also based on the conceptual business rules.

1.3.2 Limits on Functionality Enhancements During Modernization

From the preceding paragraphs, it should be clear that a modernization project is free to modify, enhance and expand the functionality of a system as desired so long as the rules affecting the actual update of data are preserved intact. The complete set of requirements consists of the requirements relating to business rules (which control updates) and of requirements that don’t relate to business rules.

For example, “the balance in the primary account must never go below zero” is a business rule and the complementary requirement for the implementation of the software is that this rule must be obeyed. However, while “a user must be able to print all details of an account on a local printer” is a requirement, it is not a business rule since it doesn’t control update processing. A programmer will implement either in program source code. A business analyst will capture either one as a requirement of the system.
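
The same distinction can be sketched in code. In this minimal Python illustration (the names and structure are ours, purely hypothetical), the business rule guards the update itself, while the printing requirement touches no persistent data and is free to change:

    # Minimal sketch (hypothetical names): a business rule versus a requirement.

    class Account:
        def __init__(self, balance: int):
            self.balance = balance

        def withdraw(self, amount: int) -> None:
            # Business rule: the balance in the primary account must
            # never go below zero. This constraint on the update must
            # survive the modernization unchanged.
            if self.balance - amount < 0:
                raise ValueError("balance may not go below zero")
            self.balance -= amount

    def print_account_details(account: Account) -> None:
        # Requirement, not a business rule: "a user must be able to
        # print all details of an account on a local printer."
        # It reads data but updates nothing, so it is free to change.
        print(f"balance: {account.balance}")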

All too frequently, the business analyst will not make this distinction, and will refer to business rules and requirements as if they were interchangeable terms. Consequently, when capturing the set of requirements that don’t relate to business rules, he or she will also seek to capture new requirements that relate to business rules. It is by allowing full rein on the former but constraining the latter that we can allow optimization while simultaneously protecting the business during the modernization.

The underlying distinction is that business rules are closely related to business processes and operations. The business rules change only as the business process itself changes in a way that causes the update of data to change. If the business process changes only in the way that the work process is organized, then there are no changes to business rules.

Requirements are closely related to the software implementation, so that changes in the requirements will always drive changes in the application software code. However, those requirements can change without there necessarily being a change to any actual business rules.

1.3.3 Impact on Modernization Testing

This distinction between requirements and business rules has significant implications for testing as well. We assert above that requirements based testing cannot find errors and omissions in the requirements themselves. This is by definition correct, but we have to be careful in defining exactly what we mean here. This does not mean that requirements based testing has no value in a modernization project. We just have to think clearly about its value and limitations in a modernization context:

  • Requirements based testing cannot discover errors and omissions in the business rules themselves – another testing method must do so
  • Once the business rules have been proven to be complete and correct, then requirements based testing has a solid foundation and can proceed

Requirements that relate to business rules primarily impact update transactions in the preserved functionality, as discussed above. It is these requirements for which requirements based testing is not particularly useful, since we seek a method to identify errors and omissions in the related business rules that control update processing.

Requirements that do not relate to business rules will primarily affect queries in the preserved and enhanced functionality in the diagram in section 1.2. These requirements are free to vary across the modernization. As such, the requirements are then the definitive statement of how the new system should operate with regard to queries, and we do support requirements based testing for these requirements.

Requirements based testing also applies to the wholly new parts of the diagram in section 1.2 where the requirements specify both update and query processing. This is outside the scope of modernization, since there is no existing standard of truth which can be leveraged.

1.3.4 Exposing Errors and Omissions in Business Rules

Since the business rules will not change across the modernization process, we can make use of this invariance to expose the errors and omissions in the extracted business rules (which are always present in any significant project). The process that we propose in the body of this essay will create test cases that provide an alternative to requirements based test cases specifically for update transactions in the preserved functionality. Once those tests have been passed, requirements based tests for those transactions can be applied but are redundant.

1.3.5 Implications for Business Executives

Many people will conflate changes in work flow and related operational optimization with changes in the business process, so we need to think about this very clearly and carefully. Changes in work flow and operations affect query processing and the point in a process when an update transaction is allowed, but do not represent a change in the business rules that control that update transaction. The business rules that affect that update do not change unless the business itself is changing. The work flow and business operations can be changed and improved without incurring any risk to the business.

Note that we make a very careful distinction between business process changes that cause the results of processing to be different than in the legacy system, and business process changes that cause the results of processing to include new information. The former is represented by the Enhanced area of the diagram in section 1.2, while the latter are represented by the Wholly New area of the diagram. So, when we compare the results of processing in the modernized system against the results in the legacy system, we can only compare the data that exists in both systems. There are no constraints on the updates to new data, only on the data that is mapped to the legacy system.
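
As an illustration of that comparison discipline, here is a minimal Python sketch (field names hypothetical) in which only the data mapped between the two systems is compared, and wholly new fields are deliberately ignored:

    # Sketch: compare legacy and modernized results on mapped data only.
    # Field names are hypothetical.

    MAPPED_FIELDS = {"account_id", "balance", "status"}  # exists in both systems

    def compare_mapped(legacy_row: dict, modern_row: dict) -> list[str]:
        # Return discrepancies on fields common to both systems.
        # Wholly new fields in the modernized model (e.g. an audit
        # trail) are deliberately not compared - there is no legacy
        # standard of truth for them.
        return [
            field
            for field in MAPPED_FIELDS
            if legacy_row.get(field) != modern_row.get(field)
        ]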

If the business rules are not changing, then we can use the invariance principle to protect the business while the implementation is changing. However, if there are changes required in the actual business rules because it is intended to change the business process in a way that changes the results of processing in mapped data, then we can no longer do so. This can expose the project to substantially disproportionate risk unless handled very carefully.

Business executives have a choice:

  • Change the business rules or the software implementation, but not both at the same time, and project risk can be managed very handily
  • Change the business rules while also changing the software implementation, and you eliminate the ability to use the legacy system as the standard of truth; the risk to the project increases dramatically

In most cases, the actual business process with the established business rules is not changing, so there is no choice that needs to be made. But if yours is the rare case where a change in both at the same time is under serious consideration, then a very careful risk management assessment needs to be made.

The safest course is to change the implementation, prove it correct, and then update the business rules. If errors and omissions in the business rules would have a low potential impact, then proceeding may be sensible. But if there is a significant potential impact, we recommend that project architects plan very carefully indeed.

1.4 Transactional Versus Conceptual Business Rules

Just as we refined the definition of requirements above, we also need to refine business rules into two distinct categories.

Conceptual business rules define terms (the business vocabulary or semantics, which will be materialized in a data model) and facts (associations among two or more terms). A derived fact can be created by an inference or a mathematical calculation. A conceptual business rule is also described as a “structural assertion” by the Business Rules Group.

In other words, conceptual business rules define what is, i.e., they define the semantics of elements such as the “account” entity, the “primary” attribute of an account, and the “balance” attribute of an account (a signed numeric value), etc. They also define relationships among the elements: “each primary account will contain a balance”, etc.

Query transactions relate only to conceptual business rules. This makes sense because conceptual rules provide a static view of the terms and facts, and no data changes during a query.

Transactional business rules control change, which led to the business rule definition above:

… a business rule expresses specific constraints on the creation, updating, and removal of persistent data in an information system.

This is a pretty straightforward definition, but it was refined a few years later as an “action assertion” by the Business Rules Group:

… a statement that concerns some dynamic aspect of the business. It specifies constraints on the results that actions can produce.

This more formal definition is more general, if perhaps not quite as immediately clear for the purposes of this essay, which is why we use the distinction between conceptual and transactional business rules for greater clarity in a modernization context. An example of a transactional business rule is, “the balance in the primary account must never go below zero”.

It should now be clear that while queries relate only to conceptual business rules, updates require both conceptual and transactional rules. Updates must still follow the same rules as the legacy system – unless the business rules are changing while modernizing, in which case we do not consider the project to be a modernization. It would be either a greenfield project or a hybrid modernization project.
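
The distinction can be made concrete with a small sketch. In this hypothetical Python fragment, the conceptual rules appear as the data model (terms and facts), and the transactional rule appears as a constraint on change:

    # Sketch: conceptual versus transactional business rules
    # (hypothetical names).
    from dataclasses import dataclass

    # Conceptual rules define what *is*: the terms ("account",
    # "primary", "balance") and facts ("each primary account has a
    # balance"), materialized here as a data model. Queries need
    # only these.
    @dataclass
    class Account:
        account_id: str
        primary: bool
        balance: int  # signed numeric, per the conceptual rule

    # Transactional rules constrain *change* to persistent data.
    def apply_debit(account: Account, amount: int) -> None:
        if account.primary and account.balance - amount < 0:
            # Transactional rule: the balance in the primary account
            # must never go below zero.
            raise ValueError("primary account balance may not go below zero")
        account.balance -= amount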

To summarize: [Table: Requirements versus Business Rules]

 

1.5 Different Business Rules For Different Folks

To add to the confusion, business rules have one of two very different meanings to different people:

  • When a business analyst discusses a business rule, he or she will refer to a statement of what a rule means – its semantics – at a high level of abstraction.
  • When a programmer discusses a business rule, he or she will usually be referring to the implementation of the business analyst’s business rule, or perhaps to a specific block of program logic that is only a part of that implementation.

Confused yet? You have a lot of company.

In this essay, we will refer to a “business analyst’s business rule” or a “programmer’s business rule” to make the distinction clear. We have participated in many meetings in which we listened to different people saying “business rules” without any of them being aware that they were discussing very different things.

Many people who work with business rule management systems (BRMS’s, also known as rule engines) may be surprised to learn that they are dealing with programmer’s business rules, at least in most cases. A business analyst’s business rule would be implemented in a declarative fashion, i.e., where the order of execution does not matter, whereas a programmer’s business rule would be procedural, i.e., where the order of execution does matter.

One example of a declarative rule would be a decision table such as those implemented in ILOG, which specifies the conditions under which a result is obtained. That is purely declarative. However, most “business rules” in ILOG are implemented as blocks of procedural logic which are then invoked as components. The open source Drools rule engine uses decision tables, but in practice its rules are often forced to execute in a defined sequence, rendering them procedural as well.
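
To illustrate the difference outside any particular rule engine, here is a minimal Python sketch (conditions and outcomes invented for the example): the decision table is order-independent data, while the procedural version depends on the sequence of its tests:

    # Sketch: a declarative decision table versus procedural logic.
    # Conditions and outcomes are hypothetical.

    # Declarative: each row independently maps conditions to a result;
    # the order of the rows does not matter.
    DECISION_TABLE = [
        # (region, amount_over_limit) -> approval level
        (("US",   False), "auto-approve"),
        (("US",   True),  "supervisor"),
        (("INTL", False), "supervisor"),
        (("INTL", True),  "compliance"),
    ]

    def decide(region: str, over_limit: bool) -> str:
        for conditions, outcome in DECISION_TABLE:
            if conditions == (region, over_limit):
                return outcome
        raise LookupError("no matching rule")

    # Procedural: the same decision as nested branches. Correctness
    # now depends on the order in which the tests execute, which is
    # what makes most "rules" in practice programmer's rules.
    def decide_procedural(region: str, over_limit: bool) -> str:
        if region == "US":
            return "supervisor" if over_limit else "auto-approve"
        return "compliance" if over_limit else "supervisor"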

1.6 Business Rule Extraction Methods

Projects typically plan the business analysts’ tasks as if they could reach perfection and create a complete set of business rules from their manual sources of information. Unfortunately, this is not so in the large and very large scale projects which we are considering in this essay. Furthermore, it is unreasonable to expect that any person or group of people could do so given the complexity of large and very large systems. No matter how diligently they work, the complexities, ambiguities, and interactions within the components of a large information system, and indeed within large individual programs, will defeat the best of us.

[Figure: BPMN graphical representation]

The strands of the procedural logic which implement the programmer’s business rules can trace all through moderate and large programs, and indeed among multiple discrete programs, with a relatively simple example shown here. In the process these strands of logic overlap and interact with strands from other programmer’s business rules (and thereby create ambiguities in the definitions of the business analyst’s business rules). We need a better methodology to achieve the standard required to protect the business.

Broadly speaking, there are three ways to extract rules from an existing application, and they correlate roughly but meaningfully with ranges of the resulting accuracy in the extracted rules:

  • Traditional Business Analysis – interviewing subject matter experts (SMEs) and reviewing legacy artifacts can be expected to discover roughly 50-70% of the rules and to express them correctly.
  • Static Business Rule Extraction (BRE) Tools – this class of software tools will analyze the source code to legacy programs using parsing technology and map out relationships among the sources; these can be expected to raise these percentages to something more like 80-90%.
  • Dynamic Business Rule Extraction – this is a “white box” process that uses tools to examine the internal execution of legacy programs to expose how they actually execute as opposed to how analysts think they execute. This more powerful analytic technique, aided by white box tools, allows us to discover 100% of the business rules.

Neither business analysis nor static BRE tools will ever discover 100% of the active business rules. Traditionally, the only way to get to 100% is by slow and painful debugging over many years, just the same way we built and expanded the old systems over 30 and more years. (One client told us that their legacy system was just about debugged after 31 years of operation; from our analysis, he was too optimistic.) The only way to get to 100% before you go into production with a replacement system is to complement both business analysis and static BRE analysis with a dynamic business rule extraction methodology, which we will discuss in more detail in the body of the essay.

We were at an application modernization conference a couple of years ago, and sat at a luncheon table with 8 other delegates. We took the opportunity to conduct an unscientific straw poll, and asked the group if their organizations could accept less than 100% business rule accuracy in their new system. One delegate said for his company, 80-90% would be OK, but the others said no, it would have to be 100%. One in particular, from a large and well known bank, put it like this: “We can’t have a veteran come back from combat overseas to find his home foreclosed upon, because we lost the rule that prevents foreclosure for deployed military personnel. We couldn’t stand the PR hit.” That comment has stayed with us ever since.

So, the first question we ask during an assessment is, how much accuracy do you really need? If 50-70% is good enough, you don’t need this essay, but if you need higher, then this may be useful to you.

2 Business Analysis

We have all seen variations on this cartoon:

[Cartoon: communication gaps between users, analysts, and programmers]

We laugh and shake our heads, but it is rueful laughter because we know it’s true – there are serious problems in communication between analysts, users, and programmers, and – indeed – a lack of clear conceptual thinking from the user community in general. Yet, it is unfair to expect users to do any differently. After all, that’s not their job – it’s our job, collectively, because we are the technical experts.

When it comes time to rebuild a legacy application, we run headlong into this problem, except it is much worse than when building a system for the first time. When the original system was first built, replacing a manual process, there was no business operational dependency on the accuracy of the automated system – there was no automated system. So, the newly automated system was compared to the manual process and gradually the bugs were worked out so that even with residual defects it was better than what it replaced.

When our legacy systems were developed, the limitations on people’s ability to process information were similar to the limitations on computer systems. We have calculated that the mainframes on which we first worked had a price/performance ratio that was 10 million times worse than that of the laptops we carry around today. In those days, we simply could not build very complex applications. Instead, most of the work went into fitting the programs into the limited resources we had available.

However, over the intervening decades we gradually bolted more and more complexity onto the old systems as the steadily improving price/performance of computers allowed us the capability to do so. It would be impossible to replace these complex legacy systems with a manual system today, but day to day business processes are now utterly dependent on the correct functioning of those systems, regardless of how convoluted the program logic has become after being patched, and re-patched, and re-re-patched…

Today, we have the opposite situation from 30 and more years ago – then requirements were relatively simple and programming relatively difficult while now we have increasingly complex requirements and much easier programming. Then the major effort went into programming, now the major effort goes into refining requirements.

We are discovering that it is much, much more difficult to re-discover all the business analyst’s business rules and related requirements that have been built up over that time than it was to develop the original, relatively simple application. No one knows all of the rules in the existing application, and some of what the subject matter experts (SMEs) think they know is wrong. Attempting to go to the users with a blank sheet of paper and learn all the rules doesn’t really work any more. You are lucky to get 3/4 of the rules – 1/2 is more likely. It’s like the cartoon – only raised to the nth power.

The only complete and fully correct (by definition!) set of business analyst’s business rules is contained within the source code itself (encoded as programmer’s business rules), but separating the wheat from all that chaff is a daunting technical and analytic task.

3 Business Rule Extraction Process

3.1 Academic Studies on the Relative Cost of Fixing Software Defects

Numerous studies show that most software defects are introduced at the earliest stages, found at the later stages, and fixed at the most expensive stages of the software development life cycle. To make software development costs reasonably predictable and manageable, these defects need to be found at the requirements and design stages. This is difficult or impossible to do with wholly new software development, but most application software developed today is replacing existing software – and we have the business rules 100% defined within the legacy system, if we can only extract them in a sensible and economically practical manner.

[Figures: NIST study findings on defect origins and detection]

For example, according to iSixSigma, referencing studies made by various software development communities and reported in Crosstalk, the Journal of Defense Software Engineering, it has been found that most failures in software products are due to errors in the requirements and design phases – accounting for as much as 64 percent of total defect costs. The National Institute of Standards and Technology (NIST), in a frequently cited study (NIST 2002 RTI Project 7007.011), reached a similar conclusion. The NIST study goes on to find that these defects are not detected until much later in the software development life cycle. This means that the cost of fixing the defects will be exponentially greater than had they been detected at the outset.

Numerous studies (e.g., Lundblad and Cohen, and Boehm and Basili) show that the relative cost of fixing a defect at the maintenance (production) stage of the software development life cycle can be 200 to 1,000 times as expensive as fixing it at the requirements stage.

So, the combination of defects

  • created early in the SDLC,
  • detected late, and
  • fixed late =

unpredictable costs

leading to project cost and delivery overruns as well as functionality shortfalls.

But we can fix this problem in a modernization project, where we cannot for wholly new, greenfield development.

3.2 Business Rule Extraction Tools and Services

Business rule extraction tools and services are both quite useful and quite frequently misunderstood. When managers review a BRE tool or service, they usually expect that they will be getting business analyst’s business rules as output, not programmer’s business rules. Indeed, they can be quite upset when they learn the results of an analysis unless the expectations have been carefully set.

The problem with static business rule extraction is that we have to analyze the procedural logic implemented by the programmers, the programmers’ business rules + technical logic, and work backwards to derive the original business analysts’ business rules. This is analogous to starting with binary object code and trying to derive the original source code that was compiled to create it. It is difficult, but it is better than business analysis.

Unfortunately, there are no magic tools that will do what people want, and there never will be because software tools do not handle ambiguities very well. People are much better than software in this regard. In the next section we will discuss how to find the kernels of wheat (business analyst’s business rules) from all the chaff (programmer’s business rules and technical logic).

For some perspective, let’s consider a couple of examples. In one application of around 1 million lines of COBOL code, you might have two or three thousand business analyst’s business rules, but 10 or 20 times that many programmer’s business rules. That’s a lot of chaff for relatively little wheat.

In another example, in one project of 2.3 million lines of COBOL, a BRE service extracted 70,000 programmer’s business rules. It is much, much easier to read 70,000 programmer’s business rules than to read 2.3 million lines of COBOL, but neither one is going to yield the business analyst’s business rules which are an abstraction of the programmer’s business rules. This is a much more difficult process, and will be significantly more expensive and time consuming than pushing a button and automatically deriving the 70,000 programmer’s business rules.

3.3 Diminishing Returns in Business Rules Extraction

When we set about to discover the business analyst’s business rules for a modernization project, we first have to consider the limitations on our efforts. The more research we do, the greater the effort required to obtain ever-declining refinements in the business analyst’s business rules, in a classic diminishing returns curve.

You can’t study the system artifacts and interview users forever because analysis alone will never yield a complete and correct set of rules on any large or very large system. There has to be a limit to analysis, just for practical reasons, but cost becomes a significant consideration as well. In addition to postponing the time at which the organization can expect to start getting a return on its investment, an analyst’s time is expensive. But where do you draw the line and say enough is enough? Looking at the curve, there is no obvious point of inflection where it makes sense to halt.

On the other hand, if the analysis will never complete, how are we to obtain a complete set of business analyst’s business rules? We know that requirements based testing is inherently limited in that it cannot discover errors and omissions in the requirements upon which it is based. The process cannot yield any more than is put into it at the outset.

We also know that we cannot go into production with significant defects in the rules, or users may reject the system and refuse to use it. But we can’t analyze forever, so what can we do as a practical matter?

Every project reaches this point in the planning and analysis stage, and we all do the same thing – we fudge it, and convince ourselves that what we have is good enough. At some point, we start coding the new system even though we don’t have all the rules. (Fred Brooks wrote about this problem in his classic work, The Mythical Man-Month, 40 years ago.)

In this case, we have only the rules that we can get by interviewing the business owners and other subject matter experts (SMEs), just as we did when building the original system, supplemented by some degree of analysis of existing legacy system artifacts. The problem with this approach is cost and risk – as we saw above, discovering errors and omissions in the requirements once the system is in late-stage development or production occurs at the most costly point in the software development life cycle, and makes the project’s total cost and time frame only partially predictable.

3.4 Our Proposed Process

Our proposed answer is a 3 stage process:

  • Stage 1: business analysis via interviews and review of artifacts, which will result in a roughly 50-70% yield
  • Stage 2: static business rule analysis, which will increase the yield to roughly 80-90%
  • Stage 3: dynamic business rule analysis, which – properly executed – is the only way to increase the yield to 100%

Let’s overlay this onto the diminishing returns curve above, with the blue shaded areas under the curve qualitatively representing the relative effort of each approach:

[Figure: the three stages overlaid on the diminishing returns curve]

From this relatively simple relationship, it should be clear that you get the most bang for your buck with traditional business analysis, but it is not accurate enough for your project or you wouldn’t be reading this far.

Static BRE, if pursued as recommended only to the point of diminishing returns, will probably cost more for less yield, but it should significantly improve the accuracy of your result and reduce the cost of testing, rework, and production data repair by more than its own cost, over and above the risk reduction.

Dynamic BRE will cost still more for even less yield, offset by the fact that these rules will be the most complex and the source of many of the problems when the modernized system attempts to go into production. As with static BRE, the increase in cost will be offset by reductions in testing, rework, and production data repair costs, in addition to the reduction in risk.

4 Stage 1: Business Analysis

Analytically, the point at which business analysis stops its primary role and gives way to stage 2 static BRE should be when we reach the point of diminishing returns on the stage 1 business analysis. This should be when we believe that the cost of continued analysis will be greater than the cost of finding the errors and omissions during static business rule extraction.

The light blue shaded area represents the rules we can get from interviews and from reading documentation, including reading the raw program source code. The yield varies depending on the scale of the system and the complexity of its implementation.

Crucially, business analysts don’t usually realize that they have missed this many rules. And why should we expect them to think anything else? Their sources of information cannot provide everything that we need. They will capture as much as people and documentation can tell them, but the subject matter experts cannot tell you what they don’t know. In fact, at least some of what they tell you will be wrong, not to mention the ambiguities in the analysis. So, the business analysts have no basis of comparison that could lead them to think that they have any significant deficiencies in their analysis, at least not until they try to go into production parallel testing. (If production parallel testing is not part of your project plan, it’s time to find out why not.)

The complexity of those old systems has evolved quite considerably since the original, relatively simple business analyst’s business rules were captured. Even the programmers who have worked with the code for most of their careers, if they are still around, don’t know everything in those programs. More subtly, even if they could recall all of the rules, the interaction effects among the programmer’s business rules have become far too complex for anyone to hold in their minds. I have seen programmers literally spend weeks trying to untangle the interactions within a set of programmer’s business rules and yet expose only one additional programmer’s business rule. The net result is that the new project will typically start with a substantial deficiency in business analyst’s business rules unless the business analysis is supplemented from another source.

The universal assumption is that any business analyst’s business rules missing from the analysis will be caught in testing. But remember that conventional testing is based on requirements (which should contain business rules by reference). How do you find errors and omissions in the business rules when the testing is based on the same deficient requirements? The answer is – you don’t, other than by accident or by comparison with the legacy system. We need a different approach to expose the missing and erroneous rules. Business rule extraction is that different approach, if it is carried through to a sufficient degree.

5 Stage 2: Static Business Rule Extraction

BRE tools and services are based on parsing software. In other words, they use software that can read program source code and organize the results to facilitate understanding by business rule extraction analysts. The technique operates exactly as a compiler does in parsing the code, but instead of producing object code, the parsing software produces a repository of information that can be used for query and reporting purposes.

This section discusses “static” processes which parse the source code to the programs. In section 6 we discuss “dynamic” processes which analyze the executing programs themselves. BRE dynamic analysis can directly yield the interaction among rules that must be laboriously (and sometimes incorrectly or incompletely) inferred using static analyses. And remember that BRE tools and services, both static and dynamic, yield programmer’s business rules that still have to be abstracted into business analyst’s business rules.

There are two basic types of static business rule extraction solutions, both of which rely on parsing technology to extract meaning from source code, and both of which suffer from technical and analytical shortcomings:

  1. Fully automated BRE services – these utilize parsing tools to produce an analysis of all conditional (“IF”) tests in a program. These can be expected to focus just on the program logic, augmented with reports on related artifacts for reference, or perhaps a repository that can be queried.
  2. Semi-automated BRE tools and related services – these also utilize parsing tools to create a repository of the parsed information gleaned from the program source and related operational artifacts (data definitions, job control, etc.). The repository is used by BRE experts to research and infer the rules using pre-defined and ad hoc queries against the repository.

The difference, at the end of the day, is cost and value.

5.1 Fully Automated Static Business Rule Extraction

The 100% automated business rule extraction services are cheap, quick, and useful, but not as accurate or as valuable as the second type. These three examples were taken from an actual project:

  • IF SORT TYPE IS EQUAL TO SAVE TYPE
    AND SORT MAIL NAME IS EQUAL TO SAVE MAIL NAME
    AND SORT MAIL ADDRESS IS EQUAL TO SAVE MAIL ADDRESS
    AND SORT DEFAULT NAME IS EQUAL TO SAVE DEFAULT NAME
    AND SORT TRIAL DATE IS EQUAL TO SAVE TRIAL DATE
    AND SORT TRIAL TIME IS EQUAL TO SAVE TRIAL TIME
    AND SORT VIOLATION DISTRICT IS EQUAL TO SAVE DISTRICT
    AND SORT COURT LOCATION IS EQUAL TO SAVE LOCATION
    AND SORT TYPE IS EQUAL TO “A1”
    THEN 1 IS ADDED TOTAL A1 RECORD WRITTEN
  • IF DISTRICT LOCATION IS EQUAL TO “1001”
    AND TS SERIOUS INDICATOR HAS A VALUE OF SPACES(1)
    AND ROOM NUMBER IS EQUAL TO 03
    THEN ROUTINE 7850-CHECK-SERIOUS-SCHED-TYPE IS PERFORMED
  • IF TRACKING RELATED CITATION IS GREATER THAN SPACES
    AND TRACKING RELATED CITATION IS NOT EQUAL TO TRACKING CITATION NUMBER
    OR TRACKING RELATED CITATION IS EQUAL TO HOLD TRAFFIC CITATION
    THEN SSA TRAFFIC CASE NUMBER TRACKING RELATED (TRACKING SUB) IS EQUAL TO TRACKING RELATED CITATION

These are much better than reading raw COBOL code, but they do leave a lot to be desired. The programmer’s business rules in this example were linked to the COBOL paragraphs from which they were derived for ready reference, but in general they have lost the sequential nature of the execution which informs attempts to provide abstractions of the detailed programmer’s business rules. They are useful as a reference but are not as useful as the semi-automated approach described in the next section.
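
As a toy illustration of the fully automated style – not any vendor’s actual product – the following Python sketch mechanically harvests conditional tests from COBOL-like source. A real service uses a full parser rather than pattern matching, and handles multi-line conditions:

    # Toy sketch of fully automated extraction: harvest conditional
    # tests from COBOL-like source. A real BRE service uses a full
    # parser; this regex approach is only for illustration.
    import re

    IF_PATTERN = re.compile(r"^\s*IF\s+(.*)", re.IGNORECASE)

    def extract_conditions(source_lines: list[str]) -> list[tuple[int, str]]:
        # Return (line_number, condition) for every IF statement found.
        found = []
        for number, line in enumerate(source_lines, start=1):
            match = IF_PATTERN.match(line)
            if match:
                found.append((number, match.group(1).strip()))
        return found

    # Example: prints [(2, 'BALANCE IS LESS THAN ZERO')]
    print(extract_conditions([
        "    MOVE WS-AMT TO BALANCE",
        "    IF BALANCE IS LESS THAN ZERO",
    ]))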

5.2 Semi-Automated Static Business Rule Extraction

Semi-automated business rule extraction tools parse the source code just like the automated service, but there the resemblance ends, as the parsed results are stored in a repository for further analysis. Once in the repository, the ability to query the data for useful information is limited only by the power of the query language, the ability and experience of the analyst, and the time allotted. The tool can be used to create complete program documentation, including data layouts, I/O operations, basic program logic flow, and analyses such as individual data elements that are queried or only updated in the course of program execution.
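
To suggest what querying such a repository looks like in practice, here is a hypothetical Python/SQL sketch (schema and program names invented): parsed facts land in a relational store, and the analyst asks ad hoc questions such as which data elements are updated but never read:

    # Hypothetical sketch of a semi-automated BRE repository: parsed
    # facts stored relationally, then queried ad hoc by the analyst.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE data_access (
        program TEXT, element TEXT, mode TEXT  -- mode: 'read' or 'update'
    )""")
    con.executemany(
        "INSERT INTO data_access VALUES (?, ?, ?)",
        [("LBLPRT01", "MAIL-DATE", "update"),
         ("LBLPRT01", "ACCT-STATUS", "read"),
         ("ACCTUPD2", "ACCT-STATUS", "update")],
    )

    # Ad hoc query: elements that are updated somewhere but never read,
    # a common clue to dead or write-only functionality.
    rows = con.execute("""
        SELECT DISTINCT element FROM data_access WHERE mode = 'update'
        EXCEPT
        SELECT element FROM data_access WHERE mode = 'read'
    """).fetchall()
    print(rows)  # [('MAIL-DATE',)]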

[Figure: diminishing returns curve with the stage 2 static BRE yield highlighted]

These reports are much more useful than the results of the fully automated analysis above, but they are more expensive – because of the interpretation time of the analyst using the system. Note that a fully automated BRE produces many artifacts besides the raw programmer’s business rules, but the ability to query on an ad hoc basis is likely to be significantly less robust.

An example of a semi-automated analysis report is given here. This is derived from a not unusual mainframe COBOL program of approximately 10,000 lines of code, of which roughly half are data definitions and the other half procedural logic. The analysis report boils this down into 7 printed pages totaling 1,280 words, and took approximately one week to prepare from the source by a skilled BRE practitioner. Following is a list of three requirements and three business analyst’s business rules abstracted from this report.

Requirements:

  • I want to be able to reprint a mailing label for an account without updating the mailing date in the account.
  • I must request a mailing label from a workstation that has the ability to print it.
  • I cannot request a mailing label if the account is not in a state that allows a mailing label to be printed.

Business analyst’s business rules:

  • Mailing labels can only be requested for accounts which exist, are active, have a status indicating that some processing has occurred against them, and do not have a status indicating that the currently due mailing has taken place.
  • The date, identity and location of the requester of a mailing label must be recorded.
  • The update of the account record and the information updated must be recorded.

This is a very short list of requirements and business analyst’s business rules, given that the original information came from a 10,000 line program and a 7 page report. To be fair, however, these rules are built on top of the data definitions for the system and the semantics of the data relationships which are defined across the whole system (such as, in this example, the meaning of “a state that allows a mailing label to be printed”).

Therefore,

the specifications for the whole system =

all the requirements and business analysis business rules from all programs summed together +

the data definitions and semantics for the persistent and transient data stores.

This example of extracted requirements and business analyst business rules illustrates our assertion that business analyst business rules are invariant across the modernization while requirements may change. The three business analyst business rules above will not change in the modernized system, no matter how implemented. However, the first requirement may indeed change by implementing the new system such that, for example, all labels printed have their printing date recorded separately from the date of the first mailing label.
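
Rendered as code, the first of the business analyst’s business rules listed above might look like the following sketch (field names and status flags are hypothetical). However the new system is implemented, this update guard must behave identically:

    # Sketch of the first mailing-label rule as an update guard.
    # Status values and field names are hypothetical.

    def may_request_mailing_label(account: dict) -> bool:
        return (
            account is not None                      # account exists
            and account["active"]                    # account is active
            and account["has_processing_history"]    # some processing occurred
            and not account["current_mailing_done"]  # due mailing not yet sent
        )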

Note, however, that the list of business analyst’s business rules and requirements extracted is not given in the BRE report. A business analyst consumes this report and augments his or her own analysis, and in the process loses some of the information from the BRE analysis, which is itself less than 100% perfect. Even boiled down, all aspects of a 10,000 line program are still too obscure and convoluted for anyone to comprehend 100% completely, and this is one of the simpler programs in the system.

For comparison, think of a 100,000 word novel (roughly 300-400 printed pages), which would be approximately equivalent in length to this program – can you remember every single sentence and phrase in the book and how each one contributes to the plot? Precisely? With no ambiguities?

Furthermore, interactions with other programs are not clear (even though implied in the report by recording the updated information), and when the other programs are reviewed their own interactions with still other programs will similarly not be 100% understood. In other words, while it is unreasonable to expect perfection from a BRE process, it is nevertheless still much, much better than pure business analysis alone when viewed from a risk management perspective. The business analysts’ task is more difficult still without benefit of static BRE tools.

[Figure: Battlemap coloring by test coverage]

No one person understands everything in these programs, and even if they did they would not be able to recall all of the incredible minutiae of the logic (i.e., programmer’s business rules plus technical logic) required to implement the business analyst’s business rules and requirements. Nor can you collect what every subject matter expert knows and collect it all together in a seamless whole to provide a complete understanding. But, when you modernize a system, 100% of the active programmer’s business rule minutiae is required to be understood and the business analyst’s business rules derived therefrom must be transferred precisely to the new system or you will get different results from the new system.

Requirements that are also distilled during the analysis may or may not be transferred to the new system depending on the goals of the project. However, missing a requirement has few consequences and can be easily repaired when the missing functionality is identified. This is because a requirement that does not reference a business rule will not affect stored data.

Not so business analyst’s business rules – when a business analyst’s business rule is missed or misinterpreted, the new system’s stored data will be wrong. This result of erroneous processing may or may not be discovered for a very long time, certainly long enough that it will be expensive or impossible to put right. In some cases, the time can be years into the future.

Thus, as we discussed at the beginning of this essay, the critical question for BRE is what level of risk from getting the business rules wrong is the organization willing to tolerate. If the answer is none or the least possible, then the only way to get to that highest level of accuracy is dynamic BRE as discussed in section 6.

5.3 Technical Criticisms

Three technical criticisms can be attributed to both forms of static BRE:

  • Spaghetti code, unfortunately common in many aging COBOL and other legacy language applications, seriously degrades the yield from the analysis.
  • The analyses are static, in the sense that all the code is analyzed whether or not it is executable and whether or not it represents functionality that is no longer in use. Neither form of static BRE produces a dynamic analysis which can reveal subtle interactions of programmer’s business rules and which allows us to reach 100% business rule extraction.
  • Technical implementation code – the “how” of an application, the chaff – is intertwined with the business logic – the “what” of an application, the wheat we are seeking. Some code that is clearly of a purely technical nature can be filtered out, but this problem can seriously contaminate the results even after filtering.

The most difficult problem with both static and dynamic business rule extraction (and, indeed, business analysis as well) relates not to technical issues but to understanding and managing the economics. Static business rule extraction shows a positive ROI while traveling further up the diminishing returns curve discussed above, but its own diminishing returns will still cause the ROI to turn negative before we reach 100%. We will return to this point after discussing dynamic BRE in the next section.

Business analysis and static BRE are discussed as distinct for purposes of clarity, but they do overlap to a significant degree in practice as each informs the other and so should be thought of as complementary. The business analysts will take the reports produced by static BRE analysis and abstract those reports into business analyst business rules and business requirements.

6 Dynamic Business Rule Extraction

6.1 Dynamic Business Rule Extraction Allows Business Process Optimization

The fundamental insight that drives dynamic BRE is that business analyst's business rules must be invariant across the modernization. If the business process is simultaneously morphing into something significantly different, dynamic BRE is excluded from such a project. For example, if you are manufacturing cars and are going to start manufacturing trucks, then many of the business rules will change or become obsolete and will be joined by new business rules, in which case dynamic BRE is not practical.

Dynamic BRE does not exclude optimization of the current business process into something that represents a significant benefit to the business. This is because business rules – which do not change – control update transactions in the legacy application and will do the same in the new application. Process optimization, as opposed to a fundamental change in the business process purpose and execution, primarily affects the query transactions, and affects update transactions only to the extent of determining when and where in the process they will be executed.

6.2 Stage 3: Dynamic Business Rule Extraction Methodology

Dynamic BRE is a “white-box” technique, whereby we probe deeply into an executing program to determine exactly what it is doing and why. The technique requires tools, including code coverage analysis – which illustrates which programmatic statements have and have not been executed – and interactive debugging tools that operate on the legacy code.

The dark blue area representing stage 3 dynamic BRE indicates those residual business analyst's business rules identified by dynamic BRE that were missed by stage 1 business analysis and stage 2 static BRE. These will probably be the fewest in number of the rules exposed by the 3 methods, but also the most complex and difficult – and the ones that will incur the greatest cost if found defective during production operation. Dynamic BRE will get us to 100% of the active business rules with a positive ROI in applications of a suitable scale and risk profile.

The methodology used in dynamic BRE is to create test cases using the identified business analysts' business rules, and submit those tests to a controlled test environment for the legacy system. The results are examined to see if they are the expected results from the test; if so, the identified business analysts' business rules are judged to be correct, and if not, we have identified an error in the business analysts' business rules.

Identifying omissions is less straightforward, as it is a case of the dog that didn’t bark in the night. When we execute the test cases representing the identified business analysts’ business rules, we do so using cumulative code coverage analysis. After confirming that the expected results were obtained, then we look at the coverage reports to see what legacy code was not executed. That unexecuted logic reveals the omissions in our understanding of the business rules.

Using those code coverage reports, we analyze the origin of the missing functionality and propose additions to the business analysts’ business rules. These are debated with the business analysts, refined, and added to the test sets. Then the process is repeated until we reach 100%.
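To make the mechanics concrete, here is a minimal Java sketch of the cumulative coverage step, under stated assumptions: statement-level coverage identified by IDs, with hypothetical TestCase, ExecutionResult, and LegacyTestHarness types standing in for whatever coverage tooling the legacy platform actually provides.

import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical stand-ins for the legacy test environment.
interface TestCase { }

interface ExecutionResult {
    boolean matchesExpected();           // did the run produce the documented expected result?
    Set<String> executedStatementIds();  // coverage data captured during this run
}

interface LegacyTestHarness {
    ExecutionResult run(TestCase test);
}

public class CoverageGapAnalysis {

    // Run every test case, accumulating the IDs of executed statements.
    // A run only counts toward cumulative coverage once its result is confirmed.
    static Set<String> cumulativeCoverage(List<TestCase> tests, LegacyTestHarness harness) {
        Set<String> executed = new HashSet<>();
        for (TestCase test : tests) {
            ExecutionResult result = harness.run(test);
            if (result.matchesExpected()) {
                executed.addAll(result.executedStatementIds());
            }
        }
        return executed;
    }

    // The unexecuted statements are the candidate omissions in the business rules:
    // analyze each gap, propose a rule, add a test case, and repeat until none remain.
    static Set<String> unexecuted(Set<String> allExecutable, Set<String> executed) {
        Set<String> gaps = new HashSet<>(allExecutable);
        gaps.removeAll(executed);
        return gaps;
    }
}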

In the most complex cases, where the details of the logic execution cannot be readily elicited by analysis of the code coverage reports, we will use interactive debugging to step through the executing logic. We will examine the contents of data fields referenced by the logic to determine why our analysis was incomplete. This can require the creation of special data cases and can become quite time consuming for complex logic. Fortunately, there are typically few areas in an IT system where the logic is this complex, but at the end of the day, if the cases we identify are not obsolete, then they must be understood so that the new application will be whole.

6.3 Use of Code Coverage with the Modernized System

Using code coverage on the modernized system can be useful, but it cannot identify omissions in business analyst's business rules. What it can do is identify newly introduced code which may be unintended, undesirable, or malicious.

If the coverage on the legacy system is 100% and the coverage on the new system is less than 100%, then management can ask what the purpose of the additional functionality revealed by the analysis is. For certain very high risk applications, ensuring that no unwanted new code enters the system is a significant management security issue, and this is the only way to comprehensively detect such anomalous code.
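For illustration only, a short Java sketch of that audit, assuming the new system's coverage tool can export per-unit hit counts (the map layout and names are invented):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class NewSystemCoverageAudit {

    // Given per-unit hit counts from running the complete equivalence test
    // suite against the new system, return the units that were never
    // exercised: candidates for unintended, undesirable, or malicious code.
    static List<String> unexercisedUnits(Map<String, Long> hitCountsByUnit) {
        return hitCountsByUnit.entrySet().stream()
                .filter(entry -> entry.getValue() == 0L)
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }
}

Each flagged unit then gets a management-level answer: new intended functionality, dead scaffolding to be removed, or something worse.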

6.4 Pragmatics of Dynamic Business Rule Analysis

The dynamic BRE analysis is indeed more complex operationally, but it is not an impractically expensive process. For example, in one project of 2.3 million lines of IMS COBOL, it took 6 people 18 months to map out the missing programmers' business rules. That represents 9 person years, but in a project budgeted at $100 million, and which had the possibility of misplacing literally billions of dollars each day, it was a very worthwhile investment. A more recent project invested in reducing the operational complexity (and therefore cost) through software to manage the dynamic BRE process. There was a net positive ROI in testing and avoided production defects alone, without considering the reductions in business risk achieved.

On the other hand, dynamic BRE is simpler than static BRE for the programmer analyst doing the analysis. When analyzing a program (or set of programs) statically, you have to understand all elements of what is going on which can become overwhelming even to the most capable programmer analysts. It’s just too complicated to hold it all in your mind at one time. With dynamic BRE, you only have to understand why one single program did not go down a single specific logic path. It narrows the conceptual bandwidth considerably and thereby allows greater depth in the analysis.

Perhaps more importantly, the result is an executable test rather than an analysis document that may or may not be fully complete or correct, and which may or may not be interpreted completely and correctly. An executable test has no ambiguity: either it returns the result your analysis leads you to expect, or it doesn't – in which case you dig deeper, and you continue digging until you get the program to follow the indicated logic path or you determine that doing so is impossible. The dynamic BRE process, by being so thorough, will also uncover code which appears executable on initial analysis but is subsequently revealed to be involved with subtle logic defects, i.e., bugs.

There are two key points to understand here:

  • In the process of analyzing the code to create a test that produces the desired result, you come to understand, at the code execution level, precisely and unambiguously what is occurring and why. The programmer analyst creating the test can then communicate this new information to the business analysts to update their business rules. Like static BRE, this creates an improved understanding that is captured within the set of documented business rules. In other words, even a result obtained unambiguously is communicated in written documentation, which always allows the possibility of residual ambiguity in its meaning as interpreted by the implementation team for the new system.
  • If the new system's programmers interpret the business analyst business rules incorrectly or incompletely, then the new system will produce a different result in some of the circumstances exercised by the set of test cases, and the business analyst's business rule discrepancies (or implementation discrepancies) will be exposed at that time and the necessary corrections made. It is at this point – not at the final analysis step – that 100% is achieved. We will return to this point in a moment.

The final result will be 100% because it is the execution that is compared between the old and new systems, not the documentation. Execution is unambiguous, whereas documentation may contain residual or newly introduced ambiguities. A failure to understand completely and comprehensively will be caught in the execution of the tests before the new system goes into production.

6.5 Analytical Failures in Business Rule Extraction

Just how pervasive is this failure to understand the business rules in a given program? A case study may shed some light here. In the 2.3 million line project referred to above, we had superb resources available:

  • We had a team of code readers who were documenting the raw source code in detail,
  • We had the results of very meticulous business analysis, and
  • We had the results of a fully automated static BRE analysis.

Our dynamic BRE team worked by analyzing the logic paths that had not been executed in the existing tests, and extended them by defining new sets of data conditions in the test cases until every single logic path in every program had been exercised or declared as obsolete or otherwise out of scope.

Unusually, we were not allowed to directly create the test cases ourselves. Instead, we had to document each test case we needed, and take it to the testing group. The testing group would then use the reference information from the code readers, business analysts, and static BRE analysis to create the test case. Note that this is the exact same information that was simultaneously being used to build the new system.

When we explain this experience to any group, we ask for estimates of how often the testing group got it right, i.e., created a test case which exhibited the result specified in our documented requirements. Responses ranged from 10% to 50%. About 50% was the answer, which surprised those who had even lower expectations. In other words, half the time they got it wrong, despite having far more information than most projects have available.

We pass this along as a cautionary tale: under the best conventional circumstances, good people will still get complex logic wrong half the time. Executives considering a modernization project are well advised to ponder this statistic.

6.6 Technical Criticisms

Returning to our technical criticisms of static BRE above, let us review dynamic BRE in the same light:

  • The analyses are dynamic, in the sense that all non-obsolete, executable code is analyzed as a result of the code coverage analysis. Obsolete and non-executable code is marked as such and subsequently ignored. This exposes all of the subtleties that are missed in static BRE.
  • Spaghetti code increases the effort to complete the dynamic BRE analysis but does not limit the yield from the analysis; the result will still reach 100%, it simply costs more than it would for properly structured code. (Note that for COBOL, at least, it is possible to untangle the spaghetti prior to analysis, which for a large code base can be a worthwhile investment.)
  • Technical implementation code will be essentially ignored in the dynamic BRE analysis since we are focusing on what has not been executed in the code and why. The technical implementation will be executed in order to produce the coverage report. The focus will be on the conditional aspects of the logic – what data or combinations of data will cause this or that logic path to be executed. Thus although we are looking at code in the chaff, in the process of performing this analysis we discover the wheat. And, if we fail to discover one particular kernel, it will be revealed in the execution of the tests by the new system.

6.7 Economics of Business Rule Extraction

Once you reach the point of diminishing returns in static business rule extraction, it becomes more economical to shift to a technical strategy of dynamic BRE. One could ask whether we shouldn't skip static BRE altogether and go directly from stage 1 business analysis to stage 3 dynamic BRE. It's a fair question.

Economics drives the answer. Static BRE requires the context of the application to get started. Without business analysis to begin the process, the static BRE analysts would have to perform the same introductory analysis or they would be stuck at the starting gate.

Starting with dynamic BRE directly from business analysis is analogous – dynamic BRE gets very deep into the execution of complex business and technical logic. Without the context of the application from business analysis, dynamic BRE would also be stuck at the starting gate. But it goes further than that, because much of what needs to be analyzed within a program is more efficiently done with static source code analysis. Dynamic BRE is significantly more analytically intensive than static BRE and should be reserved for update programs; we don't have to apply it to the whole library, and only rarely did we use it with query programs.

The economic problem here is that finding the point of diminishing returns in business analysis and in static BRE is more art than science. Each domain tends to see itself as the way to go forward. Business analysts can’t be certain as to when to stop and turn their efforts to static BRE, and similarly static BRE analysts can’t be certain as to when to stop and turn their efforts to dynamic BRE. Thus, managing this process to get the greatest value for the least cost requires someone who understands all 3 processes sufficiently to say, “just this much and no more.” Otherwise business analysts and static BRE experts would just analyze until the cows came home. Analysis paralysis then ensues.

It should be added that this process is not going to put the business analysts and static BRE experts out of a job. In the process of dynamic BRE, questions will be asked of the static BRE analyst. (Ideally, the same people should be involved in both static and dynamic BRE because then they will automatically take to the least effort path between the two, but this is not always practical.) The output of dynamic and static BRE analysis still needs to be documented as business analyst’s business rules and requirements at a higher level of abstraction than most programmer analysts are likely to produce. For this reason, dynamic BRE results, like static BRE results, should be communicated to the business analysts.

Business analysts seek out the abstract rules and requirements, the wheat, while programmer analysts who will be doing both static and dynamic BRE have been immersed in procedural logic their whole careers, so they tend to see only the chaff. While many can adapt to the abstractions required for eliciting business analyst's business rules, the individuals doing the work should be managed to ensure that what they produce is in fact business analyst's business rules and requirements, or they should be paired with business analysts.

In a conventional project, using business analysis alone or using business analysis + static BRE, the alternative to dynamic BRE is a fix on failure strategy in production. There are two significant problems here: (1) we must ensure that any defect will be recognized in a timely manner and (2) the cost of fixing programs and data in production is orders of magnitude higher than fixing the problem at the requirements stage of the SDLC. These problems can be difficult to solve and must be approached on a case by case basis if dynamic BRE is not considered. Frequently there is no solution and people just do the best they can, with the results we hear about.

A decision whether or not to use dynamic BRE cannot be based on technical criteria. Business management must be involved in the decision whether or not to invest in dynamic BRE, since only business management can properly weigh the competing issues of cost versus risk. In some cases the risk profile of the application does not warrant the additional cost of dynamic BRE, if the cost of an error caught in production is sufficiently low. For these cases, we recommend stopping after static BRE and using the fix on failure strategy, but these are recommendations only. Management may have other considerations and may derive a different answer to the question.

7 Modernization Testing

The fundamental problem to be solved in modernization testing is – how do we detect a failure? This is the problem with fix on failure strategies – many failures lie hidden in the data for extended periods of time, and affect large numbers of data rows before being eventually identified. Looking just at the output or the data stored in the modernized system makes it very difficult to recognize defects.

In standard testing, we test against requirements so that a defect is a failure to produce the documented expected result. However, in application modernization it is the requirements themselves that require testing. How do we detect errors and omissions in the requirements?

The only answer to both these questions is to use the legacy system as the standard of truth: it is defined as correct because it is supporting the business today. This is the basis of our Test Driven Modernization methodology. We need to compare the results of processing on the modernized system to the equivalent processing on the old system. This should occur (1) for the test cases produced by dynamic BRE and, ideally, (2) for real-time comparisons of processing in a production parallel environment.

Dynamic BRE crosses into the testing domain, and we will touch briefly on modernization testing here. (We plan a separate essay to address testing issues in detail.)

With dynamic BRE, we create and validate test cases on the old system using code coverage analysis and interactive debugging. The result will be a complete set of the documentation of business analysis business rules and an equivalent set of executable test cases.

This documentation of the business analyst business rules will be sufficiently accurate to ensure that the scope of planning and return on investment calculations considers a complete design. However, it will not be sufficient to ensure that residual errors and omissions did not creep into the implementation of the new system. Ambiguities of expression and failures in interpretation could still result in something less than 100% of the business rules in the new system even when the documentation is 100%. To assure that we have 100% of the business rules correctly implemented, we execute those test cases in the new system in a fully equivalent controlled test environment to satisfy (1).

Setting up for production parallel testing (2) will be discussed in the modernization testing essay currently under development. (Feel free to contact us if you have an urgent question in this regard.)

Creating this fully equivalent testing environment requires sufficient equivalency between the old and new system to be able to use the tests. What this means in practice is that, for a correctly modernized application, a semantically equivalent transaction (or set of transactions) submitted against equivalent data will produce a semantically equivalent result, regardless of how different the new implementation is, because the business analyst’s business rules will not have changed. Conversely, if the input and initial database state are semantically equivalent, and if equivalent transactions are submitted to both, then a discrepancy in results indicates that the business rules were not implemented equivalently.

Note that semantic equivalence is mandatory – the database data model does not have to be identical, but there must be a complete mapping between the two data models for non-obsolete data elements, including translation of data elements whose expression (such as coded values) has changed. This is a practical necessity to allow for comparisons between the two data stores, but, more subtly, if semantic equivalency cannot be established then there has been a change in the business rules that may not be apparent to the architects of the new system.

In other words, semantic equivalency is the foundation upon which a successful replacement application is built. Conversely, the lack of semantic equivalency when the business process has not changed constitutes a fundamental flaw in that foundation which will be expensively exposed sooner or later.
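As an illustration of such a mapping, the following Java sketch renders a legacy record in the new model's terms; the field names and coded-value translation are invented for the example:

import java.util.HashMap;
import java.util.Map;

public class SemanticMapper {

    // Legacy field name -> new model field name (obsolete elements are omitted).
    private static final Map<String, String> FIELD_MAP = Map.of(
            "CUST-STAT-CD", "customerStatus",
            "ACCT-BAL", "accountBalance");

    // Translation of coded values whose expression changed, e.g. "A" -> "ACTIVE".
    private static final Map<String, String> STATUS_CODES = Map.of(
            "A", "ACTIVE",
            "I", "INACTIVE");

    // Render a legacy record in the new model's terms for comparison.
    static Map<String, String> toNewModel(Map<String, String> legacyRecord) {
        Map<String, String> mapped = new HashMap<>();
        for (Map.Entry<String, String> field : legacyRecord.entrySet()) {
            String newName = FIELD_MAP.get(field.getKey());
            if (newName == null) {
                continue; // obsolete data element: not part of the comparison
            }
            String value = field.getValue();
            if (newName.equals("customerStatus")) {
                value = STATUS_CODES.getOrDefault(value, value);
            }
            mapped.put(newName, value);
        }
        return mapped;
    }
}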

Note that this testing of business rules only requires equivalent testing of the update transactions. Since business rules only apply to update transactions, all tests involving update transactions must be executed. We do not have to compare the actual transmitted results of the tests. We only have to compare the database records after processing.

However, it does not preclude testing of the queries, nor does it preclude comparing the contents of the transactional replies, if the implementation of the new system allows us to do so. If practical, this additional step will improve the quality of the testing, even though it is not necessary to prove equivalency of the business analyst rules.
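Putting these pieces together, here is a hedged sketch of the record comparison, building on the mapping sketch above (the record layout is again hypothetical):

import java.util.Map;

public class UpdateEquivalenceCheck {

    // After submitting semantically equivalent update transactions to both
    // systems, compare the affected database records. The legacy record is
    // first rendered in the new model's terms so that the comparison is
    // between semantically equivalent representations.
    static boolean equivalent(Map<String, String> legacyRecord,
                              Map<String, String> newRecord) {
        Map<String, String> expected = SemanticMapper.toNewModel(legacyRecord);
        return expected.equals(newRecord); // any discrepancy indicates a business rule defect
    }
}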

Note also that this only applies to the functionality that will replace existing functionality. Wholly new functionality is treated separately, as it has no equivalence in an old system for comparison.

8 Separating the Wheat from the Chaff

Now we come to the crux of the matter. How do we separate the kernels of actual business analyst's business rules from the chaff of the technical logic, particularly when the strands of that logic can literally flow all over a sizable program like a plate full of spaghetti? We've discussed the nuts and bolts of the process above, but what about the larger picture?

Legacy programs written in procedural languages like COBOL, PL/1 and various 4th generation languages tend to be large to very large, unlike modern Java/C# programs, which typically consist of a larger number of much smaller modules. (Let us not even speak of application systems written in mainframe assembler.) We regularly work with legacy programs of 10,000-50,000 lines of code, and some of us have worked with single programs of as much as 300,000 lines of code. One colleague worked with a single program of over 1 million lines. Yes, these are extreme cases, but we mention them because the same problems occur in all legacy programs.

Frankly, it’s often an unholy mess and it’s no wonder that analysts and programmers are reluctant to dig into it all, but failing to do so completely and thoroughly costs more – sometimes much, much more – than gritting their teeth and diving into it.

8.1 Wheat = Business Analysts’ Business Rules

We want to end up with the pure business analyst’s business rules because those will be invariant across a modernization project while everything else can change, as we discussed at the beginning of the essay. The program implementation language may change, the hardware and operating system may change, the database may change, indeed functional requirements on the system may change – but the business analyst’s business rules only change when the business processes change in some fundamental way. If we can prove that the business analyst’s business rules controlling update transactions for the new system are precisely equivalent in function to the legacy – no more and no less – then the new system can proceed into production with the confidence of all concerned, not the hope and false confidence which attends almost every large or very large modernization project today.

In stage 1, business analysts will produce business analyst’s business rules, but all too often key elements of specificity are not documented authoritatively, such as precise calculation formulas, the conditionalities applied to sets of data to determine eligibility, and any of millions of other logic patterns which must be abstracted into a rule. This is why in a stage 1 only analysis, programmers will frequently open up specific programs and attempt to resolve ambiguities by reading the code and then explaining it to the business analysts. This is much more difficult than using a static BRE tool, and so it is done only when considered absolutely necessary.

8.2 Chaff = Programmers’ Business Rules

As we discussed in the static BRE section, we have to wade through all of the programmatic logic to find our kernels of wheat. The logic consists of programmers’ business rules and technical logic. Usually we can filter out the technical logic, but we still have to wade through the procedural logic of the programmers’ business rules that implements the business analysts’ business rules. We have to boil down the 1 million lines of code in the example above to end up with maybe two or three thousand actual business analysts’ business rules.

In projects that go on to stage 2, the BRE experts will take a steady stream of questions from business analysts while analyzing the code base with the parsing tools on which static BRE is built. The process also works in reverse, as the BRE experts propose rules they have abstracted from the code and debate them with the analysts, to the benefit of both.

8.3 Only Execution Can Prove Equivalence

When static BRE runs out of steam, for projects in which dynamic BRE is justified, the stage 3 dynamic BRE process operates similarly to static BRE in its interactions with the business analysts. The dynamic BRE experts will take proposed rules and requirements back to the business analysts (and probably the static BRE analysts) and debate them. For example, a programmer analyst performing dynamic BRE might uncover circumstances which may or may not be obsolete and take them back to the business analysts, who in turn may go back to the business owners; eventually it becomes clear whether something is or is not obsolete.

However, ambiguities remain in both processes. Both business analysis and static BRE are dependent on human analysis. Both require understanding, and this is why neither process can ever reach 100% for large to very large systems. The interactions and permutations of the procedural logic will defeat anyone or any group from achieving a perfect, 100% understanding. The complexity exceeds what a person can hold in their mind.

The reason that we focus on business rules is that this allows us to reduce the complexity of the task to a level that we can understand. Then, we use these rules to build the new system, and then we validate that the rules are complete by demonstrating functional equivalence of the update transactions of the new system compared to the update transactions of the old system.

Ultimately, it is the test cases themselves, proven to provide 100% coverage of non-obsolete code, that ensure that 100% of the active business analysts' business rules have been implemented with functional equivalence. Only execution, not analysis, can prove equivalence, because the results of execution are unambiguous. Analysis produces an assertion that only execution can prove or disprove.

The cost of this analysis and testing can be high, but the cost of not doing it can be higher still, in some cases far higher. This is why we started with a risk analysis and asked, “what is the potential cost of an error or omission in business rules in production operation?” Or, perhaps a better way to put this is, “what are you willing to pay to minimize the cost of an error or omission in business rules during production operation?”

If you are not prepared to pay this price but also not prepared to run the business risk, perhaps you should consider a more incremental strategy to modernization which does not involve a complete overhaul of the application. We believe it is better to get a realistic grip on the likely full price at the outset than it is to begin the process with unrealistic estimates and have to go back to the well again and again for more resources.

9 Preventing “Legacy” In The Future

Once you have your business rules, then what? What happens in most projects today is that the business rules are interpreted by programmers and rendered into a modern, object oriented language such as Java or C#. The problem with this approach is that business rule expressions rendered into a procedural (or “imperative”) language will eventually turn back into “legacy” again.  After all, when the COBOL systems were new, their business rules were (or should have been) as cleanly expressed as newly written Java is today.

The problem is that Java and C# based systems, while better than COBOL in several key technical respects, are nonetheless just as subject to the “legacy” problem as COBOL. Just recently we were asked about a “legacy Java” system of over a million lines of code that needed to be modernized. Let us briefly discuss the alternatives to Java and C#.

9.1 Logical Expressions – OWL and SWRL

Web Ontology Language (OWL) and Semantic Web Rule Language (SWRL), standards published by the World Wide Web Consortium (W3C, www.w3.org), allow the expression of extracted business rules in logical form. For example, given the following informal expression of a business rule governing the articles of incorporation for a state government:

Article 4 – An effective date may be specified. The effective date can be
up to 90 days AFTER the Articles of Incorporation have been filed by the
Office of the Secretary of State

It can be expressed as:

if ?sub is a app:UnSubmittedSubmission
and ?sub osos:effectiveDate ?subef
and ?sub osos:dateForSubmissions ?subdate
and swrlb:subtractDates(?dataDiff, ?subef, ?subdate)
and ?dataDiff > "P90D"^^xsd:duration
Then ?sub is a app:InvalidSubmissionsEffectivity.

The very significant advantage of this approach is that the result will never become "legacy" again. We won't have to repeat this exercise in 20 or 30 years' time.
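For contrast, the same rule rendered procedurally in Java – the very style that, once buried among thousands of similar methods, decays back into legacy – might look like this sketch:

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class Article4Check {

    // Returns true when the submission's effectivity is invalid, i.e. the
    // effective date is more than 90 days after the Articles were filed.
    static boolean invalidEffectivity(LocalDate filedDate, LocalDate effectiveDate) {
        return ChronoUnit.DAYS.between(filedDate, effectiveDate) > 90;
    }
}

The logic is identical, but here the rule lives inside the program rather than in a declarative form that outlives it.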

OWL and SWRL are hardly the only such logical languages. A language gaining in popularity is Semantics of Business Vocabulary and Rules (SBVR), based on standards published by the Object Management Group (www.omg.org).  The major difference between the two is that SBVR is more easily read by human beings but is not executable unless a special subset of the language is used.  OWL and SWRL are directly executable by an appropriate run-time environment. There are other efforts going on at the moment, such as attempts to use visual modeling languages such as UML as the basis for generating executable code.

Regardless of the form, if the result of our business rule extraction can be separated from the technical logic that invokes the rule, then we can futureproof our new application.

9.2 Domain Specific Languages (DSLs)

DSLs are an approach that can provide the same separation between technical logic and business rules as the logical languages in the previous section. The major difference is that a DSL is, by definition, a language for a specific domain, such as your business. By hiring a compiler writer in place of a programmer, you can define your own language in which to express your rules and futureproof your application. DSLs are applied to many domains, the best known of which is Structured Query Language (SQL) for accessing data in relational database management systems. The major disadvantage of the DSL approach is that the result will not be a standards-based language, and eventually the cost of support will become problematic.
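As a toy illustration of the DSL idea (the grammar below is entirely invented), a few lines of Java can interpret a rule written in a purpose-built vocabulary, keeping the rule text itself free of technical logic:

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class TinyRuleDsl {

    // Evaluate a rule of the invented form "EFFECTIVE_DATE WITHIN <n> DAYS OF FILING_DATE".
    static boolean evaluate(String rule, LocalDate filing, LocalDate effective) {
        String[] tokens = rule.trim().split("\\s+");
        if (tokens.length != 6 || !tokens[1].equals("WITHIN") || !tokens[3].equals("DAYS")) {
            throw new IllegalArgumentException("Unrecognized rule: " + rule);
        }
        long limit = Long.parseLong(tokens[2]);
        return ChronoUnit.DAYS.between(filing, effective) <= limit;
    }

    public static void main(String[] args) {
        LocalDate filed = LocalDate.of(2015, 5, 7);
        // The business rule stays in its own vocabulary, outside the technical code.
        System.out.println(evaluate("EFFECTIVE_DATE WITHIN 90 DAYS OF FILING_DATE",
                filed, filed.plusDays(91))); // prints false: 91 days is out of range
    }
}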

9.3 Rules Engines

A popular approach is to use a commercial or open source business rule management system (BRMS), commonly referred to as a "rules engine". These are typically based on the Rete algorithm. Different systems allow different forms of rule expression, one of the most readily understandable being decision tables in spreadsheet form, such as:

[Decision table in spreadsheet form]

In such a decision table, each column is a condition and each row results in a rule:

[The rule generated from one row of the decision table]

(Examples taken from the JBoss documentation.)

There are also rule languages, such as the one supported by the open source Drools engine. It is interesting to note that the Drools language is designed to support both domain-specific and natural languages.
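As a minimal sketch of the rule-language approach – assuming Drools 7.x and its KieHelper test utility, with the fact type and rule invented for illustration – a rule can be declared and fired as follows:

import org.kie.api.KieBase;
import org.kie.api.definition.type.FactType;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.internal.utils.KieHelper;

public class DroolsSketch {

    // An invented rule in the Drools rule language (DRL), with a declared fact type.
    private static final String DRL =
            "package rules\n" +
            "declare Submission\n" +
            "    daysAfterFiling : long\n" +
            "    valid : boolean\n" +
            "end\n" +
            "rule \"Effective date within 90 days of filing\"\n" +
            "when\n" +
            "    $s : Submission( daysAfterFiling > 90, valid == true )\n" +
            "then\n" +
            "    modify($s) { setValid(false) }\n" +
            "end\n";

    public static void main(String[] args) throws Exception {
        KieBase kieBase = new KieHelper().addContent(DRL, ResourceType.DRL).build();
        KieSession session = kieBase.newKieSession();

        // Create a fact instance of the type declared in the DRL.
        FactType submissionType = kieBase.getFactType("rules", "Submission");
        Object submission = submissionType.newInstance();
        submissionType.set(submission, "daysAfterFiling", 120L);
        submissionType.set(submission, "valid", true);

        session.insert(submission);
        session.fireAllRules(); // the rule marks the late submission invalid
        System.out.println("valid = " + submissionType.get(submission, "valid")); // false
        session.dispose();
    }
}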

9.4 Pragmatics

Futureproofing your modernized application can be accomplished in a sensible and cost effective manner if you utilize a method that fully separates the technical logic that runs the application from the decision logic that implements the business rules. All of the methods discussed above will do so to a greater or lesser extent.

By far the best approach combines an ontology (the data definitions plus the conceptual business rules) with a run-time implementation that executes the transactional business rules. This requires the least effort and provides the cleanest separation between technical logic and business rules, but any of these methods will work if utilized properly. Conversely, any approach that allows business rules to be expressed in procedural logic intermixed with technical logic will turn back into legacy again, sooner or later.

10 Conclusions

The fundamental problem with modernization is that business operations depend on the consistency and functional reliability of the existing IT application software, but the cost, risk, and time required to bring that consistency and reliability forward are often significantly underestimated. Only 100% complete and error-free extraction of the active business rules can produce that consistency and reliability as operations transition to the new system.

Those business rules are incorporated into the system requirements, but requirements based testing, even with the best methodology employed, can never discover errors and omissions in the rules within the requirements upon which the tests are based. Only comparison with the legacy system at the execution level can reveal errors and omissions in the business analysts’ business rules. Note, however, that comparison with the legacy system will not reveal errors and omissions in the requirements that do not relate to business rules. We can only detect errors in the business analysts’ business rules.

As we have discussed at length, identifying all of the active business analyst’s business rules in a legacy application is difficult, and gets progressively more difficult as the size of the application source code library increases and as the process proceeds. The tedious nature of the analysis makes it very tempting to say, “this is good enough! Let’s go code!”

Unfortunately, this attitude leads to cost overruns and, particularly, functionality shortfalls in the replacement application. Research over 3 decades has well established that discovering and fixing a defect in production is 200-1000 times more expensive than fixing it at the requirements stage. But at the beginning of a project, such defects are a distant prospect and the programmers are itching to get started now. (This is the time to re-read The Mythical Man-Month.)

In some cases, it is acceptable to start the project with less than 100% of the active business rules, providing that management signs off on it explicitly, such as cases where the cost of an error is writing an apologetic letter to a customer. In other cases, it is not acceptable to misplace $10 billion in a complex funds transfer or to lose an aircraft and crew due to maintenance failures. This is of particular concern in highly regulated industries where non-compliance can become quite expensive. There is no one size fits all answer, but the only certainty is that the rules that you miss will be the ones that hurt you the most when you are trying to get the finished system into production.

 

References

Barry Boehm and Victor R. Basili, "Software Defect Reduction Top 10 List", IEEE Computer, vol. 34, no. 1, January 2001.

http://www.cs.umd.edu/projects/SoftEng/ESEG/papers/82.78.pdf

The Business Rules Group (formerly known as the GUIDE Business Rules Project), "Defining Business Rules ~ What Are They Really?", Final Report, revision 1.3, July 2000.

www.businessrulesgroup.org/first_paper/BRG-whatisBR_3ed.pdf.

Michael Lundblad and Moshe Cohen, "Software Quality Optimization: Balancing Business Transformation and Risk", IBM, March 2009.

http://c3328005.r5.cf0.rackcdn.com/d94f00d6-de05-4891-b364-a8b2df45a51c.pdf

One Response to Business Rule Extraction Essay

  1. An excellent article and a process (Dynamic BRE) that I have and will recommend to anyone doing a “like-for-like” Transformation Project! (Note: Jim was an active participant in the NY Fed project.)
