In the 2000s, large organizations questioned their dependency on mainframes and legacy code. At the time, a clean break seemed impossible or, in the best case, likely to have unbearable consequences. They resigned themselves to putting up with this costly relationship. It is now time to reevaluate that position and explore the new technologies that can break the mainframe-legacy-money dogma.
Cumbersome, expensive and complex.
In the early 80s, mainframes already seemed to be living their last golden years: cumbersome, expensive, complex. The 90s saw the launch of many projects aimed at reducing the device's critical importance in large public and private companies. They were a resounding failure.
Indeed, mainframe skills were growing scarcer every day, owing to incomplete documentation, an aging workforce and new recruits' waning interest in these outdated technologies. Reaching functional equivalence through a migration to modern platforms turned out to be a real headache. In the end, fear of the millennium bug led CIOs to shelve these projects for good.
Today, no one can deny that most critical IT activities are still hosted on mainframes and written in COBOL, PL/I or another legacy programming language. Are we then bound hand and foot?
It would surely be counterproductive to treat this as an irremediable fact. Must we simply live with this so-called technical debt? Fatalism is a luxury that budgetary pressure and the inflexibility of mainframes no longer allow.
Spend on future investments, or on paying off technical debt?
To achieve functional equivalence, many advocate rewriting: translating one's legacy system into a modern language. It seems to be THE good idea: finally shed the ancient legacy and move to Java or C#. In reality, it would be extremely time-consuming, expensive, dangerous or, to say the least, pointless. We are talking about millions of lines of code, and these huge systems cannot be stopped even for a moment, as too many critical applications depend on them. Moreover, what benefit would there be in rewriting these programs, replicating existing bugs or even introducing new ones?
How can a company benefit from modernization rather than suffer from it? How can an IT system be optimized safely?
Quite simply, by not modifying the source code at all and handing it directly to a compiler, as part of a rehosting process.
A compiler translates a programming language into machine code for a target platform. Migrating the databases and/or the transaction processing monitor to modern platforms or emulators completes the rehosting. The compiler also brings technical and financial benefits, making it the perfect answer to the market's needs.
This process is reversible and therefore presents no technical risk, as the functional source code remains unchanged. The code can be recompiled and duplicated on a new machine while the mainframe is preserved in parallel. This ensures service continuity for the company, with the possibility of transferring programs gradually, debugging the code as it goes along and, if necessary, rolling back: a continuous, risk-free project. The icing on the cake is the total cost of rehosting: lower than the mainframe expenditure, it opens the door to investments in new developments.
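The parallel-run idea described above can be sketched as a simple comparison harness: run the same workload on both platforms and only let the rehosted version take over while its outputs match. The sketch below is a hypothetical illustration in Python; the two "run" functions are placeholders standing in for the real legacy job and its recompiled twin, not part of any actual rehosting toolchain.

```python
# Hypothetical sketch of a parallel-run check during gradual rehosting.
# run_on_mainframe / run_on_rehosted are placeholders: in practice they
# would invoke the legacy batch job and the same source recompiled on
# the new platform.

def run_on_mainframe(records):
    # Placeholder for the legacy job (e.g. a COBOL batch program).
    return [r * 2 for r in records]

def run_on_rehosted(records):
    # Placeholder for the identical source, recompiled on the target.
    return [r * 2 for r in records]

def parallel_run(records):
    """Run both versions and report whether their outputs match.

    While they match, the rehosted job can take over; on any mismatch,
    the mainframe remains the system of record (the rollback path).
    """
    legacy = run_on_mainframe(records)
    rehosted = run_on_rehosted(records)
    mismatches = [i for i, (a, b) in enumerate(zip(legacy, rehosted))
                  if a != b]
    return {"match": not mismatches, "mismatches": mismatches}
```

Because the functional source code is never modified, any mismatch points at the platform migration itself (compiler, database, transaction monitor), which is exactly what makes the project reversible.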
Moving forward, step by step
To achieve this, several steps are essential:
- An inventory, to collect the system's metrics (size, complexity, full coverage of the technical platform's functionalities);
- Identifying errors and bugs, removing dead code and refactoring;
- Drawing up an action plan and processing gradually;
- Sketching new directions for the target code.
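The inventory step above can be sketched as a small script that walks a source tree and collects basic size metrics per file. This is a minimal illustration in Python: the file extensions, the choice of metrics and the fixed-format COBOL comment convention (an asterisk in column 7) are assumptions for the sketch, not part of the process described here.

```python
import os

def inventory(root, extensions=(".cbl", ".cob", ".pli")):
    """Collect basic size metrics for each source file under `root`.

    Returns a list of (path, total_lines, comment_lines) tuples.
    A line with '*' in column 7 is counted as a comment -- a
    simplification that assumes fixed-format COBOL sources.
    """
    metrics = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                lines = f.readlines()
            comments = sum(1 for l in lines if len(l) > 6 and l[6] == "*")
            metrics.append((path, len(lines), comments))
    return metrics
```

In a real project, this kind of inventory would be extended with complexity measures and cross-reference data to locate dead code, feeding the action plan for gradual processing.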
Service continuity, reversibility and low cost are the assets of this evolved rehosting process, guaranteeing the company's stability. We should now regard legacy systems as potential sources of optimization rather than as technical debt.