By Barry Tait, Director of Modernization and Cloud Strategies, Modern Systems &
David Tanacea, Chief Domains Officer, Ness Digital Engineering
As children of the ’80s settle into middle age, we’re seeing a lot of short-lived, nostalgia-driven tech trends that remind the rest of us just how old we really are. Everyone from Ariana Grande to indie bands in dive bars is now offering an interesting throwback on their merch tables: cassette tapes. In 2018, cassette tape sales in the UK soared 125 percent to a 15-year high, amounting to nearly 50,000 albums purchased on the old format. Fads come and go, and even though it’s a far cry from the tape’s heyday in 1989, when close to 83 million cassettes were bought by British music fans, it’s a stark reminder that pop culture knows no bounds, and nothing really lasts forever.
Interestingly, this example is both illustrative of the explosive impact of retro trends and a cautionary tale about the power of disruptive innovation. When it comes to disruption, most major industries are weathering the change as we speak. In consumer banking, for example, young technology-centric companies are outpacing their larger, more established counterparts. Many centuries-old banks are struggling to pump out mobile apps and features fast enough to compete with cloud- and mobile-first disruptors, whose business models are almost entirely digital. In the automotive space, companies like Tesla have driven expectations of quality and experience to an entirely new plane, threatening to leave their fossil-fueled rivals in the dust.
Without applying digital thinking across everything they do, established companies will continue to fall short of their newer, more agile counterparts, and their biggest roadblocks are the legacy systems that make them tick. Unfortunately, simply adding a digital frontend on top of a legacy mainframe system is akin to ‘slapping lipstick on a pig.’ The time required to develop and release new features on top of an existing monolithic legacy architecture is far too long, and businesses, whether in banking, transportation, retail, or government, wind up suffering from slow forward progress.
Luckily, most CIOs are well aware of the existential threat of slow transformation, and they’re throwing their top minds (and a lot of capital) at the problem, but it’s not a simple task.
The reality is, all business units and teams are impacted by the upheaval that comes with retooling the business to react quickly to change. But mainframe owners and operators arguably stand to feel the greatest impact. The move from waterfall and siloed development operations in relatively modern IT environments to a DevOps-centric land of continuous integration is difficult enough, but escaping the confines and ancient hieroglyphs of the mainframe world is an entirely different monster. So different that many have shied away from it entirely, kicking the proverbial can down the road as far as they can, hoping for a miracle before they run out of road.
But the facts say retiring big iron as soon as possible is the right move. Organizations that choose to move away from the mainframe to distributed systems are estimated to achieve monthly cost reductions of more than 60 percent. Plus, the anxiety of keeping rotting infrastructure running and hunting for developers who understand mainframe assembler, COBOL, or CA Gen disappears, and applications and databases that were once black boxes of mystery are suddenly open for evolution, integration, analysis, and change at the rapid modern pace that’s required to survive.
There are two major macro-level topics to address when it comes to a legacy modernization project:
One: Not All Legacy is Created Equal
One of the biggest issues facing customers today is the continued confusion caused by the industry’s lack of agreement on what the term ‘legacy’ actually means. Legacy in the context of mainframes is a completely different starting point compared to other legacy modernization projects, where monolithic Java is refactored into microservices. Legacy mainframe projects involve decades-old, complex, integrated applications developed in different languages and accessing different data stores, all supported by a complex operational and infrastructure landscape that requires unique yet rapidly vanishing skills.
One thing’s for sure: before determining the ideal path for a legacy project, it’s critical to kick off with a comprehensive assessment to understand the details of your environment. An assessment provides a guidepost with clear recommendations for various disposition strategies and pathways. The good news is that there are several different approaches available when looking to modernize your legacy-based assets. The ideal approach (or combination of approaches) will depend on several factors, including budget, timeframe, available skill sets, and the future direction of your company, to name a few. A recent blog post, ‘Disposition Strategies to Consider for your Next Legacy Modernization Project,’ offers advice and specifics about the various disposition options, when to select each, and why.
Two: Finding Optimal Results in the New World Order
The second macro-level issue when it comes to legacy modernization is finding the optimal operating model and technology solutions to best increase productivity and scale in the new world. This operating model consists of four main areas:
- Modern Application Architectures
- Modern Delivery Pipelines (CI/CD)
- Automated Testing
- Agile Practices
For each of these four categories, the first step is to identify the principles and practices that are best for the organization. It’s critical to deeply dissect the business problem before determining which flavor of service makes the most sense post-conversion.
For example, before determining the right service discovery/communication and container orchestration strategies within a modern architecture, you should determine how they fit into your desired business goals.
When agility and/or horizontal scalability is a business goal, a microservices architecture is often employed. The design of this architecture, covering all non-functional requirements, should be based squarely on the business objectives. The same is true for the functional requirements. Services should be designed with business objectives in mind (bounded contexts) and aligned with product teams in the organizational structure. In this way, more agile business and delivery principles can be employed. For some applications, the goal might be to move them to a cloud-native state, taking advantage of a microservices architecture. The most important thing to remember is that not all applications require the same treatment, nor should you move everything to the cloud just for the sake of doing so. Disposition strategies need to be rooted in, and aligned with, specific business goals. For instance, if cost reduction is the only goal (MIPS and licensing reduction), a lift-and-shift strategy can be employed without re-architecting to microservices.
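To make the bounded-context idea concrete, here is a minimal sketch in Python of a hypothetical ‘Accounts’ service carved out of a monolith. The names (Account, AccountService) and the business rules are invented for illustration, not drawn from any real system:

```python
from dataclasses import dataclass

# Hypothetical "Accounts" bounded context extracted from a monolithic
# banking application. Everything here is illustrative only.

@dataclass
class Account:
    account_id: str
    balance_cents: int  # integer cents avoid floating-point rounding,
                        # a habit COBOL shops know from packed decimals

class AccountService:
    """Owns all account state and rules; other services call its
    interface rather than reaching into its data store."""

    def __init__(self) -> None:
        self._accounts: dict[str, Account] = {}  # in-memory stand-in for the service's own database

    def open_account(self, account_id: str) -> Account:
        if account_id in self._accounts:
            raise ValueError(f"account {account_id} already exists")
        account = Account(account_id, 0)
        self._accounts[account_id] = account
        return account

    def deposit(self, account_id: str, amount_cents: int) -> int:
        """Apply a deposit and return the new balance in cents."""
        if amount_cents <= 0:
            raise ValueError("deposit must be positive")
        account = self._accounts[account_id]
        account.balance_cents += amount_cents
        return account.balance_cents
```

The point of the sketch is ownership: the service encapsulates its own state and rules, so other contexts integrate through its interface rather than sharing its tables, which is what lets a single product team evolve it independently.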
Modern delivery pipelines and continuous integration strategies are quite different than the typical release schedules of mainframe applications. Attention must be paid to the cultural issues, principles and practices involved in these functions before employing automation and tools. Likewise, “shift-left” test automation principles and integration into delivery pipelines must take principles and training into account before implementing changes.
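As one hedged illustration of the ‘shift-left’ principle, in Python: business logic lifted out of a legacy routine is exposed as a plain function so it can be unit-tested at commit time, long before any end-to-end batch run. The interest rule and its expected value below are made up for the example:

```python
def monthly_interest_cents(balance_cents: int, annual_rate_bps: int) -> int:
    """Simple monthly interest, rounded down; rate given in basis points.
    A made-up stand-in for logic ported from a legacy routine."""
    if balance_cents < 0 or annual_rate_bps < 0:
        raise ValueError("inputs must be non-negative")
    return (balance_cents * annual_rate_bps) // (10_000 * 12)

# Characterization test: pins the behavior observed in the legacy
# system, so the delivery pipeline fails fast if a refactor changes
# the result. This runs on every commit, not at the end of a release.
def test_monthly_interest_matches_legacy_output():
    # $1,000.00 at 2.50% annual is about $2.08 per month
    assert monthly_interest_cents(100_000, 250) == 208
```

Tests like this one run in seconds on a developer workstation, which is what moving testing ‘left’ in the pipeline actually buys you: defects surface at commit time instead of during a late integration cycle.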
The modernization of legacy mainframe applications is a little like walking a tightrope between an old world and a new one. On the one hand, monolithic mainframe legacy systems have been the reliable workhorses of business computing for decades; they were unquestionably powerful and stable. On the other hand, as digital transformation has moved far beyond a buzzword or ‘nice-to-have’ and into the mainstream realm of ‘must-have,’ the constraints presented by legacy environments (slower time to innovate, high TCO, and lack of agility, to name a few) have become stifling and restrictive to the point of being destructive.
Always Forward, Never Back
While it’s fun to reminisce on the excitement you felt when you finally got the Huey Lewis and the News: Fore! cassette from Columbia House back in ‘86, you probably won’t be turning in your smartphone and closing your Spotify account to pivot back to a JVC dual cassette deck any time soon; the technology is just too inconvenient and restrictive.
It’s time to show your IT applications and infrastructure the same respect you’ve shown your music collection and listening habits.
Now more than ever, organizations must reduce the risk associated with aging technology and focus on growing their businesses via access to newer, elastic IT methods.
While technology selection is an important aspect of these projects, cultural changes as part of overall operational structure are equally critical, and where organizations often need the most help.
Whichever path you decide to take with your legacy environment, be sure to seek out an approach that leverages automated tooling combined with proven methodologies and conversion processes that are time-tested and continuously evolving. Striking the best balance between what’s old and new is a familiar problem. With legacy modernization this means approaching projects with an attitude of respectful admiration for the system to be modernized.
Rather than viewing it through the lens of what is available today, look at the choices that were available when that system was selected. You will usually find a lot to admire in the inventiveness and resourcefulness of those who cobbled the system together and successfully overcame the technological limitations of the day to produce a platform that answered the company’s needs. Read the blog post ‘Admiration and Modernization: A Lesson from Vacation’ to learn more, or visit Ness’ blog for more great articles.