
Are you looking to modernize your tech stack but feel hamstrung by your legacy systems? If so, you're not alone. Many organizations are in the same boat—stuck with old, outdated systems that are holding them back from taking advantage of newer, more innovative technologies.

According to Gartner, by 2024, 80% of CIOs surveyed will list modular business redesign, through composability, as a top five reason for accelerated business performance.  

So, what exactly are the problems with legacy architectures?

  • They are not composable; they lack APIs to interact with other platforms
  • They are not scalable or stable and often have performance issues
  • They are difficult to maintain or update, increasing IT costs in both time and resources
  • Extensively customized and third-party components may no longer be supported
  • They can no longer adapt quickly to emerging customer needs

Organizations are sitting on age-old infrastructure and applications, built over many years to meet the needs of their business. For that reason, these applications are not easy to replace. So, what options do organizations have to modernize?


Replace / Migrate

The first option is to replace the legacy system wholesale by migrating to a modern platform. Even though on the surface this sounds like the ideal solution, in practice it's often very disruptive. The primary reason is the reliance on these applications and the mission-critical workflows tied to them. They power many external and internal touchpoints and experiences, and you can't afford to destabilize the backend those touchpoints so heavily rely upon.

Over the years, integrations with various upstream and downstream systems have been built, and the core capabilities of these systems have been customized to meet evolving business needs. This means it's not easy to find one system that can do everything your legacy system does, even though it's been put together with glue and duct tape.

Finally, no one wants to spend the time and money needed for a migration project. Most organizations only invest in projects that will bring them immediate ROI. Complete system overhauls just don't fit into this model of business thinking: they require too long-term an investment without any tangible benefits at first glance (and many end up costing more than planned).

Re-Architect / Re-Build

This involves rewriting the legacy application from scratch. Similar to the migration option, this is also highly disruptive, time-consuming, and costly. If you do decide to go forward with a rebuild, you'll want to employ a composable strategy. However, chances are you will have a hard time justifying this approach to your steering committee.

Retain / Encapsulate

What if you could encapsulate your existing legacy system with a modernization layer? The idea here is to create a modern facade on top of your legacy systems to interface with modern applications while replacing pieces of the legacy system one small step at a time.

The “Strangler Pattern” is a technique for slowly replacing legacy systems with new ones. As the name suggests, the idea is to "strangle" the old system by gradually adding new features and functionalities on top until the legacy system is eventually replaced. This approach has several advantages, including that it doesn't require a complete rip-and-replace of the old system.

How the “Strangler Pattern” Works

There are two main ways to implement the Strangler Pattern: top-down and bottom-up. 

With top-down strangulation, you start by creating a new system that mimics the functionality of the old one. Once this new system is up and running, you slowly start routing traffic away from the old system and towards the new one. Eventually, all traffic will be directed to the new system, at which point you can decommission the old one altogether. 
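The routing step above can be sketched in a few lines. This is a minimal illustration, not a production router: the handler functions and the set of migrated paths are invented for the example. A facade decides, per request, whether the new system or the legacy one responds, so traffic shifts one route at a time.

```python
# Top-down strangulation sketch: a routing facade gradually redirects
# traffic from the legacy system to the new one. All names are illustrative.

MIGRATED_PATHS = {"/products", "/search"}  # routes the new system already serves


def legacy_handler(path: str) -> str:
    """Stand-in for the old system's request handling."""
    return f"legacy response for {path}"


def new_handler(path: str) -> str:
    """Stand-in for the replacement system's request handling."""
    return f"new response for {path}"


def strangler_facade(path: str) -> str:
    """Route each request to whichever system currently owns the path."""
    if path in MIGRATED_PATHS:
        return new_handler(path)
    return legacy_handler(path)
```

Growing `MIGRATED_PATHS` over time is the "strangling": once it covers every route, the legacy handler receives no traffic and can be decommissioned.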

Bottom-up strangulation takes a different approach. Instead of starting with a brand-new system, you begin by identifying pieces of functionality that can be migrated from the legacy system to a new one without disrupting business operations. These migrated pieces are then wrapped in an API that exposes them to other parts of the system. This process is repeated until all functionality has been relocated to the new system and the legacy system can be decommissioned. 
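The bottom-up variant can be sketched as follows, assuming a hypothetical pricing capability as the piece being migrated. The wrapper class plays the role of the API: callers depend on it, not on the legacy module, so the implementation behind it can be swapped without disruption.

```python
# Bottom-up strangulation sketch: one piece of legacy functionality is
# wrapped in a stable API, then replaced underneath. Names are invented.

class LegacyOrderModule:
    """Old, undocumented entry point buried in the legacy system."""
    def calc(self, qty, unit_price):
        return qty * unit_price


class MigratedPricing:
    """New implementation honoring the same contract."""
    def calc(self, qty, unit_price):
        return round(qty * unit_price, 2)


class PricingAPI:
    """The stable interface other components call; hides the backend."""
    def __init__(self, backend):
        self._backend = backend

    def quote(self, qty: int, unit_price: float) -> float:
        return self._backend.calc(qty, unit_price)


# Callers keep using PricingAPI while the backend is swapped underneath:
api = PricingAPI(LegacyOrderModule())
api = PricingAPI(MigratedPricing())  # migration completed transparently
```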

This is the most practical approach organizations can employ to charge ahead toward digital transformation while maintaining stable operations. For instance, if you have a homegrown CMS, you create an API layer that encapsulates the legacy system and exposes only the content and functions that are required by the new customer experiences that you'd like to launch.

Legacy Systems Need More Than APIs 

Okay, so you've added APIs to your legacy applications. We should be good to go, right?

Well, not quite. Legacy systems were never built to support today's traffic loads from modern touchpoints, and their infrastructure can't keep up with the demand of your modern apps or websites. This means that if you want these old apps running smoothly behind an API-driven front end, they need something more: a fast-performing data layer that indexes what would have been stored locally in the legacy applications.
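The idea of such an indexing layer can be sketched with a read-through cache, assuming invented class names: the first request pays the legacy system's cost, and every repeat request is served from the fast index, shielding the weak backend from modern traffic loads.

```python
# Sketch of a fast-performing data layer that indexes legacy data.
# LegacyStore and IndexedDataLayer are hypothetical stand-ins.

class LegacyStore:
    """Stand-in for a slow legacy backend; counts how often it is hit."""
    def __init__(self):
        self.calls = 0

    def lookup(self, key: str) -> str:
        self.calls += 1  # imagine an expensive query against old infrastructure
        return f"record:{key}"


class IndexedDataLayer:
    """Serves reads from a local index, populating it on first access."""
    def __init__(self, backend: LegacyStore):
        self._backend = backend
        self._index: dict[str, str] = {}

    def get(self, key: str) -> str:
        if key not in self._index:  # read-through: only misses hit legacy
            self._index[key] = self._backend.lookup(key)
        return self._index[key]
```

A real data layer would also handle sync, invalidation, and persistence, but the principle is the same: the legacy system is hit once per record, not once per request.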

What if you have multiple legacy systems that hold onto related data and you need these relationships to be exposed for the front-end experience? An example of this in an e-commerce domain would be a grocery brand with a product catalog, ingredients, and recipes that may all be stored in separate homegrown, proprietary, or legacy applications. This content from multiple systems may need to be connected before exposing it to an experience such as a product details page on an e-commerce website. 
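Continuing the grocery example, the join can be sketched like this. The three dicts are stand-ins for the separate catalog, ingredients, and recipe systems; a real data layer would sync and index them rather than hold them in memory.

```python
# Sketch of unifying related records from three separate legacy systems
# before exposing them to a product details page. Data is invented.

catalog = {"sku-1": {"name": "Tomato Soup"}}
ingredients = {"sku-1": ["tomatoes", "basil", "salt"]}
recipes = {"sku-1": ["Quick Tomato Bisque"]}


def product_details(sku: str) -> dict:
    """Join data from three sources into one response for the front end."""
    return {
        "sku": sku,
        **catalog.get(sku, {}),
        "ingredients": ingredients.get(sku, []),
        "recipes": recipes.get(sku, []),
    }
```

The front end receives one connected record instead of stitching three API responses together itself.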

Now, you can argue that we should be able to modernize each individual system with an API and then simply call the APIs from the front end and connect the data on the client side. This is theoretically possible, but it’s not sustainable. 

If you want to build multiple front-end applications relying on the same underlying data and business logic, especially if the frontends are built on several different frameworks, you have to repeat the same work every time, creating redundancies and inconsistent experiences. To make matters worse, what if the data in each of these legacy systems has gaps, inconsistencies, and inaccuracies?

Adding APIs on top of bad data and content doesn’t solve this problem. After all, garbage-in means garbage-out.  

So, What’s the Best Way Forward?

I hope it is evident that a rip-and-replace or a full rebuild is not a practical solution for legacy modernization. Piecemeal, step-by-step replacement of functional components using the Strangler Pattern is definitely a more viable option. However, when you implement this pattern, you have to be aware that system performance, scalability, and data unification are real concerns that must be addressed.

What you need is a way to create a scalable, persistent, and API-enabled data layer that syncs with your legacy backends and is able to unify, normalize, and optimize the data before it is consumed by the front end. This data layer should be able to model the complexity of your data sources without shoehorning you into a pre-built schema of any sort. This not only removes the need to perform gymnastics on the front end, but it also ensures a separation of concerns in your experience stack.
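Unification and normalization in such a layer can be sketched as a mapping step that reconciles the differing field names of two legacy sources into one schema. The field names and records are invented for illustration.

```python
# Sketch of normalizing inconsistent records from two legacy sources into
# one shared schema before the front end consumes them. Names are invented.

def normalize(record: dict) -> dict:
    """Map differing legacy field names onto one consistent schema."""
    return {
        "id": record.get("id") or record.get("product_id"),
        "title": (record.get("title") or record.get("name") or "").strip().title(),
        "price": float(record.get("price") or record.get("cost") or 0),
    }


# Two systems of record storing the same kind of data under different fields:
legacy_a = {"product_id": "p1", "name": " tomato soup ", "cost": "3.99"}
legacy_b = {"id": "p2", "title": "Basil Pesto", "price": 5.5}

unified = [normalize(r) for r in (legacy_a, legacy_b)]
```

Doing this once, in the data layer, is what removes the need for every front end to repeat the same cleanup gymnastics.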

We provide an elegant solution to this problem with the DX Graph, and I invite you to learn more about our innovation.

About the Platform

Our Experience Orchestration Platform empowers digital teams to create personalized, omnichannel experiences in a composable tech stack. It offers two standalone products: the DX Graph and the DX Engine.

About the DX Graph

One of the biggest roadblocks in the journey toward composability is legacy systems and data silos. If your data is trapped in one or more data sources that are not accessible via APIs, the DX Graph allows you to create an API-first modernization layer on top of your existing systems of record. In addition, the DX Graph allows your digital teams to sync with these systems of record to create a 360-degree view of all your siloed content and data, where you can relate, search, enrich, and distribute your data to downstream applications and touchpoints.

About the DX Engine

With the shift towards headless and composable architectures, marketers have lost the ability to intuitively manage experiences; control over who sees what content, when, and where now falls squarely into the hands of developers. The DX Engine puts marketers back in control to activate personalized and intelligent experiences on all channels from a centralized, intuitive interface. The DX Engine connects to all of the backend content and data sources (including the DX Graph) through real-time APIs and abstracts out the complexity of these integrations for business and marketing teams.


Join Sana at CMS Kickoff 2023

January 17-18 in St. Petersburg, FL

Don't miss the only conference tailored for everyone working with content management systems, from beginners to experts! Featuring two packed days with a carefully curated mixture of talks, workshops, activities, world-class facilitators, thought provokers, speakers, and session leads – including Sana Remekie's presentation, "The Journey from Monolith to Composable."

Register today at the CMS Kickoff 2023 website