Legacy technology is a perennial debate. Recently we have seen high-profile privacy intrusions (Cambridge Analytica and Facebook) and system malfunctions (the mass flight cancellations in Europe in March 2018), yet we keep amassing ever more data into increasingly large black boxes (how much data did Equifax hold again?). Decentralized and distributed systems have become topical and popular. So when will the supposedly unbreakable black box finally make way for new systems?

Like any large-scale shift, it’s far easier to see the direction than to predict its timing with any accuracy. Legacy, monolithic, colossal systems are clearly not the future, and more cracks in their foundations will become public in the months and years to come. However frail these systems may be, a large-scale shift of any system requires not only years of R&D but also years of validation. But it is happening.

Short of directly overhauling their information systems, we see organizations launching greenfield projects built on modern technologies, often a step removed from their core infrastructure. Look, for instance, at what Goldman Sachs did with Marcus, which its technology team has been quite open about. They then rolled the Marcus platform into GS Bank, but only after taking it to market as a relative standalone. This makes sense: why weigh a novel approach down with archaic systems at the start, when the value is minimal, if you can simply link it back in later, should it actually prove valuable?

Yet the more greenfield projects an organization launches, the more crucial the question of interconnectedness and interoperability becomes. Tying these new projects, products, and services together, we see a new infrastructure emerge. Built on API-economy principles, this kind of connectivity can and should be native and allow a modular architecture, which ends up being future-proof thanks to its malleability and high specialization (switch out one call for another and you can swap out an entire product).
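As a rough illustration of that swap-ability, here is a minimal sketch in TypeScript – with entirely hypothetical names and rates, not any particular bank’s API – of products sitting behind a shared interface:

```typescript
// Hypothetical sketch: each product sits behind the same narrow interface,
// so "switching out one call for another" swaps an entire product.
interface LoanProvider {
  quote(amountUsd: number): Promise<{ ratePct: number; provider: string }>;
}

// Two interchangeable implementations – say, an in-house engine and a partner API.
const inHouseEngine: LoanProvider = {
  quote: async (amountUsd) => ({
    ratePct: amountUsd > 1_000_000 ? 6.1 : 6.5,
    provider: "in-house",
  }),
};

const partnerApi: LoanProvider = {
  quote: async (amountUsd) => ({
    ratePct: amountUsd > 1_000_000 ? 5.8 : 5.9,
    provider: "partner",
  }),
};

// The consuming service depends only on the interface, never on a concrete provider.
async function priceLoan(provider: LoanProvider, amountUsd: number) {
  return provider.quote(amountUsd);
}

// Swapping products is a one-argument change, not a core-system rewrite.
priceLoan(partnerApi, 250_000).then((offer) => console.log(offer));
```

The point is not the toy rates; it is that the core never has to know which module answers the call, and that is what makes the architecture malleable in the first place.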

Modularity and longer-term future-proofing are not limited to technology; ultimately, it becomes a question of delivering the best service to the end client. Collaboration between different parties to achieve certain aims, such as syndicating a larger loan to diversify risk exposure, gives the client what they are looking for and creates more value.

This isn’t without its challenges in a culture that prizes control over client relationships, yet the thinking has to shift toward being purely client-centric. In the case of the syndicated loan, if our bank, in collaboration with various lenders, can provide the client with more value, does that not reflect well on our relationship with the client? Does the client actually care that others are part of the syndicate, and if so, does that lessen the perceived value? I would argue that syndication is largely a technical detail of the loan from the client’s perspective, and may even reinforce the value created.

Creating this new infrastructure also raises an interesting question of timing. At what point is the new infrastructure proven enough that the legacy system can be wound down? Surely it’s somewhere under 50% of usage, but is it 15% or 25%? How long is too long to wait?

Winding down a successful system is also a political question: why fix it when it isn’t broken? But history shows that this kind of thinking, and the failure to anticipate change, bode dramatically poorly for organizations in innovative or disrupted industries. The financial services industry is clearly in flux.

Crises present themselves as great opportunities, or at least forced ones, and with the future looking full of crises, financial services organizations face an ever-growing list of openings to leverage. There is, of course, the possibility that old service lines and offerings simply die out, unused or unpopular, after a mass exodus of users, and maybe that is relevant in a few cases, but larger-scale change has to be managed.

With financial services systems serving users in the millions and handling capital in the billions (and more), taking down a system in any manner means major disruption. That disruption may be unavoidable, so a proactive path at least gives you the chance to determine when it takes place. In private meetings, I’ve even been told of systems that are expected to “go down in 2018 – we just don’t know when!” That is an interesting predicament to try to manage. But even a proactive strategy is not without its issues: imagine telling the board that you intend to take down a multi-billion-dollar business for “maybe a few hours.” How will that go?

We have to conclude that change in a market of interconnected legacy systems is messy, and organic change even more so.

While we wait for these organizations’ change agents to act when presented with the opportunity, we can also be curious about the new silos we are building. All around the world, we are constantly amassing private and sensitive data behind a central agent’s control. The data sets we can now create and generate are truly amazing in their technical nature and terrifying in their implications – personal identification data, metadata, smart connected devices in our pockets and homes; the list goes on. To think that central agents and their systems are immune to failure or attack is naive, and time alone will show whether their integrity holds.

With all the data we have and our seeming determination to store it in centralized systems, it does make one wonder how well all our change agents are lined up. It also has one hoping they have a proactive plan in motion.

 

This post originally appeared on MEDICI.