MARKET FORCES 2020
Banking technology for the new decade
We are undoubtedly at a point in history of unprecedented technological change. The fourth industrial revolution has thrust the entire financial services industry into equal parts opportunity and confusion, with social, economic, political and regulatory pressures creating challenges for which technology offers a seemingly infinite number of solutions.
On top of this, there are few bigger psychological milestones than the beginning of a new decade – a time at which both individuals and institutions take stock and set fresh objectives.
To meet this head on, we have produced our own perspective on the biggest market forces affecting financial services today, and the current and future ways in which we in the tech sector are tackling them.
The biggest influencing factors – which we believe are currently common to most if not all financial institutions – are a mix of customer wants and needs, regulatory and evolutionary demands, risks and threats, and even complications arising simply from the sheer pace of digital transformation.
To compound these challenges, many of them contradict one another. For example, improving security could mean compromising user experience, or commercial competitiveness could lead to regulatory risk. To navigate all these forces is by no means easy, but with strategic deployment of the right technology the answers can be found.
We have summarised our main considerations under six headings:
- Customer experience
Perhaps the most immediately obvious: often the first step when developing a tech strategy is to analyse what the customer wants. This is easier said than done in the age of instant gratification. Consumers have grown accustomed to a certain level of convenience, experience and engagement thanks to the likes of mobile retail and digital entertainment – and any banking provider that fails to keep pace with this expectation can expect to fall into irrelevance.
- Risk
This is a broad area, because different risks affect different stakeholders in the chain. At the front end, the customer is at risk from fraud, identity theft or cyber crime, while the institution must also consider factors such as asset risk: how can we use technology to make better-informed investment or underwriting decisions?
- Operational resilience
A hot topic in the industry at the moment, driven equally by the Bank of England and the regulator, this is another area where delivering on brand promise is critical. Providers of banking tech ecosystems have to manage all outsourced relationships to deliver robust, tested, proven compliant, regulated systems and platforms – and are expected to do so with zero failures.
- Omnichannel
An industry buzzword we have all heard before, omnichannel remains a concept that few are truly delivering. Multichannel financial services have been available for years, but there is a difference between offering the ability to conduct transactions via multiple channels, and having all those channels truly integrate and cooperate with one another. Here we look at how this can be done.
- Migrations
Mergers and acquisitions are continuing to increase in both loans and deposits markets, which means £billions in assets and legacy books are traded and migrated from one platform or organisation to another. First we need to truly understand the risks and costs associated with this, and then, as with other aspects of operational resilience, be able to carry it out with a zero per cent fail rate.
- XaaS
Software as a Service (SaaS) caught on in a big way for obvious reasons – not least because it allowed organisations to quickly build highly complex tech infrastructure with a controllable cost model. Now the “[X] as a Service” concept is broadening out to other tech requirements, such as cloud data management. There are pros and cons to consider when comparing building your own tech architecture versus spinning up an outsourced equivalent – and we acknowledge that much of this will come down to cost.
- Customer Experience
Perceptions of good customer experience (CX) in the financial services sector are inevitably shaped by what consumers encounter elsewhere in their lives. Global heavyweights like Amazon are constantly moving the goalposts in terms of ease and efficiency of service, as well as changing the way people shop entirely, through the introduction of transformational technologies such as voice-activated ordering.
With expectations rapidly evolving in almost every facet of consumer life, it is difficult for technologists in financial services to keep up and deliver a good customer experience at every stage in the journey. Just as the sector successfully conquered the switch from web to mobile apps, many of the leading banks then immediately began grappling with the introduction of voice recognition for security checks.
The use of voice biometrics for security checks and simple account accessibility is one thing, but when consumers are increasingly accustomed to on-demand service from their in-home virtual assistants, how soon will it be before we are using voice to make payments and transfers? If the future of mortgages, for example, can be gleaned through looking at the path forged by retail, will applying for mortgages through Alexa become the norm?
With the pace of change so rapid, banks, building societies and other institutions need to ensure that they and their software partners are equipped and ready to meet those expectations. Their APIs need to be able to retrieve and serve data in any way both current and future technology demands.
There are some paths carved out by retail, however, that we as a sector cannot follow. Amazon’s famous ‘1-click’ purchasing, for example, has brought about a model of instant gratification that simply cannot be replicated in a regulated environment. In this sense, customer expectations may be somewhat at odds with reality. The answer to the age-old question ‘why can’t my bank do it if Amazon can?’ boils down to two fundamental principles: security and trust.
A one-click mortgage application, however desirable it may seem in theory, would not in practice satisfy customer expectations. For that to work, the security level would be so low that the risk would be untenably high. In a regulated environment, where we are dealing with personal finances, customers need the security blanket of multiple authentications. We need to strike the right balance in friction levels – achieving just enough to reassure customers, but not so much that the process becomes a burden.
When a well-known retail bank brought its mortgages to market several years ago, it created a process that was so easy, customers actually ended up calling the bank to confirm – in disbelief that it was in fact that straightforward. This is an excellent example of where too little friction has unnerved customers, to the point where they go out of their way to break what the bank probably envisioned as the ideal customer journey.
In this sense, technology, or an inability to keep up with technology utilised in other sectors, is not the inhibitor in many banking processes. Instead, it is consumer trust – bolstered, of course, by the regulator and banks’ own legal and risk departments. Above all else, the customer expects to be able to trust their banking service provider, and often they do not trust technology.
Where we can look to meet and surpass customer expectations is in using customer data responsibly to better anticipate their wants and needs. Currently, the sector is not utilising data analytics to its fullest potential. While a one-click process will likely always be unattainable, we can use data more intelligently to cut out many of the steps in certain processes, and take a more proactive approach to reduce effort levels on the part of the customer – whilst still allowing them to feel in total control.
For example, customers expect access to the best deals on the market. Instead of them having to undertake the research themselves, and go through the various administrative hurdles of switching to a new lender or service provider, why can’t their bank proactively assume this role and go to them with a packaged-up switch option?
Banks are already sitting on the data to enable them to do this. They know how much their customers earn, what their outgoings are, and when their loan expires, so they are in an ideal position to harness that data and use it to create more seamless experiences. We work with our clients to make customer data exportable in an analytics-ready format, helping them to make better, quicker decisions that serve to enhance the experience of consumers, and ultimately prolong the customer lifecycle.
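To make the idea concrete, the Python sketch below shows how exported, analytics-ready customer data might be screened for proactive switch opportunities. The field names, thresholds and three-month window are our own illustrative assumptions, not any particular bank’s data model.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative only: field names, thresholds and the three-month window are
# assumptions for this sketch, not any particular bank's data model.
@dataclass
class CustomerSnapshot:
    customer_id: str
    monthly_income: float
    monthly_outgoings: float
    current_rate: float   # annual rate on the existing loan, e.g. 0.045
    loan_end_date: date   # when the current deal or fixed term expires

def switch_candidates(snapshots, best_market_rate, months_ahead=3):
    """Flag customers whose deal expires soon and who would plausibly benefit
    from a proactively packaged switch offer."""
    today = date.today()
    candidates = []
    for s in snapshots:
        months_to_expiry = ((s.loan_end_date.year - today.year) * 12
                            + (s.loan_end_date.month - today.month))
        affordable = s.monthly_income > s.monthly_outgoings
        better_deal = s.current_rate > best_market_rate
        if 0 <= months_to_expiry <= months_ahead and affordable and better_deal:
            candidates.append(s.customer_id)
    return candidates
```

In practice this logic would sit on top of properly governed, consented data and a live best-rate feed, but the principle holds: the bank already holds everything it needs to go to the customer with a packaged-up switch option.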
Customer experience within financial services should not just become a race to be the slickest, quickest, and most convenient. Instead, CX should be intelligently designed to be as efficient as possible, and strive to add real value to the customer, whilst preserving that most valuable of assets – consumer trust.
- Risk
As technology continues to evolve, so the market’s attitude towards risk has to advance alongside it. As an ever wider variety of mobile devices is used every day for banking and payments, the volume of data gathered, stored and handled online is growing exponentially. Unfortunately, the convenience and abundance of these tools comes with various risk factors.
Insider threats are still very prevalent – in 2018, more than half of companies surveyed had suffered from an internal attack in the previous year. These can be intentional or unintentional; malicious insiders take advantage of their access to inflict harm on an organisation, whilst negligent insiders are people who make errors or disregard policies and therefore place their organisation at risk. For example, easy-to-guess credentials such as “password”, “123456”, and “qwerty” are still shockingly widespread.
Infiltrators are external actors that obtain legitimate access credentials without authorisation. Identity is key; you want to be sure that the person attempting to access a system by presenting certain credentials is the correct person. Combatting the external threats of hacking, phishing, and ransomware is challenging.
Despite widespread coverage of organisations that fall victim to these threats, many companies still do not take the risk of cyber crime seriously enough. The financial services sector is guilty of this. A focus on both cost reduction and ease of use of systems has too often come at the expense of two-factor authentication, encryption and data access controls – including role-based access – putting data at risk. A better balance between cost, usability, and security needs to be found to combat this.
All companies should also be holding their digital suppliers to account. Currently, too few organisations ensure that suppliers hold the appropriate levels of certification, or ask for evidence that the organisation’s data will be secured. This proof could include penetration tests, ISO27001, SOC 2, or Cyber Essentials Plus.
Threats to money and data are not the only definition of risk, however. Asset valuation – particularly of the properties underlying mortgage books – is an area that urgently needs to be brought up to date, as inaccurate information can lead to poorly informed (and therefore higher risk) lending or insurance underwriting decisions. Many of the methods used to value properties are outdated, and don’t incorporate enough data to give a true, rounded picture of the asset’s performance.
For example, an emerging risk factor is energy efficiency, which could decimate the yield of a commercial property asset found to be in breach of new energy standards legislation. Banks need the tools and analytics to give the full picture in real time.
It’s not all bad news. Although new risks are being identified all the time, the rapid pace of innovation in the fintech space promises opportunities to pre-empt and mitigate problems as they emerge. Predictive analytics can detect and head off potential fraud, accurately identify credit risk, and potentially reduce impairment losses. Blockchain technologies will offer a way to preserve information in a traceable and unalterable audit trail. To achieve maximum potential in this space, established financial services companies must collaborate with fintechs. The latter may have unrivalled capabilities to innovate at pace, however they still may not be able to compete with the more robust, tried and tested risk management procedures of the established players.
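As a simple illustration of the audit trail principle, the following Python sketch chains each record to the previous one with a hash, so any retrospective alteration is detectable. It is deliberately minimal and assumes a single, centrally held trail; a full blockchain deployment would add distribution and consensus on top of the same idea.

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal sketch of a tamper-evident audit trail using chained hashes.
# A production ledger would add distribution, consensus and key management.
def append_entry(trail, event: dict) -> dict:
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    trail.append(record)
    return record

def verify_trail(trail) -> bool:
    """Any altered record breaks the chain of hashes that follows it."""
    for i, record in enumerate(trail):
        expected_prev = trail[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in record.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != expected_prev or record["hash"] != recomputed:
            return False
    return True
```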
The stakes are only going to get higher. The market will already be acutely aware of the regulatory, reputational, and financial threats, but collectively we have all the technology and expertise within our grasp to meet them.
- Operational Resilience
A renewed spotlight has fallen on the issue of operational resilience since December 2019, following the joint publication of a new policy summary and consultation papers by the Bank of England, the Prudential Regulation Authority (PRA), and the Financial Conduct Authority (FCA). Operational resilience can no longer languish as an afterthought in financial services, a problem addressed reactively in response to a disruptive incident rather than proactively.
The new proposals mean that from now on, firms and Financial Market Infrastructures (FMIs) will be responsible for ensuring their operations, including outsourced services, are resistant to disruption, and that the necessary investments and adjustments are made throughout the company and its supply chain to ensure that important business services can continue to be delivered in the event of an incident. By cultivating greater resilience in the financial sector, the regulatory bodies hope to prevent future damage to consumer confidence or market integrity, and promote consistent stability throughout financial services.
Four main steps are expected to be undertaken by relevant companies under the proposed policy: important business services must be identified; impact tolerances must be set for each of these services, quantifying the maximum tolerable level of disruption; the people, processes, and resources necessary for delivery of these services must be identified; and actions must be taken to ensure that the firm or FMI can remain inside impact tolerances in severe hypothetical scenarios.
Of course, prudent companies and service providers in the financial sector already have processes in place to ensure their and their clients’ operations are resilient, in order to protect their reputation, their business, and, most importantly, their customers. Though it sounds deceptively simple, it is ultimately appropriate management and control that cements ongoing operational resilience, particularly with regards to change. If, prior to implementing new technologies or processes, a firm has gone through multiple levels of testing and several dry-runs, and established a series of controls to follow implementation, then its resistance to disruption is greatly increased. Organisations should continue to use the abundance of available materials to assist in this, such as ITIL and PRINCE2, and learn from the best practice of those who have consistently demonstrated operational resilience.
The necessity for thorough management and control is intensified with regards to outsourcing, particularly given the regulator’s ongoing consultation on firms’ responsibility for the resilience of their outsourced services. Some institutions have historically struggled to manage the risks associated with outsourcing, and furthermore run the danger of diluting their internal skills and knowledge as operations are undertaken externally. Outsourcing is no excuse to relax control over services. While firms should only partner with reputable, proven service providers that have processes in place for managing their operations, every effort must be made in-house to ensure that outsourcing does not result in a decline of knowledge and management.
However, as enterprise continues to evolve, the complexity of its landscape is ever-increasing. There is a greater push towards digitisation and a greater reliance on “X-as-a-Service”, and overseeing all of this entirely through human managers is growing steadily more difficult.
It is for this reason that the critical focus in guaranteeing operational resilience, and thereby full regulatory compliance, should be automation. With the enormous advances made in AI and machine learning technologies in recent years, the prospect of increasing automation has never been quite so comfortable.
The unfortunate truth is that human error is one of the most significant contributors to operational failure, if not the most significant. The breadth of causes of human error, from lack of attention, to inadequate training, to extenuating personal circumstances, is immense. Given the ever-increasing complexity of enterprise landscapes, the points at which failure is liable to occur are only multiplying, and it is through automation that human touchpoints can be scaled back and operational risk reduced.
The regulators’ requirement for the setting of impact tolerances is something ideally placed for the introduction of automated processes: when disruption occurs and the tolerances are close to being breached, an issue can be identified and addressed by an automated process far quicker than by a human actor. Stress and uncertainty can be reduced with the knowledge that, once a firm is made aware of an issue, mitigating steps have already been taken.
In an ideal enterprise, automated processes could become the dominant aspect of business operation, with automation orchestrating these increasingly complex environments, new solutions built and released through automated processes, and monitoring and operation similarly automated. If a new feature is automatically released into an environment to enhance customer experience, automated processes should be in place to remove it from the environment should something go wrong.
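A minimal sketch of what such an automated guard might look like is shown below, in Python. The service name, metric, warning ratio and rollback hook are all hypothetical assumptions of ours, not a prescribed implementation of the regulators’ proposals; the point is simply that the mitigation is wired in before a human even needs to be notified.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical illustration only: names, thresholds and the rollback hook are
# assumptions, not a prescribed implementation of the PRA/FCA proposals.
@dataclass
class ImpactTolerance:
    service: str                 # important business service, e.g. "payments"
    metric: str                  # e.g. "minutes_unavailable"
    maximum: float               # maximum tolerable level of disruption
    warning_ratio: float = 0.8   # act before the tolerance is actually breached

def check_tolerance(tolerance: ImpactTolerance,
                    observed: float,
                    rollback: Callable[[], None],
                    alert: Callable[[str], None]) -> None:
    """Automated guard: as disruption approaches the impact tolerance,
    raise an alert and trigger mitigation (e.g. withdraw a new feature)."""
    if observed >= tolerance.maximum * tolerance.warning_ratio:
        alert(f"{tolerance.service}: {observed} {tolerance.metric} "
              f"against a tolerance of {tolerance.maximum}")
        rollback()  # e.g. automatically disable the feature flag just released

# Example wiring with stand-in functions
payments = ImpactTolerance("payments", "minutes_unavailable", maximum=120)
check_tolerance(payments, observed=100,
                rollback=lambda: print("feature flag disabled"),
                alert=print)
```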
Though its advent might currently be somewhat distant, end-to-end automation is undoubtedly the future in ensuring operational resilience, removing critical human touchpoints and key person dependencies, and making organic and immediate response to disruption as much an intrinsic part of the operation as the delivery process itself. As with all significant operational changes, the move towards automation must be carefully project managed and implemented with appropriate contingencies, but the sooner automation begins to become a central, indispensable aspect of financial services operations, the more realistic fully end-to-end automation becomes.
- Omnichannel
Best practice in omnichannel strategy can usually be found in the retail and entertainment sectors today, and it is generally acknowledged that financial services still lag a long way behind. Customers can now expect to shop via mobile, desktop or emerging channels such as voice-recognising smart home devices – and they can receive the same experience, products and services, personalised and targeted to them, seamlessly and regardless of how they choose to engage with the retailer.
There are reasons why the financial services sector, even retail banking, has been slower to adopt such modern customer engagement – mainly due to the risk-averse mindset that is understandably endemic in our industry.
Multichannel – the foundation and starting point for omnichannel – is already widely in use here, but the difference is that the channels work relatively independently of one another. This means that a customer can, for example, apply for a loan via whichever channel they prefer – be that via a PC, a mobile device, telephone or in-branch. But the main limitation to this way of banking is that should they encounter a question or interruption in the process, they cannot switch channels without a certain amount of repetition, such as having to repeat the identity verification stage.
Omnichannel, by contrast, makes full use of Open Banking technology to allow better automation, analytics and predictive algorithms behind the scenes to give the customer a far more seamless experience at the front end. Every channel, including the workflow dashboards used by staff in-branch, is in constant real-time communication with the others, so the customer gets the desired outcome with no effort or frustration. It is even possible to predict certain behaviours, decisions or risks in order to pre-empt and tailor products and services to the individual, taking their engagement to the next level.
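The difference can be expressed in a few lines of code. The Python sketch below assumes a shared, real-time journey store that every channel reads and writes; the field names and channels are illustrative only, not a product schema.

```python
from datetime import datetime, timezone

# Illustrative only: in practice this would be a shared, real-time store
# accessible to every channel, not an in-memory dictionary.
journeys = {}

def start_application(customer_id: str, channel: str, product: str) -> dict:
    journeys[customer_id] = {
        "product": product,
        "identity_verified": False,
        "completed_steps": [],
        "last_channel": channel,
        "updated": datetime.now(timezone.utc).isoformat(),
    }
    return journeys[customer_id]

def resume_application(customer_id: str, channel: str) -> dict:
    """Any channel (mobile, telephone, in-branch dashboard) picks up the same
    journey, so identity verification and completed steps are not repeated."""
    journey = journeys[customer_id]
    journey["last_channel"] = channel
    journey["updated"] = datetime.now(timezone.utc).isoformat()
    return journey

# Example: verified on mobile, continued in-branch without re-verification
j = start_application("cust-001", "mobile", "personal_loan")
j["identity_verified"] = True
j["completed_steps"].append("affordability_check")
resumed = resume_application("cust-001", "branch_dashboard")
assert resumed["identity_verified"]  # no need to repeat the ID checks
```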
As we have explored in the customer experience section, this demand is rapidly becoming the norm.
Retail banking is getting better at meeting this challenge, but so far only within isolated product areas. One of the end goals of Open Banking is to offer a genuinely holistic view for every customer, with a user experience that is singular and consistent for the individual, regardless of the nature of their transaction.
The technology exists to bring together all of these existing systems and products, but there will be a requirement for one central technology provider to do this without having to re-engineer those elements that are already working well.
Tech experts tend to agree that in order to deliver this, we need to integrate three layers: the core banking platform (often the institution’s pre-existing engine); the middleware, which is where every external third party connects, from mobile and paytech to identity and credit checking; and the front end, where the customer experience plays out, ideally in whichever way the individual prefers.
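One way to picture the integration is as three interfaces that a central provider stitches together, as in the hedged Python sketch below. The class and method names are purely illustrative and do not refer to any real product; they simply mark where each responsibility sits.

```python
from abc import ABC, abstractmethod

# Illustrative layer boundaries only; these names are assumptions for the sketch.
class CoreBankingPlatform(ABC):
    """The engine of record: accounts, balances, products."""
    @abstractmethod
    def get_account(self, account_id: str) -> dict: ...

class Middleware(ABC):
    """Where external third parties connect: paytech, identity, credit checking."""
    @abstractmethod
    def verify_identity(self, customer_id: str) -> bool: ...
    @abstractmethod
    def credit_check(self, customer_id: str) -> int: ...

class FrontEnd(ABC):
    """Where the customer experience plays out, in whichever channel they prefer."""
    @abstractmethod
    def show(self, view: dict) -> None: ...

def loan_decision_journey(core: CoreBankingPlatform, mid: Middleware,
                          front: FrontEnd, customer_id: str,
                          account_id: str) -> None:
    """A single journey orchestrated across all three layers, without
    re-engineering the elements that already work well."""
    if not mid.verify_identity(customer_id):
        front.show({"status": "identity check failed"})
        return
    account = core.get_account(account_id)
    score = mid.credit_check(customer_id)
    front.show({"status": "decision ready",
                "balance": account.get("balance"),
                "score": score})
```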
Key to delivering this will be to ensure that every technological requirement – from data transmission, hosting and analytics to transactional and infrastructure security – happens flawlessly behind the scenes with just the right amount of customer awareness. Just as with other aspects of customer experience, they need to know that it’s happening and they can trust it, but they don’t want it to slow them down.
- Migrations
Mergers and acquisitions activity is showing no sign of slowing down within both lending and deposits markets. Wherever this happens, entire portfolios of assets – often worth multiple £billions – such as loan books or customer accounts are traded from platform to platform, organisation to organisation. This of course brings its own set of challenges.
Undoubtedly, fintech has unlocked the power to manage bigger migrations, more quickly than ever before. At Phoebus we have been undertaking large-scale migrations since long before the term fintech was coined, and we have developed a reputation for porting books from multiple third party systems into the Phoebus application, including the in-house systems of clients.
Today when handling migrations at a large scale, which may regularly be thousands of individual accounts or mortgages, the inherent risk comes simply from the sheer amount of data being transferred – before we even factor in compatibility issues between the sending and receiving systems.
Mitigation against security breaches at every step of the process is vital. As we explore in Risk, the market requires the highest possible protection from cyber threats, ranging from dark web-derived preventative software to cyber insurance policies, backstops and disaster recovery processes.
Assuming all these measures and protections are in place, the major consideration around migrations is cost. Speed and efficiency are essential, and these depend on a range of technological factors: the quality of the APIs, the reliability and speed of connectivity, hosting, and processing power of the hardware used, all of which come with associated costs.
The customer, the banks and the regulator will all agree that the migration must go right first time. Industry standard should be a zero per cent fail rate: if the trigger is pulled at midnight, the customer should be able to manage their account in the morning with no interruption in service.
Another consideration is whether clients want to migrate on a self-serve basis. Over the past five years, we have developed our capabilities so that banking organisations or servicers can manage the loan migrations themselves, should they choose to do so. The migration capability can be used in various scenarios: implementation projects, acquisitions, initiatives to bring servicing in-house, mass migrations when a loan servicer wins new business, or the migration of individual accounts when connecting to a loan origination platform.
A number of key steps enable this self-serve capability whilst preserving the zero per cent fail rate. Firstly, validating the format of migration import files: ensuring all necessary “columns” are present, that numeric fields contain numeric data, date fields contain dates, mandatory fields have values, and so on. Then, a step ensures that the migration import file is logically correct, in that the opening balance plus the transaction detail equals the closing balance. Finally, a check ensures that all necessary activities have taken place in order for the Phoebus platform to accept the incoming data. Should any errors occur during the import process, the information is provided in an understandable format so that the data can be corrected within the import. A recent migration into our system included 8,000 accounts and approximately 11 million transactions, with zero errors.
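To make those steps tangible, here is a simplified Python sketch of the kind of checks described above, applied to a CSV import. The column names and file layout are illustrative assumptions for the example, not the actual Phoebus import specification.

```python
import csv
from datetime import datetime
from decimal import Decimal, InvalidOperation

# Illustrative sketch of the validation steps described above; the column
# names and CSV layout are assumptions, not the real import specification.
REQUIRED_COLUMNS = {"account_id", "opening_balance", "closing_balance",
                    "transaction_date", "transaction_amount"}

def validate_row(row: dict, line_no: int) -> list:
    """Format checks for one row: mandatory values, numeric and date fields."""
    errors = []
    for field in ("opening_balance", "closing_balance", "transaction_amount"):
        try:
            Decimal(row.get(field) or "")
        except InvalidOperation:
            errors.append(f"line {line_no}: '{field}' is not numeric")
    try:
        datetime.strptime(row.get("transaction_date") or "", "%Y-%m-%d")
    except ValueError:
        errors.append(f"line {line_no}: 'transaction_date' is not a valid date")
    if not row.get("account_id"):
        errors.append(f"line {line_no}: mandatory field 'account_id' is empty")
    return errors

def validate_import(path: str) -> list:
    """Return a readable list of errors; an empty list means the file passes
    both the format checks and the balance reconciliation."""
    errors = []
    balances = {}  # account_id -> [opening, closing, running transaction total]
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            return [f"missing columns: {', '.join(sorted(missing))}"]
        for line_no, row in enumerate(reader, start=2):
            row_errors = validate_row(row, line_no)
            errors.extend(row_errors)
            if row_errors:
                continue
            acc = balances.setdefault(row["account_id"],
                                      [Decimal(row["opening_balance"]),
                                       Decimal(row["closing_balance"]),
                                       Decimal("0")])
            acc[2] += Decimal(row["transaction_amount"])
    # Logical check: opening balance plus the transaction detail must equal
    # the closing balance for every account in the file.
    for account_id, (opening, closing, total) in balances.items():
        if opening + total != closing:
            errors.append(f"account {account_id}: opening {opening} + "
                          f"transactions {total} != closing {closing}")
    return errors
```

Any errors returned in this way can be corrected within the import and the file re-run, which is how the zero per cent fail rate is preserved on a self-serve basis.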
We will continue to develop these capabilities to meet the expanding demands of the markets. Whilst there is a need for continuous technological advancement to address these needs in the new decade, it is important that parallel efforts are made to fully prepare for the risks and costs.
- XaaS
Software as a Service has been growing in popularity in business technology for as long as the Internet itself has existed, and the reasons are not hard to see.
If an organisation can spin up an entire tech infrastructure and switch it on overnight, already tested and compliant, for a fixed cost model, the benefits are clear.
There are however pros and cons, and there are still instances where a business might prefer to build and manage some of its own proprietary specialist systems in-house. In the age of Open Banking, often a hybrid model proves most appropriate – with all the attendant requirements of collaboration, compatibility and open APIs.
Generally speaking, the “[X] as a Service” (XaaS) model has really taken off in the past decade, and has grown to take in more and more aspects of an enterprise’s technological requirements, from mobile back ends to virtual desktops and the vast array of dashboard apps employees use every day. Almost every business today will use an element of XaaS as part of its tech infrastructure, because of its scalability, speed, reliability and the cost savings the cloud brings with it.
Cloud platforms such as Amazon Web Services and Microsoft’s Azure are the same type of virtual infrastructure used by brands like Netflix and AirBnB, and this scalable approach is seen as more reliable than physical hardware, which can overload or fail. But cloud isn’t just appropriate for the world’s biggest tech firms; it can also be the most effective way of managing even small quantities of data.
Many organisations might wrongly assume that their data will be more secure if held in an internal system, and less costly to manage. But where cost is a major determining factor, the cloud needs no physical infrastructure of the organisation’s own to accommodate growth, which cuts down on storage overheads and data centre maintenance, and results in lower end costs for the customer.
XaaS means that an organisation does not require the capital or human investment to build any software or hardware capability itself, nor the accompanying investment in the associated support, maintenance and end-of-life management.
XaaS models (should) allow flexible scalability, whereas the “traditional” model requires designing for the worst case. For example, with a traditional model, if a new product launch is estimated to drive a tenfold increase in business, then the organisation’s applications and associated infrastructure would require an up-front investment to ensure all the solutions are in place to manage at least that tenfold upsurge in usage.
XaaS, on the other hand, is designed to be able to scale for the peaks, and return to business-as-usual levels for the remainder of the time. Importantly, the enterprise pays only for its usage, on a consumption-based model.
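A back-of-the-envelope comparison illustrates the point. All the figures in the Python sketch below are invented purely to show the shape of the calculation; real pricing varies by provider and workload.

```python
# Back-of-the-envelope comparison with made-up figures, purely to illustrate
# the consumption-based argument; real pricing varies widely by provider.
BASELINE_UNITS = 100       # normal daily capacity units required
PEAK_MULTIPLIER = 10       # e.g. a product launch drives a tenfold upsurge
PEAK_DAYS = 14             # days per year the peak capacity is actually needed
COST_PER_UNIT_DAY = 1.0    # hypothetical cost of one capacity unit per day

# Traditional model: provision up front for the worst case, all year round
traditional_cost = BASELINE_UNITS * PEAK_MULTIPLIER * 365 * COST_PER_UNIT_DAY

# XaaS model: pay for baseline most of the year, peak capacity only when used
xaas_cost = (BASELINE_UNITS * (365 - PEAK_DAYS)
             + BASELINE_UNITS * PEAK_MULTIPLIER * PEAK_DAYS) * COST_PER_UNIT_DAY

print(f"traditional: {traditional_cost:,.0f}  xaas: {xaas_cost:,.0f}")
# With these assumptions the always-on, worst-case estate costs several times
# more than the scale-on-demand equivalent.
```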
We acknowledge that many stakeholders, including regulators, perceive risks with using XaaS models. After all, the storage and transmission of data in the cloud does of course mean relinquishing control of it outside our own closed environment. However, all perceived and potential risks are comparative – here are some example questions that a bank might face daily:
- Is a cloud-based XaaS model more or less stable than my current five-year-old servers in a cupboard on the third floor of my office block?
- Is an AWS datacenter with multiple power supplies and countless network connections more or less resilient than my current datacenter with a dual power supply (tested once a year) and a dual network connection?
- Is a Microsoft Azure service which is automatically patched every month, and upgraded well before end-of-life, more or less resilient than my environment which is managed by a single admin, when he has the time to do so?
- Does an XaaS platform which is “PCI-DSS, HIPAA/HITECH, FedRAMP, GDPR, FIPS 140-2, and NIST 800-171 compliant” give our customers more or less comfort than a standard ISO27001 platform?
Of course we recognise that there will always be aspects of an institution’s ever more complex digital real estate that they will insist on creating and owning themselves, or plugging in from trusted third party vendors. But once again this is where the concept of the ecosystem definitively comes into its own.