Organisational Use of APIs
Application Programming Interfaces (APIs) have become a strategic necessity for businesses because they enable flexibility and agility, yet many organisations remain cautious about using them due to concerns about the data that APIs expose. Those concerns are fanned by regular reports of IT security breaches that cause downtime, data loss and financial damage for victims.
At the same time, for some organisations, operating in more than one cloud has introduced competitive tension into the supplier relationship, helping to preserve negotiating power and enabling them to pursue a better deal.
In instances where organisations restricted themselves to a single cloud, it may or may not have been by choice. Cloud regions or availability zones are not offered in every locality, and that constrains options. However, even single-cloud environments have evolved to be complex, with multi-account structures that let more teams manage their cloud consumption themselves.
As organisations delved deeper into cloud adoption, they had to optimise their application infrastructure for cloud operations. Traditional monolithic applications, characterised by self-contained codebases hosted on servers in a single data centre, underwent a transformation: they were deconstructed into smaller, cloud-hosted components known as microservices, which could be orchestrated to execute business logic and fulfil specific functions.
Breaking applications into microservices shifted individuals (and even teams) from overseeing the entire codebase to managing specific, smaller components. This specialisation enabled them to accelerate the process of refining existing code or building new features. Internal guardrails were also relaxed to further facilitate faster release times and quicker time-to-value, though generally on the assumption that a new feature would be brought up to compliance standards as soon as possible after being productionised.
The other major effect was on traffic flows: particularly the number of potential entry and exit points for data traffic in and out of an application, and therefore the size of the organisation’s attack surface. Application programming interfaces or APIs allow microservices to ‘communicate’ with one another to form a single cohesive application. The application then communicates with other machines or human users in the outside world, once again employing APIs.
This has fundamentally changed the application landscape for enterprise and government organisations and, by association, the application security landscape. Many organisations and security teams are now playing catch-up.
Unpacking the Three-step Process
In the era of monolithic applications, there was one ‘door’ in and out of the corporate data centre through which all traffic flowed. Now, an application typically consists of numerous microservices, which may be hosted in the same cloud or across many clouds. Some microservices may be third-party operated, handling a complex or regulated part of a process such as payments.
Every microservice acts as a ‘door’ into the application, and depending on configuration, potentially the organisation’s broader IT environment. Some of these ‘doors’ are undocumented, or the documentation is out-of-date. In our experience, few people internally have visibility into what kind of access all of those ‘doors’ grant, such as the extent to which they handle PII or payment data without compliance checks and balances being in place.
To address this challenge, organisations are initiating comprehensive programs to assess the number of such ‘doors’ existing in the application environment, and what can be done to reduce or remediate associated risk.
This is being performed as a three-step process.
The first step is understanding how many APIs exist and where they are hosted. Organisations can use an API attack surface discovery tool to determine what APIs exist today and to catalogue them; a real-time inventory of APIs underpins every subsequent step.
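The cataloguing part of this step can be illustrated with a small sketch. The example below flattens an OpenAPI 3.x-style specification into inventory records; the `catalogue_apis` helper and its output format are illustrative, not the interface of any particular discovery tool, and the endpoints shown are hypothetical.

```python
# Minimal sketch: build an API inventory from an OpenAPI-style spec.
# The spec layout follows OpenAPI 3.x; the record format is illustrative.

def catalogue_apis(spec: dict) -> list[dict]:
    """Flatten an OpenAPI spec into a list of inventory records."""
    inventory = []
    base_url = spec.get("servers", [{}])[0].get("url", "")
    for path, methods in spec.get("paths", {}).items():
        for method, details in methods.items():
            inventory.append({
                "endpoint": f"{base_url}{path}",
                "method": method.upper(),
                # An empty summary is a hint at an undocumented 'door'
                "summary": details.get("summary", "undocumented"),
            })
    return inventory

spec = {
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/addresses/validate": {"get": {"summary": "Validate a postcode"}},
        "/drivers/licence": {"get": {}},  # no summary: undocumented
    },
}

for record in catalogue_apis(spec):
    print(record)
```

In practice a discovery tool would also surface APIs that have no specification at all, by observing live traffic; the point of the sketch is only the shape of the resulting inventory.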
The second step involves asking: “What risk do the APIs pose to my environment?” Not all APIs are created equal – an interface that confirms a postcode for a business address will naturally carry lower risk than an API that exposes driver’s licence information or health records. Performing a risk assessment is particularly important in organisations with hundreds or thousands of APIs, because it narrows down where to begin. It would be prohibitively time-consuming to treat every API the same regardless of its risk profile, whereas identifying the higher-risk APIs provides an obvious starting point. Risk criteria should also be applied to new APIs. One way to do this is to incorporate checks into developers’ CI/CD pipelines, which ensures continuous visibility and assessment as the API landscape evolves.
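A CI/CD risk check along these lines could be sketched as follows. The sensitivity categories, scores and threshold are hypothetical; real criteria would come from the organisation’s own data classification policy, and the `security_reviewed` flag stands in for whatever review workflow the organisation uses.

```python
# Illustrative risk-scoring gate for a CI/CD pipeline. Categories,
# scores and the threshold are assumptions, not a standard.

SENSITIVITY_SCORES = {
    "public": 1,       # e.g. a postcode lookup
    "personal": 3,     # e.g. names and addresses
    "regulated": 5,    # e.g. driver's licences, health records
}

def risk_score(api: dict) -> int:
    """Score an API by the most sensitive data category it exposes."""
    return max((SENSITIVITY_SCORES.get(c, 0)
                for c in api.get("data_categories", [])), default=0)

def ci_gate(apis: list[dict], threshold: int = 4) -> list[str]:
    """Return names of APIs whose risk meets the threshold but which
    have not been through a security review -- these fail the build."""
    return [
        api["name"] for api in apis
        if risk_score(api) >= threshold
        and not api.get("security_reviewed", False)
    ]

apis = [
    {"name": "postcode-check", "data_categories": ["public"]},
    {"name": "licence-lookup", "data_categories": ["personal", "regulated"]},
]
print(ci_gate(apis))  # the unreviewed high-risk API is flagged
```

Running such a gate on every merge keeps the risk assessment continuous rather than a one-off exercise, which is the point the step makes.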
The third and final step is to understand what normal and abnormal behaviour looks like for each API, and what business logic it exposes. This enables prompt detection and prevention of any abuse directed at these APIs. A Web Application Firewall (WAF) is often put forward as the answer, but its capabilities are insufficient for detecting and thwarting API abuse: abusive traffic can be syntactically correct and present as a legitimate-looking request. What matters is the intent behind the request and, when that intent is malicious, the ability to stop it. A unified API protection platform is needed to handle this nuance.
By following these three steps, organisations can proactively manage their API landscape and be more confident that it is secure and resilient against abuse and attacks.