The Three Steps to Mitigate API Abuse in Cloud-hosted Applications
Posted: Wednesday, Jan 17


Organisational Use of APIs

Application Programming Interfaces (APIs) have become a strategic necessity for business because they aid flexibility and agility, yet many organisations remain cautious about using them because of concerns over the data that APIs expose. Those concerns are fanned by regular reports of IT security breaches that cause downtime, data loss and financial damage for victims.

At the same time, operating in more than one cloud has, for some organisations, introduced competitive tension into the supplier dynamic, preserving negotiation power and enabling them to pursue a better deal.

In instances where organisations restricted themselves to a single cloud, it may or may not have been by choice. Local regions or zones are often not available in all localities, and that constrains options. However, even single cloud environments have evolved to be complex, with multi-account structures that let more teams manage their cloud consumption themselves.

As organisations delved deeper into cloud integration, they had to optimise their application infrastructure for cloud operations. Traditional monolithic applications, characterised by self-contained codebases hosted on servers in a single data centre, were transformed: they were decomposed into smaller, cloud-hosted components known as microservices, which could be orchestrated to execute business logic and fulfil specific functions.

Breaking applications into microservices shifted individuals (and even teams) from overseeing an entire codebase to managing specific, smaller components. This specialisation let them refine existing code or build new features faster. Internal guardrails were also relaxed to facilitate faster release times and quicker time-to-value, generally on the assumption that a new feature would be brought up to compliance standards as soon as possible after being productionised.

The other major effect was on traffic flows: in particular, the number of potential entry and exit points for data traffic in and out of an application, and therefore the size of the organisation's attack surface. APIs allow microservices to 'communicate' with one another to form a single cohesive application, and the application in turn communicates with other machines and human users in the outside world, once again via APIs.
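To make the mechanics concrete, the sketch below shows one microservice calling another over a REST API. The service names, endpoint and payload are hypothetical, and the `requests` library is assumed to be installed; this is purely illustrative, not a reference to any specific product.

```python
# Minimal sketch: an "orders" microservice calling a hypothetical "inventory"
# microservice over its REST API. The hostname, endpoint and payload shape
# are illustrative only.
import requests  # third-party HTTP client (pip install requests)

INVENTORY_API = "https://inventory.internal.example.com/v1"

def reserve_stock(sku: str, quantity: int) -> bool:
    """Ask the inventory service to reserve stock before an order is accepted."""
    resp = requests.post(
        f"{INVENTORY_API}/reservations",
        json={"sku": sku, "quantity": quantity},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json().get("reserved", False)

if __name__ == "__main__":
    print(reserve_stock("SKU-123", 2))
```

Every call like this one is an API transaction crossing a service boundary, which is exactly the kind of traffic that multiplies as an application is decomposed.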

This has fundamentally changed the application landscape for enterprise and government organisations and, by association, the application security landscape. Many organisations and security teams are now playing catch-up.

 

Unpacking the Three-step Process

In the era of monolithic applications, there was one 'door' in and out of the corporate data centre through which all traffic flowed. Now, an application typically consists of numerous microservices, which may be hosted in the same cloud or across many clouds. Some microservices may be operated by third parties, handling a complex or regulated part of a process such as payments.

Every microservice acts as a 'door' into the application and, depending on configuration, potentially into the organisation's broader IT environment. Some of these 'doors' are undocumented, or their documentation is out of date. In our experience, few people internally have visibility into what kind of access all of those 'doors' grant, such as the extent to which they handle PII or payment data without compliance checks and balances in place.

To address this challenge, organisations are initiating comprehensive programs to assess how many such 'doors' exist in the application environment, and what can be done to reduce or remediate the associated risk.

This is being performed as a three-step process.

The first step is understanding how many APIs exist and where they are hosted. Organisations can use an API attack surface discovery tool to determine which APIs exist today and to catalogue them; an accurate, real-time inventory of APIs is the foundation for the steps that follow.
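As one illustrative slice of such an inventory, the sketch below enumerates REST APIs managed by AWS API Gateway using boto3. It assumes AWS credentials are already configured and covers only one gateway type; a real discovery tool would also surface APIs behind other gateways and load balancers, as well as shadow endpoints observed in traffic.

```python
# A minimal sketch of one slice of step one: listing REST APIs managed by
# AWS API Gateway to seed an inventory. Assumes boto3 is installed
# (pip install boto3) and AWS credentials are configured.
import boto3

def list_api_gateway_apis(region: str = "us-east-1") -> list[dict]:
    client = boto3.client("apigateway", region_name=region)
    apis = []
    # get_rest_apis is paginated, so walk every page of results.
    paginator = client.get_paginator("get_rest_apis")
    for page in paginator.paginate():
        for api in page["items"]:
            apis.append({
                "id": api["id"],
                "name": api["name"],
                "createdDate": str(api["createdDate"]),
            })
    return apis

if __name__ == "__main__":
    for api in list_api_gateway_apis():
        print(api)
```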

The second step involves asking: "What risk do the APIs pose to my environment?" Not all APIs are created equal – an interface that confirms a postcode for a business address naturally carries lower risk than an API that exposes driver's licence information or health records. A risk assessment is particularly important in organisations with hundreds or thousands of APIs, because it narrows down where to begin. Treating every API the same, regardless of its risk profile, would be prohibitively time-consuming, whereas identifying the higher-risk APIs gives an obvious starting point. The same risk criteria should be applied to new APIs; one way to do this is to incorporate checks into developers' CI/CD pipelines, which maintains visibility of the API landscape as it evolves.
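A minimal sketch of such a pipeline check is shown below: it scans an OpenAPI (JSON) document for property names that suggest sensitive data and fails the build when any are found, prompting a manual review. The keyword list, scoring and exit-code convention are illustrative assumptions rather than an established standard.

```python
# Sketch of a CI/CD check that flags higher-risk APIs by scanning an OpenAPI
# (Swagger) document for field names suggesting sensitive data.
import json
import sys

SENSITIVE_KEYWORDS = {"ssn", "licence", "license", "passport", "dob",
                      "card_number", "iban", "diagnosis", "medicare"}

def risk_score(openapi_path: str) -> int:
    with open(openapi_path) as f:
        spec = json.load(f)
    hits = 0

    # Walk the spec and count schema property names matching the keyword list.
    def walk(node):
        nonlocal hits
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "properties" and isinstance(value, dict):
                    hits += sum(1 for prop in value
                                if any(k in prop.lower() for k in SENSITIVE_KEYWORDS))
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(spec)
    return hits

if __name__ == "__main__":
    score = risk_score(sys.argv[1])
    print(f"sensitive-field hits: {score}")
    sys.exit(1 if score > 0 else 0)  # non-zero exit fails the pipeline for review
```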

The third and final step is to understand what normal and abnormal behaviour looks like for each API, and what business logic it exposes. This enables prompt detection and prevention of any abuse directed at those APIs. A Web Application Firewall (WAF) is often put forward as the answer, but its capabilities are insufficient for detecting and thwarting API abuse: abusive traffic can be syntactically correct and present as a legitimate-looking request. What matters is the intent behind the request, and the ability to stop it when that intent is malicious. A unified API protection platform is needed to handle this nuance.
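As a highly simplified illustration of baselining, the sketch below models normal per-endpoint request rates and flags large deviations. The three-standard-deviation threshold and the toy traffic figures are assumptions; production platforms model far richer signals, including request sequences, intent and the business logic behind each endpoint.

```python
# Sketch of baselining per-endpoint request rates and flagging deviations.
# Thresholds and sample data are illustrative assumptions only.
from statistics import mean, stdev

def build_baseline(history: dict[str, list[int]]) -> dict[str, tuple[float, float]]:
    """history maps endpoint -> requests-per-minute samples from normal traffic."""
    return {ep: (mean(samples), stdev(samples))
            for ep, samples in history.items() if len(samples) >= 2}

def is_abnormal(endpoint: str, current_rpm: int,
                baseline: dict[str, tuple[float, float]]) -> bool:
    if endpoint not in baseline:
        return True  # unknown endpoint: treat as abnormal until catalogued
    avg, sd = baseline[endpoint]
    return current_rpm > avg + 3 * max(sd, 1.0)

if __name__ == "__main__":
    history = {"/v1/login": [40, 55, 50, 45], "/v1/profile": [10, 12, 9, 11]}
    baseline = build_baseline(history)
    print(is_abnormal("/v1/login", 400, baseline))   # True: spike, possible credential stuffing
    print(is_abnormal("/v1/profile", 12, baseline))  # False: within normal range
```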

By following these three steps, organisations can proactively manage their API landscape and be more confident that it is secure and resilient against abuse and attacks.

Shreyans Mehta
Shreyans Mehta is an innovator in network security and holds several patents in the field. Before co-founding Cequence Security, he was Architect and Technical Director at Symantec, where he led the development of one of the most advanced network security platforms and intrusion prevention technologies based on real-time packet inspection and cloud-based big data analytics. That platform is responsible for detecting more than half of the billions of threats Symantec identifies every year. Shreyans has a Master's in Computer Science from the University of Southern California.