Introduction
Plain English is becoming increasingly vital in securing APIs against coding errors and business logic abuse, especially as exploitation of APIs by threat actors continues to grow. The ongoing shortage of cybersecurity personnel, and the lack of security expertise in some development teams, have underscored the importance of making API security accessible to non-technical stakeholders.
The exponential growth in the number of APIs has made it difficult for organisations to keep track of them effectively. Development teams, often dispersed and lacking centralised security oversight, own a diverse array of API categories, including managed, unmanaged, shadow, zombie, third-party, internal, and external APIs. Communicating this complexity in Plain English has proven essential for executive understanding and business risk assessment.
However, the need for Plain English extends beyond executive understanding; with the emergence of generative artificial intelligence (GenAI) technology, it has started to influence how security tests for APIs are specified.
Shifting Left for Security Assurance
To ensure API security, development teams are adopting a “shift-left” approach, integrating security thinking and testing into all stages of the API development lifecycle. This involves incorporating security into the design phase and frequent testing to catch coding errors before APIs go into production. Yet, the shortage of cybersecurity expertise in development teams poses a challenge.
Automated Testing with GenAI
Automated API security testing is usually conducted via static and dynamic application security testing – SAST and DAST.
SAST is used early in development to analyse the source code for vulnerabilities. It integrates directly into IDEs, bug-tracking systems, and CI/CD tools to detect common implementation issues such as erroneously exposed endpoints, parameters, error codes, and messages. DAST, by contrast, has no access to the source code. Instead, it scans actively running applications for vulnerabilities, whether the application is running in a production or non-production environment. DAST looks to discover issues from the user/attacker angle and tends to be used in later stages of the development process.
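The outside-in perspective DAST takes can be illustrated with a minimal sketch: examine a live response and flag anything that leaks implementation detail, such as a stack trace or database error, in a 5xx body. The leak markers below are an illustrative, non-exhaustive assumption, not the heuristics of any real scanner, and the check is written as a pure function so it can be exercised without a running target.

```python
import re

# Markers that commonly indicate an API is leaking implementation
# detail in its error responses (illustrative list, not exhaustive).
LEAK_MARKERS = [
    re.compile(r"Traceback \(most recent call last\)"),  # Python stack trace
    re.compile(r"at [\w.$]+\(\w+\.java:\d+\)"),          # Java stack frame
    re.compile(r"ORA-\d{5}"),                            # Oracle error code
    re.compile(r"SQLSTATE\[\w+\]"),                      # SQL driver error
]

def flags_information_leak(status_code: int, body: str) -> bool:
    """Return True if an HTTP response looks like it exposes internals.

    A DAST tool would apply a check like this to every live response;
    extracting it as a pure function keeps the heuristic testable.
    """
    return status_code >= 500 and any(m.search(body) for m in LEAK_MARKERS)
```

A real scanner would drive checks like this across every discovered endpoint with malformed and hostile inputs; the per-response heuristic stays the same.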
While both are well-established for API testing, they don't necessarily offer the complete coverage necessary to detect all possible security threats. They can also be challenging for developers to use.
The complex workflows associated with APIs can result in an incomplete analysis by SAST. Additionally, DAST cannot provide an accurate assessment of the vulnerability of an API without more context on how the API is expected to function correctly, nor can it interpret what constitutes a successful business logic attack.
As a result, API-specific test tooling is gaining ground, enabling things like continuous validation of API specifications.
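One building block of continuous specification validation is comparing what the spec documents against what is actually served. A minimal sketch, using hypothetical example paths: diffing the two sets surfaces both shadow endpoints (served but undocumented) and zombie candidates (documented but never called), the same categories described earlier.

```python
def diff_spec_vs_observed(spec_paths: set, observed_paths: set) -> dict:
    """Compare paths documented in an API spec with those seen in traffic.

    Returns undocumented ("shadow") endpoints and documented-but-unused
    ("zombie" candidate) endpoints.
    """
    return {
        "shadow": sorted(observed_paths - spec_paths),  # served, not in the spec
        "zombie": sorted(spec_paths - observed_paths),  # in the spec, never called
    }

# Hypothetical example data
documented = {"/payments", "/payments/{id}", "/refunds"}
observed = {"/payments", "/payments/{id}", "/internal/debug"}

result = diff_spec_vs_observed(documented, observed)
# result["shadow"] == ["/internal/debug"]; result["zombie"] == ["/refunds"]
```

Run continuously against live traffic, a diff like this turns the API inventory problem into an automated check rather than a periodic audit.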
Testing GenAI
A major challenge with any application security test plan is generating test cases tailored explicitly to the applications being tested before release. In an API context, this might involve checking that the API returns the correct data when called; if it returns more than it should, application data can easily be compromised.
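A concrete form of that check is a contract test asserting the response exposes no fields beyond the documented ones. The field names below are an assumed contract for illustration, not a real API's schema:

```python
EXPECTED_FIELDS = {"id", "amount", "currency", "status"}  # assumed contract

def check_response_fields(payload: dict) -> list:
    """Return the fields a response exposes beyond its documented contract.

    An empty list means the response matches the contract; any extra
    field is a potential data-exposure finding.
    """
    return sorted(set(payload) - EXPECTED_FIELDS)

# A response that leaks an internal field alongside the documented ones:
leaky = {"id": "p_1", "amount": 999, "currency": "GBP",
         "status": "settled", "card_number": "4111111111111111"}

assert check_response_fields(leaky) == ["card_number"]
```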
Many organisations create test plans manually, leading to errors and requiring developers to know security test cases to associate with their APIs. One of the ways around this may be to allow developers to specify tests in Plain English, using prompts in GenAI-enabled tooling.
A security analyst might, for example, state, “Generate a test plan for my Payments API to ensure PCI data compliance” via an AI-enabled API security tool, avoiding the need to write the query or the detailed test plan by hand. Such a prompt would then cue an automatic inspection of payment API endpoints and their payload characteristics, associating the appropriate test cases to check those endpoints for compliance with the PCI DSS.
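The shape of that workflow can be sketched without any real model: map phrases in the plain-English prompt to pre-defined test-case templates. This keyword lookup is a deliberately toy stand-in for the GenAI step (a real tool would use an LLM to interpret the prompt and the API's endpoints), and every test-case string below is an illustrative assumption.

```python
# Toy stand-in for the GenAI step: a library of test-case templates
# keyed by topic keyword (all entries are illustrative assumptions).
TEST_LIBRARY = {
    "pci": [
        "Reject full card numbers (PANs) in any response body",
        "Require TLS on every payment endpoint",
        "Mask all but the last four digits of stored PANs",
    ],
    "auth": [
        "Reject requests with a missing or expired token",
    ],
}

def generate_test_plan(prompt: str) -> list:
    """Select test cases whose topic keywords appear in the prompt."""
    prompt_lower = prompt.lower()
    plan = []
    for keyword, cases in TEST_LIBRARY.items():
        if keyword in prompt_lower:
            plan.extend(cases)
    return plan

plan = generate_test_plan(
    "Generate a test plan for my Payments API to ensure PCI data compliance"
)
```

Swapping the keyword lookup for an LLM call is what turns this sketch into the GenAI-enabled tooling described above; the surrounding plumbing changes very little.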
In the event of a test failure, the remediation workflow can then be exported to third-party systems, with details of the cause provided by GenAI, and the test results can also be fed into the CI/CD pipeline.
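The export step amounts to packaging each failure as a payload the downstream system can ingest. A minimal sketch, with field names that are illustrative assumptions rather than any tracker's real schema:

```python
import json

def failure_to_ticket(test_name: str, endpoint: str, cause: str) -> str:
    """Package a failed security test as JSON a ticketing system could ingest.

    Field names are illustrative; a real integration would follow the
    target system's own schema (Jira, ServiceNow, etc.).
    """
    return json.dumps({
        "title": f"API security test failed: {test_name}",
        "endpoint": endpoint,
        "cause": cause,  # in practice, the explanation supplied by GenAI
        "labels": ["api-security", "automated"],
    })

ticket = failure_to_ticket(
    "pci-pan-masking", "/payments/{id}", "Full card number returned in response"
)
```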
It’s still too early to determine the full impact of GenAI on API development and security testing. However, early evidence indicates that it can significantly reduce the time taken to generate test cases and harmonise testing across development and security teams.
Plain English prompting should make it much easier to query and gather the requisite information to demonstrate compliance with industry regulations. It builds on recent advances in embedding security testing within dedicated API security tools, meaning the sector no longer needs to rely purely on SAST/DAST tooling.