Blog

by Nortal

How to validate hundreds of APIs in the blink of an eye

With approximately 300 applications in production, half slated for deprecation in the near future and the other half being upgraded or rewritten, plus applications where we have partial or no access to most of the dependencies, we find ourselves reliant on our client's systems for feedback and delivery statuses.

CI/CD, the industry standard, can be a slow process when it involves moving old, massive systems. And when the client uses AppDynamics for monitoring applications, APIs, and statistics, with restrictions on data availability, the result is a lot of guesswork. This conjecture becomes a critical problem when kicking off architectural refactoring with 15 active applications and REST services in production and more than 30 applications that depend on ours.

With the portfolio scaling by a similar number in the upcoming years, it is easy to end up in a situation where you are working fast but have no quick way to know whether a delivery in one of the projects might break something for your development partners or other, unknown applications.

Plotting the master plan

Bulletproof functioning means verifying that all APIs consumed by a given application are still available and work as expected. It is important to fail fast if an API contract is broken; otherwise, the failure surfaces only when a user performs an action, an undesired outcome.

After a new version of an application is deployed, a solution verifying intact communication is required.

Tackling the tactics

The initial idea was to validate APIs by versioning and to show an info page after deployment. The aim was to record which dependency versions an application had been tested with and, from then on, to prompt the user to check whether some service versions were off.

It sounded great in theory, but it stayed a concept: there is no agreed-upon versioning scheme for the API services, and not all development partners version their APIs, so we cannot track which versions an application was tested with.

This led us to the following staging process:

  • We have an Integration Environment where all development partners deploy the latest working versions of their applications;
  • If the integration sanity checks pass, we ship to our client’s TEST environment, which holds the application versions for the latest feature requests;
  • If the TEST checks pass, we promote to a PRE-LIVE environment, which holds the application versions of deliveries that will go LIVE shortly. Hotfixes are also delivered through PRE-LIVE.

The problem is that we don’t control which versions are delivered and when, because projects move at different speeds and under different legislation. Furthermore, there is no coherent API versioning across projects and business partners. So we took the next best thing and made the path simpler: we track usage of the actual APIs, not arbitrary version numbers.

Luckily, our annual Christmas hackathon was coming up, so we set out to reinvent the wheel, have fun, and build something useful.

Dependency Verification

First, get the list of APIs consumed by the given application and verify whether these APIs are currently available.

Goal

As a user, I want to see the list of services and their APIs. Additionally, there should be an indication of whether the service and API are responding at any given moment.

Output example:

[
  {
    "name": "third-party-backend",
    "status": "OK",
    "endpoints": [
      {
        "url": "http://hostname/third-party-backend/api/client/endpoint1",
        "status": "OK",
        "statusCode": 200
      },
      {
        "url": "http://hostname/third-party-backend/api/client/v1/endpoint2/parameter",
        "status": "OK",
        "statusCode": 200
      },
      {
        "url": "http://hostname/third-party-backend/api/client/v1/endpoint3/parameter",
        "status": "OK",
        "statusCode": 200
      },
      {
        "url": "http://hostname/third-party-backend/api/client/v2/endpoint2",
        "status": "OK",
        "statusCode": 200
      }
    ]
  },
  {
    "name": "third-party-backend2",
    "status": "NOK",
    "endpoints": [
      {
        "url": "http://hostname/third-party-backend2/internal/api/v1/companies/parameter/ratings/compliance",
        "status": "OK",
        "statusCode": 200
      },
      {
        "url": "http://hostname/third-party-backend2/internal/api/v1/companies/parameter/ratings/risks",
        "status": "NOK",
        "statusCode": 404
      }
    ]
  }
]

Implementation

The implementation is based on the assumption that the application has a full URL for each third-party web service it calls. How the app obtains that URL (service registry, property file, database, or hardcoded value) should be irrelevant.

That aside, we should be able to gather all the URLs used throughout the application for further processing. First, a minor refactoring: let’s put all URLs in a single place.

@Component
public class ExternalServiceUrlProvider {

    @Inject
    private ServiceRegistry serviceRegistry;

    public String getServiceName() {
        return "external-service";
    }

    public String getExternalServiceOneUrl() {
        return serviceRegistry.getExternalServiceOneApiUrl() + "/client/v2/externalServiceOne";
    }

    public String getExternalServiceTwoUrl() {
        return serviceRegistry.getExternalServiceTwoApiUrl() + "/client/v1/externalSources/externalServiceTwo";
    }

    public String getExternalServiceThreeUrl() {
        return serviceRegistry.getExternalServiceThreeApiUrl() + "/client/v1/summary/externalServiceThree";
    }

    public String getExternalServiceFourUrl() {
        return serviceRegistry.getExternalServiceFourApiUrl() + "/client/externalServiceFour";
    }
}

Similarly, we create a separate “URL provider” component for every service. Next, how can we collect all URLs for processing?

Spring’s dependency injection may be used to collect all instances of some particular class or interface. Let’s create an interface.

public interface ServiceUrlProvider {
    String getServiceName();
}

Then we change our “URL provider” to implement the interface:

@Component
public class ExternalServiceUrlProvider implements ServiceUrlProvider {
    // …
}

As soon as we have refactored the code so that all “URL providers” are Spring components implementing the interface, we can inject them into a service.

@Service
public class DependencyService {

    private final List<ServiceUrlProvider> serviceUrlProviders;

    public DependencyService(List<ServiceUrlProvider> serviceUrlProviders) {
        this.serviceUrlProviders = serviceUrlProviders;
    }
}

Now, let’s perform some reflection magic. For an instance of ‘ServiceUrlProvider,’ we need to call every method that returns a URL. Actually, “every” method is too much: we want to call only the methods returning URL values, so we skip all non-public methods and the “getServiceName” method. The less obvious methods to skip are the synthetic methods. If the notion of “synthetic” constructs in Java is unfamiliar to you, here is the definition from the Java Language Specification, JLS §13.1:

A construct emitted by a Java compiler must be marked as synthetic if it does not correspond to a construct declared explicitly or implicitly in source code, unless the emitted construct is a class initialization method.
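To make this concrete, here is a small standalone sketch (not from the project) showing where such a method comes from: a class implementing a generic interface gets a compiler-generated bridge method, which ‘Method.isSynthetic()’ reports as synthetic.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class SyntheticDemo {

    interface Provider<T> {
        T get();
    }

    // StringProvider.get() returns String, so the compiler also emits a
    // synthetic bridge method "Object get()" to satisfy the erased
    // Provider interface.
    static class StringProvider implements Provider<String> {
        public String get() {
            return "url";
        }
    }

    // Collects name and return type of every synthetic method on a class.
    static List<String> syntheticMethods(Class<?> clazz) {
        List<String> names = new ArrayList<>();
        for (Method m : clazz.getDeclaredMethods()) {
            if (m.isSynthetic()) {
                names.add(m.getName() + " -> " + m.getReturnType().getSimpleName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        // prints [get -> Object]: the bridge method, not declared in source
        System.out.println(syntheticMethods(StringProvider.class));
    }
}
```

Calling such a bridge method would just duplicate the real one, which is why the filter below skips synthetic methods.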

Collect all URLs from a service URL provider:

private List<String> getAllEndpointsInfo(ServiceUrlProvider serviceUrlProvider) {
    return Arrays.stream(serviceUrlProvider.getClass().getDeclaredMethods())
            .filter(method -> !method.isSynthetic())
            .filter(method -> isPublic(method.getModifiers()))
            .filter(method -> !method.getName().equals("getServiceName"))
            .map(method -> invoke(serviceUrlProvider, method))
            .collect(toList());
}

private String invoke(ServiceUrlProvider urlProvider, Method method) {
    try {
        return (String) method.invoke(urlProvider);
    } catch (IllegalAccessException | InvocationTargetException e) {
        throw new RuntimeException(e);
    }
}

Of course, it cannot be that simple: there are often URLs with parameters, and corresponding methods that take arguments. To issue a request to such URLs, we need to supply arbitrary values for those parameters.

One way to do this:

private static final Map<Class<?>, Object> DEFAULT_VALUES_BY_TYPE = ImmutableMap.<Class<?>, Object>builder()
        .put(boolean.class, false)
        .put(Boolean.class, false)
        .put(int.class, 100)
        .put(Integer.class, 100)
        .put(long.class, 100L)
        .put(Long.class, 100L)
        .put(float.class, 100f)
        .put(Float.class, 100f)
        .put(double.class, 100d)
        .put(Double.class, 100d)
        .put(Date.class, new Date())
        .put(LocalDate.class, LocalDate.now())
        .put(LocalDateTime.class, LocalDateTime.now())
        .build();

private String invoke(ServiceUrlProvider urlProvider, Method method) {
    try {
        return (String) method.invoke(urlProvider, getParameters(method.getParameters()));
    } catch (IllegalAccessException | InvocationTargetException e) {
        throw new RuntimeException(e);
    }
}

private Object[] getParameters(Parameter[] parameters) {
    return Arrays.stream(parameters)
            .map(param -> param.getType().equals(String.class) ? param.getName() : DEFAULT_VALUES_BY_TYPE.get(param.getType()))
            .toArray();
}
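As a runnable illustration of this technique (with hypothetical names, not the project’s code), the sketch below resolves default arguments for a URL-building method via reflection. One practical caveat: ‘Parameter.getName()’ only returns the real source name when the code is compiled with the ‘-parameters’ flag; otherwise it falls back to ‘arg0’, ‘arg1’, and so on, which is still good enough for an availability probe.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;
import java.util.Arrays;
import java.util.Map;

public class ParameterDemo {

    // A trimmed-down defaults table, analogous to DEFAULT_VALUES_BY_TYPE.
    static final Map<Class<?>, Object> DEFAULTS = Map.of(
            long.class, 100L,
            int.class, 100,
            boolean.class, false);

    // A sample "URL provider" method with a String and a long parameter.
    public String getCompanyRatingUrl(String companyId, long year) {
        return "/api/v1/companies/" + companyId + "/ratings/" + year;
    }

    // Builds an argument array: Strings get the parameter name as a
    // placeholder value, other types fall back to the defaults table.
    static Object[] defaultArgs(Parameter[] parameters) {
        return Arrays.stream(parameters)
                .map(p -> p.getType().equals(String.class) ? p.getName() : DEFAULTS.get(p.getType()))
                .toArray();
    }

    public static void main(String[] args) throws Exception {
        Method m = ParameterDemo.class.getMethod("getCompanyRatingUrl", String.class, long.class);
        String url = (String) m.invoke(new ParameterDemo(), defaultArgs(m.getParameters()));
        // e.g. "/api/v1/companies/companyId/ratings/100" (or ".../arg0/..."
        // without the -parameters compiler flag)
        System.out.println(url);
    }
}
```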

But what if the parameters are validated and our default value is invalid? For our goal, that is irrelevant: we are not sending requests the way the application would, and there is no automatic way to verify the business logic of a third-party API. We are only checking whether the API is available.

The last question is how to check the availability of an API. There are two possibilities: an actual request (GET, POST, DELETE) or an OPTIONS request.

The actual request may be troublesome: besides the URL, the ServiceUrlProvider would need information about the HTTP method and maybe even the request body. Moreover, it may be dangerous, as POST and DELETE requests can alter application data and state, which is unacceptable.

The OPTIONS request, however, is a better match, as stated in RFC 7231 §4.3.7:

The OPTIONS method requests information about the communication options available for the target resource, at either the origin server or an intervening intermediary.  This method allows a client to determine the options and/or requirements associated with a resource, or the capabilities of a server, without implying a resource action.

That is, in fact, exactly what we need. Now we can use any HTTP client to issue a request to check the response. For example, using a ‘RestTemplate’:

ResponseEntity<Object> responseEntity = restTemplate.exchange(url, HttpMethod.OPTIONS, new HttpEntity<>(requestHeaders), Object.class);

Any response other than ‘404 Not Found’ is considered successful. Once again, we are not checking business logic, merely the availability of the API.
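Putting the pieces together, a minimal sketch of the per-endpoint check might look like the following (hypothetical names; the HTTP call is abstracted as a function from URL to status code, so a RestTemplate OPTIONS exchange, or any other client, can be plugged in and the logic stays testable without a network):

```java
import java.util.List;
import java.util.function.ToIntFunction;
import java.util.stream.Collectors;

public class AvailabilityCheck {

    // Mirrors one entry of the JSON "endpoints" array shown earlier.
    record EndpointInfo(String url, String status, int statusCode) {}

    static List<EndpointInfo> check(List<String> urls, ToIntFunction<String> probe) {
        return urls.stream()
                .map(url -> {
                    int code = probe.applyAsInt(url);
                    // Only 404 means the route is gone; codes like 401, 403,
                    // or 405 still prove the endpoint exists.
                    String status = code == 404 ? "NOK" : "OK";
                    return new EndpointInfo(url, status, code);
                })
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Stubbed probe standing in for a live OPTIONS request.
        List<EndpointInfo> result = check(
                List.of("http://hostname/svc/api/a", "http://hostname/svc/api/b"),
                url -> url.endsWith("/b") ? 404 : 200);
        // prints: http://hostname/svc/api/a OK
        //         http://hostname/svc/api/b NOK
        result.forEach(e -> System.out.println(e.url() + " " + e.status()));
    }
}
```

In the real service, the probe would wrap the RestTemplate exchange above and map client exceptions to their status codes.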

Dashboard

Since more than one application has API Cover integrated, it is beneficial to have one place that gathers and shows the information from all apps. So that one look is enough to tell whether there is a problem, we created a simple API Cover dashboard.

The idea is straightforward. There is a primitive graph with nodes representing used APIs per application. Green is OK; red is NOT OK. The specific erroneous API can be found with a couple of clicks.

Limitations

Although the result has met expectations, there is one major limitation. In some cases the third-party service provides a client library that requires only the base URL of the service; all exact API paths are hidden inside. There is no way to extract and verify such URLs. Thus, unless such a library is modified to depend on the ServiceUrlProvider interface, that service’s APIs will not appear in the “dependencies” output.

Conclusion

Even though the approach has its limitations with third-party client libraries, we can still implement this dependency in our own libraries, which should, at least in the long run, give us a better overview of our own applications’ dependencies. After the minor refactoring, all the URLs an application uses live in one place, so there is only one place where changes have to be made, along with a better overview of what is used. We have started implementing this library in other projects to get feedback from teams and to see whether there are more ways to improve on the idea. And we definitely need a better dashboard…

