Pact evolved from two main concepts: unit testing microservice integrations and Consumer Driven Contracts (CDC). The consumer driven aspect has always been a big driver for us, as we felt it has a lot of benefits for building useful APIs. And an API is only useful to its consumers.

The main benefits can be articulated as:

  • By focusing on the needs of the consumer, you build services that are fit for purpose and avoid over-engineering.
  • You have a mechanism to know when you've broken any consumer, so it is easy to evolve and refactor your service.

But there are a number of cases where CDC can be problematic. One is where a service is used by a lot of consumers. The strategy doesn't scale well with lots of consumers each providing a contract that may be just a little bit different.

The other is where the consumer does not really understand the intricacies of the service and is unable to provide a meaningful contract, or has no appetite to do so and just wants to get the integration out of the way so they can move on to more important things. I've worked at a number of large corporations and have seen this happen in those environments. In fact, at a large bank I worked at recently, the consumers of our services were sometimes three degrees of separation away. We did not really know who they were, apart from the business unit they belonged to or the name of the initiative their requirements came from.

We've received a lot of comments over the years from people asking why a consumer of their APIs can break their build. The prevailing attitude seems to be that the provider of the service should not be concerned about breaking a consumer with a release; it is the responsibility of the consumer team to ensure they are compatible with newer versions of the provider service. In fact, if a consumer build breaks, the provider team gets to mock them for their weakness. Assuming, of course, they know who the consumers actually are.

Contract testing in an environment where the provider is the driver

Provider driven contract testing is definitely achievable if we take the ideas distilled in consumer driven contract testing, but reverse the direction. In fact, it probably works nearly identically as long as all the parts are there. The main issue I find in environments where the providers drive things is that there is no feedback loop. Everything is one way. The Right Way, of course, but still only one way.

With Pact and consumer driven contract testing we built things based on the following two principles:

  1. The contract is generated from running actual consumer code. This avoids drift between what is documented and what is implemented.
  2. The consumer publishes their contract when their consumer tests pass, and the provider publishes the results of verifying the consumer's published contract.

This forms a very important feedback loop. The provider always knows that a contract was generated from passing tests, and the consumer knows if a version of the provider was verified successfully against that contract. And either side can build validation of this data into their build pipeline using tools like can-i-deploy.
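
The feedback-loop check described above can be sketched in a few lines. This is a minimal, hypothetical model of the kind of question can-i-deploy answers; the real tool queries the records held by a Pact Broker, and the service names, versions, and data shapes below are invented for illustration.

```python
# Hypothetical record of verification results, as a Pact Broker might hold them:
# (consumer, consumer_version, provider, provider_version) -> verification passed?
verification_results = {
    ("shop-ui", "1.4.0", "orders-api", "2.1.0"): True,
    ("shop-ui", "1.4.0", "orders-api", "2.2.0"): False,
}

def can_i_deploy(consumer, consumer_version, provider, provider_version):
    """Return True only if a successful verification has been recorded
    for this exact pair of versions. No record at all also means "no"."""
    return verification_results.get(
        (consumer, consumer_version, provider, provider_version), False
    )

print(can_i_deploy("shop-ui", "1.4.0", "orders-api", "2.1.0"))  # True
print(can_i_deploy("shop-ui", "1.4.0", "orders-api", "2.2.0"))  # False
```

The key design point is that absence of evidence is treated as failure: a deployment is only safe when a matching, successful verification has actually been published, which is exactly why both halves of the feedback loop matter.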

To implement provider driven contract testing, we need to have the same principles:

  1. The contract is only published when it has been validated against the provider's service. This avoids drift and promotes confidence in the contract.
  2. The consumer verifies their code against the published contract and then publishes the results of the verification. This creates the feedback loop.

Using OpenAPI/Swagger Specifications as contracts

OpenAPI/Swagger has become the de facto way of specifying APIs, especially in large organisations where there are just too many people for there to be effective avenues of communication and levels of trust. OpenAPI/Swagger is now also being used to configure API gateways like Apigee and AWS API Gateway. So it makes sense to use it as the mechanism for contract tests.

However, there are a number of issues and gaps that need to be addressed before it can be used for this purpose. Here are a few that I have observed:

  • The specification is sometimes written upfront by an architect or solution designer, is full of assumptions, and may not represent the actual running service.
  • The specification is published to all parties, but is not maintained along with the service. This is a problem when there are a lot of assumptions in it that turn out to be wrong or the service is updated later and the specification is not.
  • Nobody knows what the latest version of the specification is, or which version corresponds to a particular version of a consumer or service. Especially when the specification is shared via email.
  • Consumers are built against mock servers that run off the specification, but there is no feedback on how they run against updated versions of the specification. If you knew that a change to a specification did not fail any consumer build, you would be very confident of rolling that change out to production without lengthy integrated testing.
  • OpenAPI/Swagger is geared more towards documenting API endpoints. You can validate whether a given request or response adheres to the specification, and you know all the parameters that are accepted and the possible responses that can be returned. But there is no way to automatically know which set of parameters leads to which type of response.
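
The last point is worth making concrete. Checking that a response conforms to a declared schema is mechanical; knowing which inputs produce which response is not in the document at all. The schema fragment and the tiny validator below are hypothetical, covering just enough of JSON Schema to show the distinction.

```python
# A hypothetical response schema, as it might appear under the "200"
# response of GET /orders/{id} in an OpenAPI document.
spec_response_schema = {
    "type": "object",
    "required": ["id", "status"],
    "properties": {
        "id": {"type": "integer"},
        "status": {"type": "string"},
    },
}

TYPE_MAP = {"object": dict, "integer": int, "string": str}

def conforms(body, schema):
    """Check a JSON body against a (very small subset of) JSON Schema."""
    if not isinstance(body, TYPE_MAP[schema["type"]]):
        return False
    for field in schema.get("required", []):
        if field not in body:
            return False
    for field, sub in schema.get("properties", {}).items():
        if field in body and not isinstance(body[field], TYPE_MAP[sub["type"]]):
            return False
    return True

print(conforms({"id": 42, "status": "shipped"}, spec_response_schema))  # True
print(conforms({"id": "42"}, spec_response_schema))                     # False

# What the specification does NOT tell us: which request parameters produce
# a 200 rather than a 404. That mapping has to come from somewhere else.
```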

Overcoming these problems will not be too difficult. Already, products like SwaggerHub let you publish and collaborate on specifications, and provide webhooks to tie build pipelines to the published specifications. But there is no feedback loop.

Provider driven contract testing with OpenAPI/Swagger and Pactflow

Pactflow already supports publishing contracts and verification results and webhooks to tie the publishing of the contracts to consumer and provider builds. It requires the versions of the consumer and provider to be published along with the contract or verification results, so you will know what the latest supported versions are.

What is missing is a way to generate the results of verifying the consumer and provider against the specification. This is going to be our focus at Pactflow: how can we make this easy to set up and execute?

One way we could accomplish this is to annotate the OpenAPI specification with vendor extensions, tying example requests to their associated responses. That would make it easy to automate the consumer and provider verification tests. We could then extend tools like Pact to support OpenAPI specifications as contracts.
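
As a sketch of what such an annotation might look like, here is a hypothetical OpenAPI fragment. The extension name `x-contract-examples` is invented for illustration; it is not part of the OpenAPI specification or any existing Pactflow feature, but OpenAPI does permit arbitrary `x-` prefixed vendor extensions like this.

```yaml
paths:
  /orders/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The order was found
        "404":
          description: No order with that id exists
      # Hypothetical extension tying example requests to their responses,
      # giving tooling the parameter-to-response mapping OpenAPI lacks.
      x-contract-examples:
        - name: existing order
          request:
            parameters: { id: 42 }
          response: "200"
        - name: unknown order
          request:
            parameters: { id: 999 }
          response: "404"
```

With annotations like these, a verification tool could replay each named example against the provider and check the declared response, and a consumer test could use the same examples to drive a mock server.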

We would love to hear your thoughts on how useful this would be. Let us know what you think at hello@pactflow.io.