How to get started with contract testing

Matt Fellows

This is part 2 in a series where we answer key questions from our community channels. Join us on Slack if you're not already part of our main forum.

If you want to catch up, read Part 1: What is contract testing and why should I use it?

In this post, we consider the next question people ask us once they are convinced of the benefits of contract testing - how to get started.

Love the idea, but how do I get started?

There are several motivations behind such questions: how to get buy-in from others in the team, which application would be a good place to start, how to measure success, and how to educate the team. Let's tackle each of these in turn.

How do I get buy-in from the team?

Building software is a team sport - there is no shying away from this reality - and introducing a new tool into a team, be it a team of 5 or 50 people, requires collective agreement: usually reached by talking to one another.

Sometimes you have members of the team who don't initially buy into the idea. This can be frustrating, but in our experience it's usually because they can't see the value in doing so. Adding a tool requires a cognitive investment, and developers need to be convinced it's worth their time.

So, before you can convince somebody, you need to answer the question of "why". There are many reasons to introduce a tool, but in the case of Pact and contract testing, those reasons can often be posed as a set of questions.

Once you're clear on the goals and objectives, you can often turn a detractor into a change agent for your cause.

Do we have a problem we need to solve?

Often Pact is added because there is a stated or implied issue with the existing system. Answering yes to any of these questions will help you identify compelling reasons to consider contract testing:

  • Are your end-to-end integration tests difficult to manage, slow or brittle?
  • Do you spend more time maintaining tests than building software?
  • Do you regularly find issues late in the development process, or worse - in production?

How might we move faster or have higher quality?

The motivation to simply do better is also a great driver:

  • Do you want to increase your test coverage?
  • Do you want to deploy multiple times a day (CD)?
  • Do you want a better picture of your system integrations?

How can we explore another approach (aka the Experiment)?

Perhaps you have a new greenfield project, and it's a good opportunity to re-evaluate the current process, even if it's not causing you massive headaches. Or perhaps you've seen Pact and consumer-driven contracts mentioned on the Thoughtworks Tech Radar and thought it was worth trialling.

How do I measure success?

Posing the questions in this way also gives us another benefit - they are measurable, so we can compare against baseline metrics and demonstrate value as we introduce a change.

Below are some metrics you can use to drive change.

Starting with quantitative metrics, we find that the four key metrics from DORA, which relate to high-performing organisations, are a great way to measure and report on your contract-testing initiative. These metrics are:

  • Deployment frequency: how often do you release?
  • Lead time for change: how long does it take to get a release from commit to production?
  • Change failure rate: how often does a change result in a failure, as a percentage of all changes?
  • Mean time to recovery (MTTR): how long does it take to recover from a failure (e.g. fix/rollback)?
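
As a concrete illustration, here's a minimal sketch of how two of these could be computed from raw deployment records. The Deployment shape below is hypothetical - substitute whatever your CI/CD system actually records:

```typescript
// Hypothetical deployment record - adapt to what your CI/CD system exposes.
interface Deployment {
  deployedAt: Date;
  failed: boolean;      // did this change result in a failure?
  recoveredAt?: Date;   // when was the failure fixed or rolled back?
}

// Change failure rate: failed changes as a percentage of all changes.
function changeFailureRate(deployments: Deployment[]): number {
  const failures = deployments.filter((d) => d.failed).length;
  return (failures / deployments.length) * 100;
}

// Mean time to recovery (MTTR): average hours from failure to fix/rollback.
function meanTimeToRecovery(deployments: Deployment[]): number {
  const failures = deployments.filter((d) => d.failed && d.recoveredAt);
  const totalMs = failures.reduce(
    (sum, d) => sum + (d.recoveredAt!.getTime() - d.deployedAt.getTime()),
    0
  );
  return totalMs / failures.length / (1000 * 60 * 60);
}
```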

These metrics are usually easier to measure, have above-the-line impact, and don't require tracking low-level details. However, you may want more granular feedback on things you have the power to change directly, such as:

  • Reduced build duration
  • Reduced feature cycle time
  • Fewer issues caught in the later stages of your SDLC
  • Reduced time spent writing and maintaining tests

If you're unclear on these numbers, consider getting representative baselines over a period of time by doing some manual activity accounting within the team.

Qualitative metrics are also worthwhile, because they contribute to developer satisfaction:

  • How is the team feeling about the new tools?
  • What are the pains and gains? (If you have pains - or gains - do let us know!)
  • How active is the tool's development?
  • How active and supportive is the community?

Where should I get started?

Because every situation is going to be different, there is no easy answer here! Let's start with some generic advice and then look at some common archetypes.

  1. Get clear on the goals, objectives and how you will measure success
  2. Seek approval from key stakeholders and propose a time-boxed experiment (perhaps it's a sprint goal or objective)
  3. Find some advocates who are willing to participate
  4. Start small - find one consumer-provider pair, ideally the one with the smallest/simplest set of interactions (see the sketch after this list)
  5. If possible, keep the initial test within a single team - you'll learn how to communicate across service boundaries and be better prepared for conversations about cross-team collaboration when you get there
  6. Review how you went against your objectives, design your next experiment and iterate
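
To make "start small" concrete, here's a minimal consumer-side Pact test sketch in TypeScript (using Jest and @pact-foundation/pact). The OrderWeb/OrderApi pair, the /orders/1 endpoint and the response shape are all hypothetical - substitute your own pair's simplest interaction:

```typescript
import { PactV3, MatchersV3 } from "@pact-foundation/pact";

// Hypothetical consumer-provider pair - use your own smallest interaction.
const provider = new PactV3({ consumer: "OrderWeb", provider: "OrderApi" });

describe("OrderApi contract", () => {
  it("returns an order by ID", () => {
    provider
      .given("an order with ID 1 exists") // provider state
      .uponReceiving("a request for order 1")
      .withRequest({ method: "GET", path: "/orders/1" })
      .willRespondWith({
        status: 200,
        headers: { "Content-Type": "application/json" },
        // like() matches on type rather than exact values,
        // keeping the contract only as strict as it needs to be.
        body: MatchersV3.like({ id: 1, status: "SHIPPED" }),
      });

    // The test runs against a local Pact mock server; on success, a
    // pact (contract) file is written for the provider to verify later.
    return provider.executeTest(async (mockServer) => {
      const res = await fetch(`${mockServer.url}/orders/1`);
      expect(res.status).toBe(200);
    });
  });
});
```

Even a single interaction like this is enough to produce a pact file and exercise the whole workflow end to end before you scale out.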

Usually, your first experiment will be fairly contained and won't move the dial too far in any direction. Furthermore, you'll often be in a situation where you have added a tool before you've been able to remove another - so be prepared for that uncomfortable overlap. As you build confidence in the tool and process, you'll be able to remove the others.

Here are some common situations we see and how you might deal with them.

When you have a distributed monolith...

A distributed monolith is one where, although you nominally have a microservices architecture, each microservice must be tested and/or deployed together with the others. This combines the worst properties of both architectural styles.

Goal: Your goal here is to remove that coupling and enable independent releases (see any of Sam Newman's excellent talks on this).

Our suggestion is to find two services within that architecture that communicate with each other and share one or more of the following characteristics:

  1. They are small and easy to change
  2. Their combined interfaces are well understood
  3. They are the most plausible candidates for slicing away from the broader distributed monolith
  4. They are managed by a single team

Find this ideal candidate and iterate on the testing until you have all of the interactions captured. At this point, you're free to deploy your two components without relying on others - a milestone to celebrate! But the rest of the system will still depend on your components in order to release. Because you now have confidence in your deployment process, this is where you can have the discussion about removing the end-to-end tests, or at least shifting who runs and manages them (hint: whoever owns the service).
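
On the provider side, verifying those captured interactions is what gives you the confidence to deploy independently. A hedged sketch using pact-js's Verifier - the service name, port and pact file location are assumptions - might look like this:

```typescript
import path from "path";
import { Verifier } from "@pact-foundation/pact";

// Replays each consumer interaction against a locally running provider.
// Provider name, base URL and pact file paths here are hypothetical.
new Verifier({
  provider: "OrderApi",
  providerBaseUrl: "http://localhost:8080",
  pactUrls: [path.resolve(__dirname, "pacts/OrderWeb-OrderApi.json")],
  stateHandlers: {
    // Seed whatever data each interaction's provider state requires.
    "an order with ID 1 exists": async () => {
      // e.g. insert order 1 into a test database
    },
  },
})
  .verifyProvider()
  .then(() => console.log("Pact verification complete"));
```

Once this verification runs in each service's own pipeline, it can stand in for the shared end-to-end suite discussed above.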

One final note: if the situation is so bad that every day feels like a race to a release deadline and your builds are constantly RED, scrap the above rules - find the most painful problem, lean into it and show that it can be done in a better way. There's nothing like a fire to inspire change!

When you want to transition from a monolith to microservices...

The rules here are similar to those for a distributed monolith, but instead of looking at services, you'll need to think about domains (read this overview or follow this excellent guide). Find areas of code that are responsible for a single bounded context and look to split them off from the monolith. Ideally, start with the ones that change the most or have the most value, so that you gain actual benefits from the architecture and have good arguments to convince others (e.g. management), creating a virtuous cycle to do more. Contract tests can support this transition because it is an iterative process, and each subsequent interface that is extracted can be covered by a set of contracts.

Goal: Your contract-testing goal in this scenario is to safely evolve your architecture and unlock value previously stuck in slow-moving parts of your code base.

Who should do the starting?

The most successful model we've encountered is the Community of Practice.

"A community of practice (CoP) is a group of people who share a common concern, a set of problems, or an interest in a topic and who come together to fulfill both individual and group goals... Communities of practice often focus on sharing best practices and creating new knowledge to advance a domain of professional practice."

Source: http://www.communityofpractice.ca/background/what-is-a-community-of-practice/

Within that community, there are usually a few members who initiate it and become the initial contract-testing champions. This coalition will first put together the business case for contract testing, identify the specific changes required in the organisation to make it successful, and implement the initial POC. It is important, throughout the initial implementations beyond the POC, that there is continuity and coaching from these champions - to ensure a consistent rollout, that lessons are quickly incorporated into common practice, and that teams have up-to-date knowledge of how Pact is being used in the organisation. Once the first few implementations have been successfully rolled out, these champions can take a less hands-on role in implementation, instead fostering the broader community of practice.

Training, Education and Scaling

Once you're through your POC and have demonstrated the value in moving forward, the challenge becomes scaling contract testing throughout the organisation.

Whilst there is no one way to go about it, a good starting point is educating teams on the value of contract testing and how it applies in your context, and training them to use it effectively. We have numerous training and educational materials at Pactflow University: demo packs to explain the problem and concepts that you can use internally, workshop materials to train your team, and numerous examples across languages and technologies.

A contract-testing initiative with Pact, expressed as a Gantt chart, might look as follows:

[Image: Indicative contract-testing plan from POC to team rollout]

Most of the hard up-front work is in the POC phase, where the success criteria are defined, the baseline measurements are taken and the core team who will champion the change throughout the organisation is formed.


Customer stories

Here are some stories published on the web from those using Pact and contract testing to improve their software quality - these could be great places to start if you're looking for inspiration.

  • Pactflow Case Studies
  • Monolith to microservices
  • Faster and Safer Microservices

Do you have a story you'd like to share?

We would love to share your story with others - let us know and we'll get you added!
