This is part 2 in a series where we answer key questions from our community channels. Join us on Slack if you're not already part of our main forum.

If you want to catch up, read Part 1: What is contract testing and why should I use it?

In this post, we consider the next question people ask us once they are convinced of the benefits of contract testing - how to get started.

Love the idea, but how do I get started?

There are really several questions wrapped up in this one: how to get buy-in from others in the team, where (and with which application) to start, and how to measure success. Let's tackle each of these in turn.

How do I get buy-in from the team?

Building software is a team sport - there is no shying away from that reality - and introducing a new tool into a team, be it a team of 5 or 50 people, requires collective agreement: usually reached by talking to one another.

Sometimes you have members of the team who don't initially buy into the idea - this can be frustrating, but in our experience it's usually because they can't see the value in doing so. Adding a tool requires a cognitive investment, and developers need to be convinced it's worth their time.

So, before you can convince somebody, you need to answer the question "why?". There are many reasons to introduce a tool, but in the case of Pact and contract testing, they can often be posed as a set of questions.

Once you're clear on the goals and objectives, you can often turn a detractor into a change agent for your cause.

Do we have a problem we need to solve?

Often Pact is added because there is a stated or implied issue with the existing system. Answering yes to any of these questions will help you identify compelling reasons to consider contract testing:

  • Are your E2E tests difficult to manage, slow or brittle?
  • Do you spend more time maintaining tests than building software?
  • Do you regularly find issues late in the development process, or worse - in production?

How might we move faster or have higher quality?

Motivation to simply do better is also a great driver:

  • Do you want to increase your test coverage?
  • Do you want to deploy multiple times a day (CD)?
  • Do you want a better picture of your system integrations?

How can we explore another approach (aka the Experiment)?

Perhaps you have a new greenfield project, and it's a good opportunity to re-evaluate the current process, even if it's not causing you massive headaches. Perhaps you've seen Pact and consumer-driven contracts mentioned on the Thoughtworks Tech Radar and thought it was worth trialling.

How do I measure success?

Posing the questions in this way also gives us another benefit - they are measurable, so we can capture baseline metrics and demonstrate value as we introduce the change.

Here are some metrics, tied to the questions above, that you can use:

  • Reduction in build duration
  • Increase in number of deployments a day
  • Reduced feature cycle time
  • Reduced issues caught in various stages of your SDLC
  • Time spent writing/maintaining tests (consider getting representative baselines over a period of time by doing some manual activity accounting within the team)

Qualitative metrics are also worthwhile:

  • How is the team feeling about the new tools?
  • What are the pains and gains? (If you have pains - or gains - do let us know!)
  • How actively is the tool being developed and maintained?
  • How active and supportive is the community?

Where should I get started?

Because every situation is going to be different, there is no easy answer here! Let's start with some generic advice and then look at some common archetypes.

  1. Get clear on the goals, objectives and how you will measure success
  2. Seek approval from key stakeholders and propose a time-boxed experiment (perhaps it's a sprint goal or objective)
  3. Find some advocates who are willing to participate
  4. Start small - find one consumer-provider pair, ideally the one with the smallest/simplest set of interactions (see the sketch after this list)
  5. If possible, keep the initial test within a single team - you'll learn how to communicate across service boundaries and be better prepared for conversations about cross-team collaboration when you get there
  6. Review how you went against your objectives, design your next experiment and iterate
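
To make step 4 concrete, here's a minimal sketch of a consumer-side test using Pact JS (Jest-style, v9 API). The consumer and provider names, port and endpoint are illustrative - substitute your own pair:

```typescript
// A minimal consumer test with Pact JS (v9-style API), run under Jest.
// "WebApp", "ProductService", the port and the endpoint are all
// hypothetical stand-ins for your own consumer-provider pair.
import { Pact, Matchers } from "@pact-foundation/pact";
import * as path from "path";
import fetch from "node-fetch";

const { like } = Matchers;

// Mock provider that stands in for the real provider during the test
const provider = new Pact({
  consumer: "WebApp",
  provider: "ProductService",
  port: 8991,
  dir: path.resolve(process.cwd(), "pacts"), // where the pact file is written
});

describe("ProductService contract", () => {
  beforeAll(() => provider.setup());
  afterEach(() => provider.verify()); // assert all expected interactions occurred
  afterAll(() => provider.finalize()); // write the pact file

  it("returns a product by id", async () => {
    // Register the interaction we expect our consumer code to make
    await provider.addInteraction({
      state: "a product with id 10 exists",
      uponReceiving: "a request for product 10",
      withRequest: { method: "GET", path: "/products/10" },
      willRespondWith: {
        status: 200,
        headers: { "Content-Type": "application/json" },
        body: like({ id: 10, name: "Pixel" }), // match on types, not exact values
      },
    });

    // Exercise the consumer's HTTP code against the mock provider
    const res = await fetch("http://localhost:8991/products/10");
    expect(res.status).toBe(200);
  });
});
```

Running this spins up a mock provider, records the expected interactions, and writes a pact file to the pacts directory, ready for the provider team to verify.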

Usually, your first experiment will be fairly contained and won't move the dial too far in any direction. Furthermore, you'll often be in the situation of having added a tool before you've removed any existing tests - so be prepared for that uncomfortable (but temporary) overlap. As you build confidence in the tool and process, you'll be able to remove the others.

Here are some common situations we see and how you might deal with them.

When you have a distributed monolith...

A distributed monolith is one where, although you nominally have a microservices architecture, each microservice must be tested and/or deployed together. It combines the worst properties of both architectural styles.

Goal: Your goal here is to remove that coupling and enable independent releases (see any of Sam Newman's excellent talks on this).

Our suggestion is to find two services within that architecture that communicate with each other and share one or more of the following characteristics:

  1. They are small and easy to change
  2. Their combined interfaces are well understood
  3. They are the most plausible candidates for slicing away from the broader distributed monolith
  4. They are managed by a single team

Find this ideal candidate, and iterate on the testing until you have all of the interactions captured. At this point, you're free to deploy your two components without relying on others - a milestone worth celebrating! Bear in mind, though, that the rest of the system may still require your components in order to release. Because you now have confidence in your own deployment process, this is where you can open the discussion about removing the end-to-end tests, or at least shifting who runs and manages them (hint: whoever owns the service).
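
To make that confidence concrete on the provider side, here is a hedged sketch using Pact JS's Verifier - the provider name, base URL and broker URL below are placeholders, not a prescribed setup:

```typescript
// A sketch of provider-side verification with Pact JS's Verifier.
// The provider name, base URL and broker URL are placeholders.
import { Verifier } from "@pact-foundation/pact";

new Verifier({
  provider: "ProductService",
  providerBaseUrl: "http://localhost:8080", // your provider, running locally or in CI
  pactBrokerUrl: "https://your-broker.example.com", // hypothetical broker
  publishVerificationResult: true, // only publish results from CI
  providerVersion: process.env.GIT_SHA || "1.0.0", // e.g. the commit SHA
})
  .verifyProvider()
  .then(() => console.log("Pact verification complete"));
```

Run this in the provider's own pipeline; once it passes against all published contracts, the provider can release without waiting on anyone else's end-to-end suite.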

One final note: if the situation is so bad that every day feels like a race to a release deadline and your builds are constantly RED, scrap the above rules - find the most painful problem, lean into it and show that it can be done in a better way. There's nothing like a fire to inspire change!

When you want to transition from a monolith to microservices...

The rules here are similar to those for a distributed monolith, but instead of looking at services, you'll need to think in terms of domains (read this overview or follow this excellent guide). You'll want to find areas of code responsible for a single bounded context and look to split them off from the monolith - ideally starting with the ones that change the most or carry the most value, so that you gain real benefits from the new architecture and have good arguments to convince others (e.g. management), creating a virtuous cycle to do more. Contract tests can support this transition because it is an iterative process: each interface that is extracted can be covered by a set of contracts.

Goal: Your contract-testing goal in this scenario is to be able to safely evolve your architecture, and to unlock value previously stuck in slow-moving parts of your code base.
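
Since each extracted interface gets its own set of contracts, it helps to publish them to a Pact Broker as you go, so the evolving picture of your architecture lives in one place. A minimal sketch using Pact JS's Publisher (broker URL, versioning scheme and tags are illustrative):

```typescript
// A sketch of publishing the pact files generated by consumer tests to a
// Pact Broker with Pact JS's Publisher. URL, version and tags are placeholders.
import { Publisher } from "@pact-foundation/pact";
import * as path from "path";

new Publisher({
  pactBroker: "https://your-broker.example.com", // hypothetical broker
  pactFilesOrDirs: [path.resolve(process.cwd(), "pacts")],
  consumerVersion: process.env.GIT_SHA || "1.0.0", // version by commit
  tags: ["main"], // or a tag per extracted bounded context
}).publishPacts();
```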


Customer stories

Here are some stories published on the web from those using Pact and contract testing to improve their software quality - these could be great places to start if you're looking for inspiration.

Monolith to microservices

Faster and Safer Microservices

Do you have a story you'd like to share?

We would love to share your story with others - let us know and we'll get you added!