
Knowing when the battle is lost with XCUITests

2 min read · May 8, 2025

Let’s say you’ve got a suite of XCUITests that takes 90 minutes to run. And let’s say you run those tests in CI, on a service such as Bitrise. And let’s say those tests start failing fairly early on in the run, many of them, if not all of them!

Quite possibly the logs you see in CI won’t have all the information you need to figure out why the tests are failing. What you really need is the .xcresult, which only becomes available once the tests have completed. And, at least with Bitrise, you can’t end the pipeline early, otherwise you’ll never get that .xcresult. But it could be a long wait: up to 90 minutes, or even longer if your tests tend to take longer when they’re timing out.

If something as terrible as this has happened in your tests, there’s likely zero value in running the remaining ones. Something is clearly fundamentally wrong with this test run; it will be apparent from the failures so far and will likely affect every test still to come. So why not just skip all the remaining tests?

This was the situation I found myself in for the nth time, so I figured I’d spend some of those wasted minutes looking for a solution.

I came across XCTestObservation, a protocol that lets you be informed of tests going through their lifecycle: starting, recording issues and finishing.

I put together a simple implementation of this and it works nicely!
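Here’s a sketch of the kind of thing that works (not the exact code from the post; the failure threshold and its value are placeholders I’ve chosen for illustration):

import XCTest

// Counts failures across the whole run so later tests can decide to bail out.
final class TestObserver: NSObject, XCTestObservation {
    static let shared = TestObserver()

    // How many failures before we consider the run a lost cause (placeholder value).
    private let failureThreshold = 5
    private var failureCount = 0

    private override init() {
        super.init()
        // Register with XCTest so we’re told about issues as they’re recorded.
        XCTestObservationCenter.shared.addTestObserver(self)
    }

    // Called each time a test records an issue (i.e. a failure).
    func testCase(_ testCase: XCTestCase, didRecord issue: XCTIssue) {
        failureCount += 1
    }

    // True once enough failures have accumulated that the rest of the run
    // is unlikely to tell us anything new.
    var shouldSkipAllTests: Bool {
        failureCount >= failureThreshold
    }
}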

You can then put it into use in your XCUITests like so:
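(Again a sketch rather than the post’s exact code; the test class name is made up, and it reuses the shouldSkipAllTests flag from the observer above.)

import XCTest

final class MyFeatureUITests: XCTestCase {
    override func setUpWithError() throws {
        try super.setUpWithError()
        // If the run is already a lost cause, skip this test instead of running it.
        try XCTSkipIf(TestObserver.shared.shouldSkipAllTests,
                      "Skipping: too many failures earlier in this run")

        // ...the rest of your usual setup, e.g. launching the app
    }

    func testSomething() {
        // ...
    }
}

A nice side effect of touching TestObserver.shared in setUpWithError is that the first access is also what registers the observer, so there’s no separate registration step to remember.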

Initially the recorded failure count will be zero, so shouldSkipAllTests will report false and each test will run as normal. If enough tests fail it will become true, and every test from that point on will be skipped, bringing your test run to a much quicker end.

Some things to consider:

  • If your tests are set up to retry when they fail then each failure will add to the failure count, which may or may not be desirable. One way to solve this might be to count unique test failures rather than just adding one on every failure.
  • Consider whether the total number of failures is right for your use case. Perhaps you’d rather count the number of failures in a row than the total across the whole test run.
  • I used a singleton for simplicity; you might prefer or need something a little different.
  • If your tests run in parallel you’ll need to make sure TestObserver is thread-safe to avoid problems; see the sketch after this list.
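
For that last point, one way to make the counter thread-safe is to guard it with a lock. This is just a sketch of that variant, with the same placeholder threshold as before:

import Foundation
import XCTest

final class TestObserver: NSObject, XCTestObservation {
    static let shared = TestObserver()

    private let lock = NSLock()
    private let failureThreshold = 5
    private var failureCount = 0

    private override init() {
        super.init()
        XCTestObservationCenter.shared.addTestObserver(self)
    }

    func testCase(_ testCase: XCTestCase, didRecord issue: XCTIssue) {
        // Serialise access to the counter in case issues are recorded concurrently.
        lock.lock()
        defer { lock.unlock() }
        failureCount += 1
    }

    var shouldSkipAllTests: Bool {
        lock.lock()
        defer { lock.unlock() }
        return failureCount >= failureThreshold
    }
}

Bear in mind that if your parallel tests run as separate runner processes (as they do with simulator clones), they won’t share this in-memory count at all, so a lock only helps where tests share a process.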

Written by Chris Mash

iOS developer since 2012, previously console games developer at Sony and Activision. Twitter: @CJMash
