It’s a fact of life that we often have to write automated UI tests for features that have defects, that interact with third-party APIs returning the wrong responses, or that we otherwise know aren’t working right. When the team has decided that the behavior isn’t going to be fixed, what’s an automation engineer to do? Let the tests fail? Not write them at all? Champion even harder for the defects to be fixed?
Jenny Bramble suggests writing your tests to pass.
By creating tests that pass on the current expected behavior (the defect), we are in a perfect position to tell when the defect is resolved, when the API starts returning the correct information, or when any of the other error cases we’re encountering finally change. This prevents failure fatigue (from seeing a test ‘always fail’) while still drawing meaningful, actionable information from our test suite.
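To make the idea concrete, here is a minimal sketch of a test pinned to a known defect. Everything in it is hypothetical: the apply_discount function, the doubled discount, and the BUG-1234 ticket are invented for illustration, not taken from the talk.

```python
# test_discount.py -- a sketch of "writing the test to pass" on a known bug.

import pytest


def apply_discount(price: float) -> float:
    """Stand-in for production code with a known, accepted defect:
    the 10% discount is applied twice (tracked as BUG-1234, won't-fix)."""
    discounted = price * 0.9
    return discounted * 0.9  # the defect: discount applied a second time


def test_apply_discount_pins_known_defect():
    # TODO(BUG-1234): this assertion matches the *defective* behavior on
    # purpose. It passes today; the day the double discount is fixed, the
    # test fails loudly, telling us to update the expectation to 90.0.
    assert apply_discount(100.0) == pytest.approx(81.0)
```

Run it with pytest and the suite stays green while the defect is live, then goes red the moment someone ships a fix: exactly the signal we want.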
She will discuss several cases from her own experience where this method has worked, as well as how to keep the rest of the team informed through TODOs, Jira stories, and documentation. And, of course, what to do when your test finally fails!
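One lightweight way to keep those breadcrumbs discoverable is a custom marker that ties each pinned test to its ticket. This is a sketch of one possible convention: pins_defect is an invented marker name, not a built-in pytest feature, and it must be registered (e.g. under markers = in pytest.ini) to avoid unknown-mark warnings.

```python
import pytest


# Invented convention: mark every test that deliberately asserts broken
# behavior, carrying the ticket ID so the link to Jira survives refactors.
@pytest.mark.pins_defect(ticket="BUG-1234")
def test_apply_discount_pins_known_defect():
    ...  # assertion pinned to the defective behavior, as in the sketch above
```

Running pytest -m pins_defect then lists every test that is intentionally pinned to a defect, which doubles as living documentation for the rest of the team.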
You’ll walk away from this talk with a clear idea of when to design your tests to fail and how to use TODOs and other indicators to let the rest of the test team know what’s going on, as well as a frank discussion of automation as information, not as a bug-detection system!