Suppose 3 months pass and then some test fails. If the tests are split into multiple test files, it's easier to assess the failure. If you organize tests so that they run in an order that reflects their dependencies, you'll understand what's wrong more quickly. Say the first test checks that authentication is working, and the tests that follow assume it is. You know that if something is wrong with the authentication process, then all the later tests that need authentication (but don't test it themselves) will fail too.
Splitting the tests into multiple test files increases the likelihood that (3 months from now) you'll understand what the test verifies with only a peek at the test code.
Splitting tests into multiple test files allows you to reuse test logic - you can build a test library. A test that verifies that invalid authentication does not allow access has a lot in common with a test that verifies that valid authentication does allow access.
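For instance, the shared logic can live in one helper that both test files use. A minimal sketch (in Python, since the idea is language-agnostic; `attempt_login` and `assert_login` are made-up names standing in for whatever your suite actually calls):

```python
# Shared test library: both the valid-auth and invalid-auth test files
# can import assert_login() instead of duplicating the check.

def attempt_login(username, password):
    # Hypothetical stand-in for the real authentication call,
    # included only so this sketch runs on its own.
    return (username, password) == ("alice", "secret")

def assert_login(username, password, should_succeed):
    """Reusable check shared by valid- and invalid-credential tests."""
    result = attempt_login(username, password)
    assert result == should_succeed, (
        f"login for {username!r} returned {result}, expected {should_succeed}"
    )

def test_valid_auth():
    assert_login("alice", "secret", should_succeed=True)

def test_invalid_auth():
    assert_login("alice", "wrong", should_succeed=False)
```

The two tests differ only in their inputs and expected outcome; everything else comes from the shared helper.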
Splitting tests into multiple files allows you to reduce conditional logic in tests. Suppose you have 3 different user types and you want to verify that each can do what it should and can't do what it shouldn't. You might be tempted to write a for loop over the users with logic inside that decides whether to run a given check. A better solution is to test each user type individually.
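As a sketch of the difference (Python again; the user types and the `can_delete` permission check are made up for illustration):

```python
# Hypothetical permission table, included only so the sketch runs.
PERMISSIONS = {"admin": True, "editor": True, "guest": False}

def can_delete(user_type):
    return PERMISSIONS[user_type]

# Tempting but harder to debug: one loop whose body branches
# to decide which assertion applies to which user.
def test_all_users_loop():
    for user_type in PERMISSIONS:
        if user_type == "guest":
            assert not can_delete(user_type)
        else:
            assert can_delete(user_type)

# Clearer: one flat test per user type, so a failure names the culprit
# directly instead of leaving you to reconstruct the loop iteration.
def test_admin_can_delete():
    assert can_delete("admin")

def test_editor_can_delete():
    assert can_delete("editor")

def test_guest_cannot_delete():
    assert not can_delete("guest")
```

When the guest case breaks 3 months from now, the failing test's name tells you exactly which user type is affected.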
Gerard Meszaros wrote a wonderful book, "xUnit Test Patterns: Refactoring Test Code." Only after my test suite was a year or two old did I find and appreciate that book.
In reply to Re: Sharing configuration information among test files
by jimX11
in thread Sharing configuration information among test files
by talexb