in reply to How do you structure and run module test code?

I think that the Perl::Critic test, the POD coverage test and the POD test are author tests.

I long distributed these tests alongside the real functionality tests in t/, but I found that they mostly add noise to the CPAN testers reports: changes in the POD-checking modules or in Perl::Critic would generate failures that are not really related to the module's functionality or usability. So I would move these tests into a separate directory, conventionally named xt/, for the release and author tests.

Personally, I run my test suite nowadays through

prove -bl xt t

For grouping the tests, I usually aim to cover one set of functionality per test program. This is either one method (for example, string generation of URLs) or a set of related methods (for example, navigation methods in a browser, like "forward", "back", ...). I do this in the expectation that if my assumptions fail, most likely a single test file will show the failure and, ideally, already give an indication of what went wrong.
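As a sketch of what such a per-feature test file might look like, here is a hypothetical t/url-stringification.t. The My::URL package and its methods are invented for illustration and defined inline so the example is self-contained; in a real distribution the package would live in lib/ and the file would just `use` it.

```perl
#!/usr/bin/perl
# Hypothetical example: one test file covering a single area of
# functionality (URL string generation). My::URL is a stand-in
# package defined inline, not a real module.
use strict;
use warnings;
use Test::More;

{
    package My::URL;
    sub new {
        my ( $class, %args ) = @_;
        return bless {%args}, $class;
    }
    sub as_string {
        my ($self) = @_;
        return "http://$self->{host}$self->{path}";
    }
}

my $url = My::URL->new( host => 'example.com', path => '/index.html' );
isa_ok( $url, 'My::URL' );
is( $url->as_string, 'http://example.com/index.html',
    'URL stringifies as expected' );

done_testing();
```

Run individually with `prove -bl t/url-stringification.t`, so a failure immediately points at URL stringification rather than at the whole suite.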

Especially for data-driven tests, that is, tests that compare known-good results against the results of the current implementation, I like to keep these in separate programs, because such tests are usually mostly data and not code.
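A minimal sketch of that data-driven style: a table of known-good input/output pairs, looped over with one assertion per row, so the file is mostly data. The normalize() function here is a trivial stand-in invented so the example runs on its own.

```perl
#!/usr/bin/perl
# Hypothetical data-driven test: the test body is one short loop,
# and the bulk of the file is the table of known-good cases.
use strict;
use warnings;
use Test::More;

# Stand-in for the function under test; normally imported from the module.
sub normalize {
    my ($path) = @_;
    $path =~ s{//+}{/}g;                     # collapse duplicate slashes
    $path =~ s{/\z}{} unless $path eq '/';   # strip trailing slash
    return $path;
}

# Each row: [ input, expected, description ]
my @cases = (
    [ '/foo//bar/', '/foo/bar', 'duplicate and trailing slashes' ],
    [ '/',          '/',        'root path stays untouched'      ],
    [ '/a/b',       '/a/b',     'normal path passes through'     ],
);

for my $case (@cases) {
    my ( $input, $expected, $name ) = @$case;
    is( normalize($input), $expected, $name );
}

done_testing();
```

Adding a new regression case is then just one more row in @cases, with no new test code.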

I run my tests in one shell session and keep my editor in a separate window, so I cannot say much about integrating test runs into the editor.