in reply to (OT) Tracking Issues, Requirements, Tests

We have one customer on GitHub, one on GitLab, and one on Gitea. All three work quite well for dev teams of our (small) size. GitHub is out if you don't want a cloud service, but the other two can be self-hosted. None of them separate issues from requirements from tests, but they all offer numerous ways to tag and organize issues, which seems to work fine in practice. (I also don't understand why you'd want tests organized like issues... why not just create an issue for adding a test, write it and add it to the automated test suite, and then resolve the issue? If it's a documentation thing, then just use the wiki.)
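To illustrate the software side of that flow: the regression test lands in the same change that fixes the problem, joins the automated suite under t/, and the issue gets closed. A minimal sketch in Perl, with the issue number, file name, and function all made up for illustration:

    #!/usr/bin/perl
    # t/issue-123.t -- hypothetical regression test for an imaginary issue #123
    # ("summing an empty item list dies"); once it passes, the issue is resolved.
    use strict;
    use warnings;
    use Test::More tests => 2;
    use List::Util qw(sum0);    # sum0 returns 0 for an empty list

    sub sum_items { return sum0(@_) }    # stand-in for the real, fixed code

    is( sum_items(),        0,  'issue #123: empty item list yields 0 instead of dying' );
    is( sum_items(2, 3, 5), 10, 'normal case still adds up' );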

As it happens, the largest customer is the one using GitLab, and they are now considering Jira: GitLab is highly developer-focused, most of their employees are non-developers, and per-seat pricing on enterprise GitLab is high enough that they can't just give everyone an account. Meanwhile, the second-largest customer is using the free version of Gitea and has integrated it with Active Directory, so every employee automatically has an account if they need to interact with developers.
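For anyone curious what the Gitea/AD hookup looks like: Gitea can add an LDAP authentication source from the command line. A rough sketch along these lines (host, bind DN, search base, and password are placeholders, and the flags vary between Gitea versions, so check gitea admin auth add-ldap --help on yours):

    # sketch only: add Active Directory as an LDAP auth source (run as the gitea user)
    gitea admin auth add-ldap \
        --name "Active Directory" \
        --security-protocol ldaps \
        --host ad.example.com \
        --port 636 \
        --bind-dn "CN=gitea,OU=Service Accounts,DC=example,DC=com" \
        --bind-password 'change-me' \
        --user-search-base "OU=Users,DC=example,DC=com" \
        --user-filter "(&(objectClass=user)(sAMAccountName=%s))" \
        --username-attribute sAMAccountName \
        --email-attribute mail \
        --synchronize-users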

Re^2: (OT) Tracking Issues, Requirements, Tests
by afoken (Chancellor) on Apr 09, 2025 at 21:40 UTC
    why not just create an issue for adding a test, write it and add it to the automated test suite, and then resolve the issue?

    Because we track the entire product, including its documentation, electronics, mechanics, and pneumatics (if any). There are tests that cannot be automated, at least not at a sane price. One of our current projects has a container for some liquid, with a level sensor and a tilt sensor. The container has a manually operated draining valve and, for development and testing, a simple funnel on the top side. (A little bit like the fuel tank on an old motorbike, but for a completely different purpose.) I don't know the exact test spec yet, but I bet you will find instructions like these in the test plans:

    1. ...
    2. Fill ten volume units of liquid into the container
    3. Wait three time units
    4. Check if the volume displayed by the device is between 9.5 and 10.5 units.
    5. Tilt the container by five angle units.
    6. Wait one time unit
    7. Check if the device issued a tilt alarm
    8. Return the container to level
    9. Wait one time unit
    10. Check that the alarm has turned off
    11. Drain five volume units out of the container
    12. Wait one time unit
    13. Check that the device issued a leakage alarm
    14. ...

    Yes, these are very detailed baby steps of actually using (or misusing) the device under test. Using your fingers, ears, eyes, and a little bit of that grey filling material between the ears. Pretend to be a smart or dumb user.

    Yes, you could automate that, using a machine three times as complex as the device under test. Or you could just tell an intern to open test plan #47893 in a browser and follow the instructions.

    These tests are usually executed and documented ONCE for the entire device, before handing it over to the client. The test instructions and results are part of the documentation handed to the client. Maybe after a year or two, hardware and/or software are modified to better match the client's needs, the tests are modified to match the new requirements, and then those tests are run ONCE to confirm that all requirements are still fulfilled.

    So even just thinking about automating them is way too expensive. Interns are way cheaper than automating those tests. Even if they completely f*** up a device under test.

    Another part of the tests is simply reading the source code (or schematics, layouts, hardware drawings). Compiler warnings are great, and lint finds a lot of extra mess-ups, but having another developer look at the code (or the schematics and plans) can catch a lot of those nasty edge cases everybody hates to debug.

    Alexander

    --
    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
      Ok, so not software tests. Well, then that goes to my point about the wiki: each of these platforms has a version controlled wiki, and you could easily set up a standard location for all real-world testing plans to be documented. If you want them in the same repo as the code, just designate a subdirectory for them and write them in Markdown, which renders nicely and is easy to edit from the web interface.
        [...] version controlled wiki [...] for all real-world testing plans to be documented [...]

        It's not just documenting the tests. It's also about traceability. You don't write tests like a poem, "inspired" by the requirements. Each and every requirement needs to have at least one test. A "test" may be a lab test, reading the software, or reading a datasheet (e.g. if a requirement demands UL-listed parts). In the end, you end up with a lot of tests, each of which verifies at least one requirement. You have test plans, grouping tests reasonably (i.e. you don't mix lab tests with code reading). And you have test executions, documenting that a test plan was executed partially or completely, including the test results. All of that can be traced back to the requirements.
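        To make that concrete, here is a minimal sketch of the idea in Perl, with made-up requirement and test IDs (this is not how R4J/T4J store anything, it just shows the shape of the trace): every requirement has to show up on the "verified by" side of at least one test.

            #!/usr/bin/perl
            # Toy traceability check: every requirement must be verified by at
            # least one test. IDs and data are invented; a real setup would pull
            # them from the tracker instead of inlining them here.
            use strict;
            use warnings;

            my @requirements = qw(REQ-001 REQ-002 REQ-003);

            # each test names the requirement(s) it verifies
            my %tests = (
                'TEST-010 level display accuracy' => ['REQ-001'],
                'TEST-011 tilt alarm'             => ['REQ-002'],
            );

            # reverse trace: requirement -> tests covering it
            my %covered_by;
            for my $test (keys %tests) {
                push @{ $covered_by{$_} }, $test for @{ $tests{$test} };
            }

            for my $req (@requirements) {
                if ( my $t = $covered_by{$req} ) {
                    printf "%s verified by: %s\n", $req, join ', ', sort @$t;
                }
                else {
                    printf "%s has NO test -- traceability gap\n", $req;
                }
            }

        Run it and REQ-003 shows up as uncovered, which is exactly the kind of gap you want the tooling to point out before sign-off, instead of hunting for it in an Excel sheet.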

        Yes, it can be done in a wiki or in Excel. We did it in Excel. It sucked. Really. Starting with the fact that an Excel document can only be edited on a single machine at a time. A wiki would have improved that, but you would still have to do all the tracing manually. In that regard, Jira + R4J + T4J is a huge improvement. Three people executing different test plans in parallel is no problem, and checking the traces takes just a few mouse clicks instead of hours of clicking through MS Office documents. And once you reach 100% test execution, a few more mouse clicks export documents listing each test, its execution(s), and the links to the requirements. That can be added to the project documentation, making the client's auditors happy. And ours, because we could trace any single test result back to the requirements directly in the web browser. It really impresses auditors.

        (And no, we don't do that just to make auditors happy. It's a nice side effect.)

        Alexander

        --
        Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
Re^2: (OT) Tracking Issues, Requirements, Tests
by cavac (Prior) on Apr 10, 2025 at 12:54 UTC

    add it to the automated test suite

    As afoken already said, in big real-world applications you can often automate only a small fraction of the tests.

    For my point-of-sale system, about 50% of the issues in the field are usability issues in the UI. And yes, that includes things like "an old lady refusing to wear her glasses and complaining that the text is too small" (so we made the texts and buttons of the main UI mask scalable), or touch-screen gestures being misinterpreted because the user performed them in the laziest way possible...

    A lot of UI stuff is very subjective, and the results vary from customer to customer, from user to user, and from device to device.

    Yes, one may (and should) write tests for some of those cases, when it is the best financial choice. But in many cases, a trained eye can catch in seconds or minutes errors for which you would have to spend weeks writing test software. And you would need to keep spending weeks updating that test software for every small UI change, instead of going through a 10-minute checklist manually...

    PerlMonks XP is useless? Not anymore: XPD - Do more with your PerlMonks XP
    Also check out my sister's artwork and my weekly webcomics