This course is under active development.
One way we can alleviate the Increasing Pressure on Software Teams is to build our systems so that they are testable, which enables us to leverage testing automation in our development processes.
Manual testing is expensive: people must spend time running tests instead of performing other valuable work, and the process itself is slow. Because of this expense, testing is often neglected until the last minute, and issues are caught much later than they should be (or not caught at all).
To reduce the burden on our teams, the headcount required for testing, and the time it takes to catch a problem, we can rely on testing automation. Automation ensures that tests are run for every build without any effort on our part.
Building testable systems and relying on testing automation also makes it easier to change our software with confidence. Automated tests are an important part of how we Leverage tooling to catch issues as early as possible.
Testing Problems
- Manual testing is an expensive and time-consuming process
- Manual testing is an error-prone strategy
- When systems are difficult to test, teams will skip testing
- Dependencies make testing difficult
Reasons people give for not testing:
- A common complaint is that writing tests takes too much time
- Dependencies make testing difficult
Benefits of Automated Testing
- Should I add another point about porting the software?
- Automated tests help us confidently change software
- Automated tests save time
- Automated tests help us resolve some issues faster
- Automated tests help us catch errors earlier
- Automated tests help us reduce the headcount required for testing
- Automated tests are a form of documentation (but Automated tests are not necessarily a useful form of documentation)
Testability Approaches
- Tests written against abstract interfaces are portable and reusable (see: Embedded systems software benefits from the following types of abstract interfaces)
- Start implementing by writing tests against the interface
- Use the interface to determine what to test
- Use events, messages, and queues to reduce coupling
- Prefer to implement drivers as event-driven state machines
- Event-driven state machines improve testability
- Test-driven development can improve a system’s testability (at the very least, it gives us a prompt for writing tests and encourages us to write and test our software in smaller chunks)
Dealing with Existing Code
- Testing using recorded data
- Testing subsystems
- Running tests on the hardware
- The FLEX paging example comes to mind – we can often refactor and extract pieces of our software that we can test in a standalone manner, even if we can’t test an entire module that way
Common challenges
- Testing implementation details makes our tests brittle
- Brittle tests indicate you are testing implementation details
- Brittle tests should be deleted and redesigned
- Non-deterministic tests lead to intermittent test failures
Demonstrations
- Bringing a Time-of-Flight sensor under test using recorded data
Exercises
- Writing tests for a hardware-independent module
- Writing tests for a hardware-dependent module
- Running off-target tests on the hardware
- Need some kind of recorded data test – provide the recorded data, use it to test the driver
Model Software
- testable systems: uses pre-recorded data to eliminate the driver code – we recorded what came out of the driver and pass the byte array to the decoder, which gives us automated decoder tests and end-to-end testing without hardware
- Tests for the decoder can be run separately from hardware
- Package a repo (as a git bundle, so we can keep it private) for our students; they can run the tests on their own machines
- For testability, note that importing recorded data can be a great way to bring code under test when it wasn’t designed for testability – we just need to look out for seams
Incorporate:
- Create a Compelling Vision for our testing automation, because Vision enables good following
Conclusion
Mindsets:
- Tests should be peer reviewed
- If it’s not tested, it’s not done
- Think of yourself as an interface designer (ties into ways to accomplish; we want to write tests against the interface, not the implementation)