Embedded Systems Testing Resources

Embedded Artistry was founded with the goal of creating reliable, safe, and well-tested embedded systems. The sad fact is that most embedded software that we've encountered is low-quality and untested. Perhaps this holds true for most of the software industry - the continual procession of hacks, flaws, and errors is discouraging.

We are focused on ways that we can all improve at our craft. Testing our systems, especially in an automated way, is a direct approach to identifying bugs early and improving our software quality.

We decided to make our list of testing resources publicly available. If you have any favorites which aren't listed here, leave us a comment!

Table of Contents:

  1. Unit testing
  2. Debugger-based testing using metal.test
  3. Phil Koopman's lectures on embedded software quality and testing

Unit Testing

Most developers know they should be writing unit tests as they develop new features. If you're unfamiliar with the concept, unit testing focuses on testing the finest granularity of our software, the functions and modules, in an independent and automated manner. We want to ensure that the small pieces operate correctly before we combine them into larger cooperating modules. By testing at the finest granularity, we reduce the total number of test combinations that are needed to cover all possible logic states. By testing in an automated manner, we can ensure that any changes we make don't introduce unintended errors.

"A key advantage of well tested code is the ability to perform random acts of kindness to it. Tending to your code like a garden. Small improvements add up and compound. Without tests, it's hard to be confident in even seemingly inconsequential changes." -Antonio Cangiano (@acangiano)

Unfortunately, at Embedded Artistry we’ve only worked on a handful of projects that perform unit testing. The primary reason for this is that many developers simply don’t know where to start when it comes to writing unit tests for embedded systems. The task feels so daunting and the schedule pressures are so strong that they tend to avoid unit testing altogether.

James Grenning and TDD

James Grenning has put a tremendous amount of effort into teaching embedded systems developers how to adopt TDD. He published an embedded systems classic, _Test-Driven Development for Embedded C_, and regularly conducts TDD training seminars.

James has written extensively about TDD on his blog. Here are some of our favorite posts:

You can also watch these talks for an introduction into the how and why of TDD:

I took James Grenning's Remote TDD training webinar and wrote about my experience on the blog.

If you're interested in taking a training class, you can learn more on James's website.

Matt Chernosky and Electron Vector

Matt Chernosky, who runs the Electron Vector blog, is an invaluable resource for embedded unit testing and TDD. If you are having a hard time getting started with unit testing and TDD, Matt's articles provide a straightforward and accessible approach.

Here are some of our favorite articles from Matt's blog:

Matt published a free guide for using Ceedling for TDD. If you are experienced with TDD but are looking to improve your Ceedling skills, check out his Ceedling Field Manual.

Throw the Switch

Throw the Switch created the Ceedling, Unity, and CMock unit testing suite. This is the trio of tools that Matt Chernosky uses and writes about.

Aside from creating testing tools, Throw the Switch maintains a library of test-related articles and a course called Unit Testing & Other Embedded Software Catalysts. Additional courses related to unit testing for embedded systems are being developed.

Unit Testing Frameworks

Listed below are frequently recommended unit testing frameworks for C and C++. There are more unit testing frameworks in existence than we can ever review, so again this list is not exhaustive. If nothing jumps out at you in this list, keep looking to find one that fits your team’s development style.

We already mentioned the Throw the Switch C unit testing frameworks: Unity, CMock, and Ceedling.

Cmocka is the C unit testing framework we started with. Cmocka ships with built-in mock object support and operates similarly to Unity & CMock. The framework is built with C standard library functions and works well for embedded systems testing.

Catch appears to be the most popular C++ unit testing framework. Catch is a header-only library which supports C++11, C++14, and C++17.

Doctest is the unit test framework we use for EA’s C++ embedded framework project. Doctest is similar to Catch and is also header-only. Our favorite attribute of Doctest is that it keeps the test code alongside the implementation code. Doctest also enables you to write tests in headers, which Catch does not support.

GoogleTest is Google's C++ unit testing framework. GoogleTest is one of the few C++ frameworks with built-in mocking support.

CppUTest is a C++ test suite that was designed with embedded developers in mind. This framework is featured in James Grenning's book _Test-Driven Development for Embedded C_. C++ features within the framework are kept to a minimum, enabling it to be used for both C and C++ unit testing. CppUTest has built-in support for mocking.

If you're interested in mock object support for C++, check out GoogleMock, Trompeloeil, and FakeIt. Each of these mocking frameworks can be integrated with the unit test frameworks mentioned above.

Other Resources

If you're brand-new to TDD, read through this walkthrough to get a sense of the approach:

Steve Branam, who writes at Flink and Blink, has written a few posts on testing:

Steve recommended Jeff Langr's Modern C++ Programming with Test-Driven Development: Code Better, Sleep Better as another resource for embedded C++ developers.

Embedded Debugger-Based Testing

We always want to run as many tests as possible on a host PC when unit testing embedded systems code. However, we can’t test every aspect of our system on a host machine. Before shipping the final system, we need to evaluate the target compiler, issues which only present themselves on the target (e.g. endianness, timing), and the actual target hardware and hardware interfaces.

Metal.test is a framework which can help us with on-target testing. This project is maintained by Klemens Morgenstern, an independent contractor and consultant. Metal.test enables automated execution of remote code using debugger hardware, such as a J-Link or ST-Link. The project currently supports gdb, and lldb support is planned for a future release.

Metal.test features:

  • I/O Forwarding
  • Code Coverage
  • Unit testing
  • Call tracing
  • Profiling
  • Function Stubbing at link-time
  • Extensive documentation (in the GitHub repository Wiki)

Metal.test also includes a plugin system. While not essential, plugins let developers extend the tool to cover any use case a debugger can handle.

Klemens is looking for feedback on metal.test. Don't hesitate to reach out with questions, issues, or other feedback.

Phil Koopman on Testing and Software Quality

Phil Koopman is a professor at Carnegie Mellon University. He also runs the Better Embedded Software blog, which is a must-read for embedded developers.

Phil has produced an immense and invaluable body of work, much of it focused on embedded software quality. The lecture notes for his embedded systems courses are available online, and he regularly posts lecture videos on his YouTube channel.

Here's a selection of his lectures that are related to testing and software quality:

Did We Miss Anything?

If you have a testing resource that you love, leave us a comment below and we will add it to the list!

A Look at My Portable Embedded Toolkit

Embedded systems developers rely on a variety of tools: debug adapters, power supplies, multimeters, oscilloscopes, logic analyzers, spectrum analyzers, and more.

Much of the equipment we use lives in our offices or labs, since it's too bulky to move around. But for engineers who travel frequently, it's quite helpful to have a portable toolkit. You never know when you'll be stuck in an emergency debugging situation, and having familiar tools on hand is a blessing.

If you're an engineer who travels frequently, or if you're simply looking for useful tools, I hope you can find inspiration in my kit.

My Portable Embedded Toolkit

I've slowly built my portable embedded toolkit over the past ten years, and I've managed to pack a lot of debugging power into a small load. My kit is always on hand when I'm visiting a client, and it's traveled with me to multiple manufacturing builds in China.

My kit consists of the following:

  • Digital multimeter
  • Aardvark I2C/SPI Host Adapter
  • Saleae Logic Analyzer
  • TIAO USB Multi-Protocol Adapter
  • USB Hub
  • A grab bag of wires and clamps
  • Spare jumpers

Most of the kit packs down into a first-edition Saleae Logic 8 case, which was made with a much sturdier shell. I carry the DMM and Aardvark adapter separately in my bag.

Let's take a deeper look at each piece of my kit and the roles they serve.

The major pieces of my embedded toolkit, packed for transport.

The unpacked contents of my Saleae case.

Digital Multimeter

Digital multimeters (DMMs) are an essential tool for anyone working with electronics. I regularly need to measure voltage/current/resistance/capacitance and check continuity between signals.

My portable DMM of choice is the Mastech MS8288, which costs around 30 USD. I purchased my multimeter ten years ago and have yet to find a single cause for complaint.

For low-power tasks, the Mastech MS8288 performs admirably and produces accurate measurements. Once voltages and currents start to rise, you’ll notice inaccuracy (I've seen 3% error while measuring a 48V power supply). With that in mind, this isn't a DMM you'd use for tuning your power settings. For tasks which require precise measurements, you'll need to turn to a higher-precision DMM.

When selecting your own multimeter, make sure it has the following features:

  • Measurement Capabilities:

    • DC voltage 

    • AC voltage

    • Current

    • Resistance

    • Capacitance

  • Continuity check with audible beep

  • Selectable measurement range

  • Kickstand

  • Screen backlight

Everybody needs a multimeter, but you don’t need the most expensive one available.

Aardvark I2C/SPI Host Adapter

The Aardvark I2C/SPI Host Adapter is the newest addition to my toolkit. The Aardvark has been tremendously helpful in tracking down I2C/SPI problems and validating I2C/SPI interfaces. The adapter can operate as both a master and slave, and you can script sequences of commands to send to the device.

Total Phase also supplies libraries that you can use to interact with the adapter programmatically. I’ve written I2C and SPI drivers for the Aardvark adapter, which enables me to write device drivers from the comfort of my host machine. Once the drivers are working, I can quickly port them to the target platform.

The newest addition to the toolkit. Useful for debugging I2C/SPI problems and for writing drivers on your host machine.

Saleae Logic Analyzer

When I first started my career, logic analyzers were giant pieces of equipment which lived permanently in the lab. You would spend hours carefully getting set up and configuring the device, and you were chained to the analyzer until you were finished.

When Saleae released their amazingly compact USB logic analyzer, I immediately jumped on board. The Saleae Logic 8 is my favorite tool in my kit. Saleae’s logic analyzer software supports a variety of trigger conditions and data resolutions, and it can also decode common communication protocols such as JTAG, SPI, I2C, CAN, and UART.

I'm still using my first edition Saleae Logic 8, but they’ve since overhauled their design and released both 4-channel and 16-channel versions.

I think that eight channels is the sweet spot for a portable analyzer. I’ve rarely needed to monitor more than eight channels at once, and in those rare cases I can usually work through the signal groups in stages. I also find that I regularly use more than four channels, especially when I need to analyze both control signals and a bus (e.g. SPI).

The Saleae Logic 8 is my favorite tool in the toolkit.

TIAO USB Multi-Protocol Adapter

The TIAO USB Multi-Protocol Adapter (TUMPA) has been another invaluable tool in my kit.

TUMPA is built around FTDI’s FT2232H chip. Between OpenOCD and FTDI libraries, you can use the TUMPA as an adapter for SWD, JTAG, SPI, I2C, UART, and digital I/O. The board also sports on-board voltage translation, which can be enabled/disabled through software or with a jumper.

TUMPA allows me to use a single debug adapter across most of my projects. If you work on a variety of projects, having a single debugging adapter can drastically simplify your development environment.

The TUMPA board enables me to carry a single debug adapter for a variety of scenarios.

USB Hub

My laptop doesn't have enough ports to support all of my debugging devices, so I’m always carrying around a small USB hub.

I use Sabrent’s 4-port USB Hub without an external power supply, which I love for its small size and toggle buttons. If you’re working with high-current devices, I recommend purchasing the 4-port hub with a 5V power adapter.

You can use any USB hub you like, but I highly recommend picking one with toggle buttons. Being able to selectively enable and disable ports is helpful when working with embedded devices: I frequently use the buttons to cut power to a device, reset it, or force USB disconnect/connect conditions.

All these USB devices mean that I need to carry a hub in my kit.

Wire Grab-Bag

All of these debug tools need to be hooked up to the target system, so I keep a mixed bag of wires and clips in my kit. I have a mix of male-male, female-male, and female-female jumper wires to handle any manner of connector. I also keep a few pieces of scrap wire for emergency soldering needs.

The clips come with the Saleae logic analyzers, but they're generally useful for clipping onto pins and boards. You can find all manner of useful clips by searching for “test probe hook clip”.

You can never have enough wires.

Spare Jumpers

Because I keep finding myself in situations where I don’t have enough jumpers, I decided to keep a little baggie of 2.54mm standard jumpers in my kit. These come in handy when you lose a jumper, or your local EE can’t seem to find enough for that new dev board.

There are never enough jumpers when you need them.

What’s in your kit?

I’d love to hear from my readers about the tools you frequently carry around. Leave me a note in the comments!

What I Learned from James Grenning's Remote TDD Course

Test Driven Development (TDD) is an important software development practice which is typically foreign to embedded teams. James Grenning has put a tremendous amount of effort into teaching embedded systems developers how to adopt TDD. He published an embedded systems classic, Test-Driven Development for Embedded C, and regularly conducts TDD training seminars.

Admittedly, TDD is one of those concepts that I've heard about but never actually got around to studying and implementing. After seeing a tweet about a remote TDD training class, I decided to sign up and see if it was really all it's cracked up to be.

If you're looking to grow as an embedded developer, I recommend taking a TDD class with James - it has transformed my development approach. TDD helps us to decouple our software from the underlying hardware and OS, as well as to develop and test embedded software on our host machines. We've all felt the pain of the "Target Hardware Bottleneck" - this class shows you how to avoid the pain and to adapt to sudden requirements changes.

Aside from getting hands-on experience with TDD, I learned many valuable lessons from James's course. Below I will recount my experience with James's remote TDD training, review my lessons learned, and share my thoughts on taking the course vs reading the book.

Table of Contents:

  1. Why I Took the Course
  2. Course Structure
  3. Lessons Learned
  4. Course vs Book
  5. In Conclusion
  6. Further Reading

Why I Took the Course

I've come to believe that the common approach for developing software, especially embedded systems software, must be dramatically overhauled. I see far too many projects which skimp out on design, testing, code reviews, continuous integration, or other helpful practices which can improve code quality and keep our projects on schedule.

I've also noticed that I spend too much time with "debugging later programming", as James calls it. I write a bunch of code, get it to compile, and then deploy it and test on the target. The debugging time often ends up being much longer than the coding time - there must be a better way to approach development. Furthermore, why do I need to flash to the target to do most of my testing? Can't I build my programs in such a way that I can test large pieces of them on my host machine, where I have an extensive suite of debugging tools on hand?

When I was a junior embedded engineer, I believed other developers when they told me that unit tests weren’t useful or feasible for embedded systems due to our dependence on hardware. After studying architecture, design principles, and experiencing sufficient pain on multiple projects, I realize that there is immense value in changing our current approach to building and testing embedded systems. James’s course is the perfect way to dive head-first into TDD and unit testing.

Course Structure

I signed up for the remote training course, which consists of three five-hour days of training. The training is conducted with a suite of web-based tools:

  • Zoom meeting for video/audio
  • CyberDojo for programming exercises
  • A central course website with links to resources & exercises
  • A "question board" where we could post questions as we thought of them without interrupting the flow of the class

The course follows this pattern each day:

  • Discuss theory
  • James performs a TDD demo
  • Class members perform a hands-on programming exercise (~2 hours each day) while receiving live feedback from James
  • James answers questions, reflects on the exercise, and discusses more theory

We used the CppUTest framework throughout the training, which is the same test framework featured in his book. I had not used CppUTest before the course, so it was great to get experience with a new test framework.

Day 1

Day 1 started with an introduction to TDD. James opened with a discussion about the impact of the typical Debug-Later programming style and the value propositions of TDD. He introduced us to the TDD cycle:

  • Write a test
  • Watch it not build
  • Make it build, but fail
  • Make it pass
  • Refactor (clean up any mess)
  • Repeat cycle until work is finished

The cycle is directly related to Bob Martin’s TDD rules which we continually referred to throughout the course:

  • Do not write any production code unless it is to make a failing unit test pass
  • Do not write any more of a unit test than is sufficient to fail; and compilation failures are failures
  • Do not write any more production code than is sufficient to pass the one failing unit test

We also discussed a TDD-based development cycle for embedded systems, which involves writing code on the host machine first, then incrementally working up to running the code on the target hardware. This development cycle enables embedded software teams to prototype, create modules, and test driver logic before target hardware is available.

We followed the TDD cycle with “design for testability” concepts, which are the same general design concepts we should already be applying:

  • Data hiding
  • Implementation hiding
  • Single responsibility principle
  • Separation of concerns
  • Dependency inversion (depend on interfaces not implementations)

After this introduction to TDD, we dove right in with live programming exercises. James performed a demo where he used TDD to create and test a circular buffer library in C. After showing us the TDD approach, he set us loose to write our own circular buffer library. The exercise took 2 hours, and James gave each of us direct feedback as we worked through the exercise.

We ended the day with a discussion of the next day’s exercise, which involved creating a light scheduler for a home automation system. He gave us optional homework to write a “spy” for a light controller, which took me around 15 minutes to complete.

Day 2

Day 2 started with a discussion of “spies”, “fakes”, and strategies for testing modules in the middle of a hierarchy. We reviewed TDD strategies, focusing on how to write a minimal number of tests and how each new test should encourage us to write new module code.

We quickly moved to the programming exercise, which involved TDD for a Light Scheduler. The tests were written using the Light Controller Spy that we created prior to class as homework, and demonstrated how to apply spies in our testing process. As with the circular buffer exercise, James monitored our progress and offered live feedback while we worked.

After the exercise was completed, James performed a refactoring demo, showing how we can use our unit tests to maintain confidence while performing major changes to our code base. We also discussed code coverage tools and ran gcov on our unit tests.

At the end of class, James gave a brief introduction to CppUTest’s mocking support. Our homework on day 2 was to play around with the mocking functions to get a feel for how the framework functions and how expectations can be used during testing.

Day 3

Day three opened with a discussion of test doubles, mocking, and run-time substitution. After a brief introduction, we started the day’s programming exercise: writing and testing a flash driver off-target using mocking.

After finishing the exercises, we recapped the lessons we had learned up until that point and reviewed the value propositions for TDD.

After the review, we moved into a discussion on refactoring. James covered general refactoring theory, code smells, design principles, and refactoring strategies. He introduced a method for refactoring legacy code (“Crash to Pass”), and pointed us to resources to help us test and refactor our existing code.

After the refactoring discussion, we had one last general Q&A session and then wrapped up the training course.

Lessons Learned

There were more lessons packed into the workshop than I can reasonably relate here. Many of them are simple one-offs to guide you as you develop your TDD skills:

  • Use a test harness that will automatically find your test cases and run them, saving you the headache of manual registration
  • Write the minimal amount of code you need to exercise your program paths (aka "don't write too many tests")
  • Ruthlessly refactor your tests whenever they are passing to keep the tests maintained and understandable
  • Even though we are incrementally building our modules, we want to try to invent the full parameter list up-front (TDD will show you exactly how painful it is to update APIs)
  • Mocking can be a refactoring code smell, as it identifies coupling within your system

Aside from these practical tidbits, here are some of the deeper lessons learned during the course:

Feedback Loop Design: Work in Small Steps

In system design, I've been struck by the importance of feedback loops. Donella Meadows frequently touches on their importance and impact on undesirable behavior:

Delays in feedback loops are critical determinants of system behavior. They are common causes of oscillations. If you’re trying to adjust a system state to your goal, but you only receive delayed information about what the system state is, you will overshoot and undershoot. Same if your information is timely, but your response isn’t.

One of the key challenges with building embedded products is that there are numerous delayed feedback loops in play. Firmware engineers are writing software before hardware is available, hardware issues aren't identified until it's too late for another spin because the software wasn't ready yet, critical bugs aren't discovered until integration or acceptance testing starts, and the list goes on.

Shortening our firmware engineers' feedback cycles can dramatically impact a program lifecycle. With TDD, developers get immediate feedback when errors are introduced. We can correct these errors right away, one at a time, and stay on track.

TDD also helps us keep our modules decoupled and testable, allowing firmware to be increasingly developed and tested on a host machine. We can make full use of debugging tools and avoid hundreds of time-consuming flashing steps. We can also utilize mocking, spies, and fakes to develop interfaces, modules, and higher-level business logic before hardware is available.

If you're not getting the system behavior that you want, you likely need to adjust your feedback loops and feedback delays. TDD is one approach to improving feedback loops for embedded systems development.

TDD Feels Slower, But I Programmed Faster

TDD certainly feels like it is more work and that you're moving slower. However, this was merely an illusion in my experience. By working in small steps and addressing problems as they arise, we can stay engaged, move forward continually, and avoid many of those intense debugging sessions.

Let's consider the circular buffer exercise, which I finished in 1 hour and 20 minutes. One of the most popular articles on this website is Creating a Circular Buffer in C and C++. It took me at least 4 hours to get my libraries implemented correctly thanks to debugging tricky logic errors. That's quite a difference!

You might say that I had an advantage in the exercise, having written such a library before. Sadly, I will admit that I made the same mistakes that I struggled with in my initial implementation - some logic errors are just easy to make. However, with the TDD approach I noticed the flaws immediately, rather than having them pile up at the end.

As the popular military maxim goes, "Slow is smooth, smooth is fast". James repeatedly emphasizes this point with his own motto: "Slow down to go fast".

Trust the Process

If you read my website, you know that I am a great believer in processes. We can turn much of our operation over to autopilot, allowing us to allocate our brain’s valuable critical thinking resources to the problem at hand.

When I’m deep in thought, it’s maddening to be interrupted, as the house of cards in my mind comes tumbling down. I always took this as “Just The Way It Is”, but TDD showed me that it doesn’t have to be that way. By working in small steps through a defined process, we know exactly where to jump back in if we get interrupted. We are kept from being overwhelmed because we know what the next step is. We can also enter a state of flow more easily: small steps and continual progress keep us moving forward and feeling productive.

Having a defined process also helps when you are stuck. You’re never really wondering what to do next - simply move on to the next step in the process.

Keep a Test List

There's no need to worry about writing all of your unit tests at once. Maintain a test list for each module that describes any work which still needs to be completed. The best place to store this list is inside of the test source code itself, e.g. as a block comment at the top of the file.

If you think of a new test to write, make a note. Then you never need to worry about remembering all of the tests.

TDD is Not the Holy Grail

James emphasizes throughout the course that while TDD reduces the errors that are introduced into our programs, TDD is not sufficient for proving that our programs are bug-free. The best that TDD can do for us is to show us that our code is doing what we think it should do. This does not equate to correctness - our understanding may still be incomplete or incorrect.

TDD only helps us ensure that our code is working on purpose. You still need design and code reviews, integration testing, static analysis, and other helpful developmental processes.

Course vs Book

If you have James's Test Driven Development for Embedded C book, you may be wondering whether the course is still worth taking. I respond with an emphatic yes. I recommend the course in conjunction with the book for one simple reason: the course requires you to actually program in the TDD style. Practice makes perfect.


During the course, you'll work through multiple hands-on programming exercises and receive direct feedback. Whenever I skipped steps or started writing code without tests, James noticed and helped me get back on track. Without this feedback, I would not have been successful at noticing and breaking my existing development habits.

When reading a book, we commonly acquire knowledge but never take the time to apply it. By getting a chance to try out the method for yourself, you're more likely to feel the benefits and adopt the process. Once you have experience with TDD, the concepts in the book can be easily connected to real experiences. You will be much more likely to make connections in your mind and apply the concepts in practice.

There's one more reason I recommend the course in addition to the book: when you are a beginner, you have many questions. It's hard to get help if you don't know what, how, or where to ask questions. James is willing to answer your testing questions and provides you with plenty of resources and forums for finding answers. Even better, once the course is finished, you have access to email support from James. As long as your questions aren't easily Google-able, you will always have a resource to help guide you.

In Conclusion

I really enjoyed James's remote TDD training and think it can help developers at any skill level (in fact, most of the attendees were experienced programmers). The hands-on programming exercises were unexpected and enjoyable. The direct and immediate feedback from James was an invaluable aid for adopting the process and correcting our default behaviors.

If you're interested in taking the TDD class, you can find the course options and schedules on James's website.

I adopted TDD immediately after completing the course. I spent a day setting up my development environment so I can compile and run tests with a keystroke, just like we did in CyberDojo. The process is addictive - writing new tests and getting them to pass is a continual reward cycle that keeps me focused on programming for much longer periods of time.

I've already found myself refactoring and updating my code with increasing confidence, since I have tests in place to identify any glaring errors which are introduced.

A key advantage of well tested code is the ability to perform random acts of kindness to it. Tending to your code like a garden. Small improvements add up and compound. Without tests, it's hard to be confident in even seemingly inconsequential changes.
--Antonio Cangiano (@acangiano)

Further Reading

James's book, Test Driven Development for Embedded C is an excellent starting point for TDD, especially for embedded systems developers. Again, I recommend this book in conjunction with the online training course. You can find the courses and schedules on James's website.

These talks by James provide an introduction into the how and why of TDD:

James has written extensively about TDD on his blog. Here are some of my favorite posts:

Other TDD-related links:
