Development Process

What I Learned from James Grenning's Remote TDD Course

Test Driven Development (TDD) is an important software development practice which is typically foreign to embedded teams. James Grenning has put a tremendous amount of effort into teaching embedded systems developers how to adopt TDD. He published an embedded systems classic, Test-Driven Development for Embedded C, and regularly conducts TDD training seminars.

Admittedly, TDD is one of those concepts that I've heard about but never actually got around to studying and implementing. After seeing a tweet about a remote TDD training class, I decided to sign up and see if it was really all it's cracked up to be.

If you're looking to grow as an embedded developer, I recommend taking a TDD class with James - it has transformed my development approach. TDD helps us to decouple our software from the underlying hardware and OS, as well as to develop and test embedded software on our host machines. We've all felt the pain of the "Target Hardware Bottleneck" - this class shows you how to avoid the pain and to adapt to sudden requirements changes.

Aside from getting hands-on experience with TDD, I learned many valuable lessons from James's course. Below I will recount my experience with James's remote TDD training, review my lessons learned, and share my thoughts on taking the course vs reading the book.

Table of Contents:

  1. Why I Took the Course
  2. Course Structure
  3. Lessons Learned
  4. Course vs Book
  5. In Conclusion
  6. Further Reading

Why I Took the Course

I've come to believe that the common approach for developing software, especially embedded systems software, must be dramatically overhauled. I see far too many projects which skimp on design, testing, code reviews, continuous integration, or other helpful practices which can improve code quality and keep our projects on schedule.

I've also noticed that I spend too much time with "debugging later programming", as James calls it. I write a bunch of code, get it to compile, and then deploy it and test on the target. The debugging time often ends up being much longer than the coding time - there must be a better way to approach development. Furthermore, why do I need to flash to the target to do most of my testing? Can't I build my programs in such a way that I can test large pieces of them on my host machine, where I have an extensive suite of debugging tools on hand?

When I was a junior embedded engineer, I believed other developers when they told me that unit tests weren’t useful or feasible for embedded systems due to our dependence on hardware. After studying architecture, design principles, and experiencing sufficient pain on multiple projects, I realize that there is immense value in changing our current approach to building and testing embedded systems. James’s course is the perfect way to dive head-first into TDD and unit testing.

Course Structure

I signed up for the remote training course, which consists of three five-hour days of training. The training is conducted with a suite of web-based tools:

  • Zoom meeting for video/audio
  • CyberDojo for programming exercises
  • A central course website with links to resources & exercises
  • A "question board" where we could post questions as we thought of them without interrupting the flow of the class

The course follows this pattern each day:

  • Discuss theory
  • James performs a TDD demo
  • Class members work through a hands-on programming exercise (~2 hours each day) while receiving live feedback from James
  • James answers questions, reflects on the exercise, and discusses more theory

We used the CppUTest framework throughout the training, which is the same test framework featured in his book. I had not used CppUTest before the course, so it was great to get experience with a new test framework.

Day 1

Day 1 started with an introduction to TDD. James opened with a discussion about the impact of the typical Debug-Later programming style and the value propositions of TDD. He introduced us to the TDD cycle:

  • Write a test
  • Watch it not build
  • Make it build, but fail
  • Make it pass
  • Refactor (clean up any mess)
  • Repeat cycle until work is finished

The cycle is directly related to Bob Martin’s TDD rules which we continually referred to throughout the course:

  • Do not write any production code unless it is to make a failing unit test pass
  • Do not write any more of a unit test than is sufficient to fail; and compilation failures are failures
  • Do not write any more production code than is sufficient to pass the one failing unit test

We also discussed a TDD-based development cycle for embedded systems, which involves writing code on the host machine first, then incrementally working up to running the code on the target hardware. This development cycle enables embedded software teams to prototype, create modules, and test driver logic before target hardware is available.

We followed the TDD cycle with “design for testability” concepts, which are the same general design concepts we should already be applying:

  • Data hiding
  • Implementation hiding
  • Single responsibility principle
  • Separation of concerns
  • Dependency inversion (depend on interfaces not implementations)

After this introduction to TDD, we dove right in with live programming exercises. James performed a demo where he used TDD to create and test a circular buffer library in C. After showing us the TDD approach, he set us loose to write our own circular buffer library. The exercise took 2 hours, and James gave each of us direct feedback as we worked through the exercise.

We ended the day with a discussion of the next day’s exercise, which involved creating a light scheduler for a home automation system. He gave us optional homework to write a “spy” for a light controller, which took me around 15 minutes to complete.

Day 2

Day 2 started with a discussion of “spies”, “fakes”, and strategies for testing modules in the middle of a hierarchy. We reviewed TDD strategies, focusing on how to write a minimal number of tests and how each new test should encourage us to write new module code.

We quickly moved to the programming exercise, which involved TDD for a Light Scheduler. The tests were written using the Light Controller Spy that we created prior to class as homework, and demonstrated how to apply spies in our testing process. As with the circular buffer exercise, James monitored our progress and offered live feedback while we worked.

After the exercise was completed, James performed a refactoring demo, showing how we can use our unit tests to maintain confidence while performing major changes to our code base. We also discussed code coverage tools and ran gcov on our unit tests.

At the end of class, James gave a brief introduction to CppUTest’s mocking support. Our homework on day 2 was to play around with the mocking functions to get a feel for how the framework functions and how expectations can be used during testing.

Day 3

Day 3 opened with a discussion of test doubles, mocking, and run-time substitution. After a brief introduction, we started the day’s programming exercise: writing and testing a flash driver off-target using mocking.

After finishing the exercises, we recapped the lessons we had learned up until that point and reviewed the value propositions for TDD.

After the review, we moved into a discussion on refactoring. James covered general refactoring theory, code smells, design principles, and refactoring strategies. He introduced a method for refactoring legacy code (“Crash to Pass”), and pointed us to resources to help us test and refactor our existing code.

After the refactoring discussion, we had one last general Q&A session and then wrapped up the training course.

Lessons Learned

There were more lessons packed into the workshop than I can reasonably relate here. Many of them are simple one-offs to guide you as you develop your TDD skills:

  • Use a test harness that will automatically find your test cases and run them, saving you the headache of manual registration
  • Write the minimal amount of code you need to exercise your program paths (aka "don't write too many tests")
  • Ruthlessly refactor your tests whenever they are passing to keep the tests maintained and understandable
  • Even though we are incrementally building our modules, we want to try to invent the full parameter list up-front (TDD will show you exactly how painful it is to update APIs)
  • Mocking can be a refactoring code smell, as it identifies coupling within your system

Aside from these practical tidbits, here are some of the deeper lessons learned during the course:

Feedback Loop Design: Work in Small Steps

In system design, I've been struck by the importance of feedback loops. Donella Meadows frequently touches on their importance and on how feedback delays produce undesirable behavior:

Delays in feedback loops are critical determinants of system behavior. They are common causes of oscillations. If you’re trying to adjust a system state to your goal, but you only receive delayed information about what the system state is, you will overshoot and undershoot. Same if your information is timely, but your response isn’t.

One of the key challenges with building embedded products is that there are numerous delayed feedback loops in play. Firmware engineers are writing software before hardware is available, hardware issues aren't identified until it's too late for another spin because the software wasn't ready yet, critical bugs aren't discovered until integration or acceptance testing starts, and the list goes on.

Shortening our firmware engineers' feedback cycles can dramatically impact a program lifecycle. With TDD, developers get immediate feedback when errors are introduced. We can correct these errors right away, one at a time, and stay on track.

TDD also helps us keep our modules decoupled and testable, allowing firmware to be increasingly developed and tested on a host machine. We can make full use of debugging tools and avoid hundreds of time-consuming flashing steps. We can also utilize mocking, spies, and fakes to develop interfaces, modules, and higher-level business logic before hardware is available.

If you're not getting the system behavior that you want, you likely need to adjust your feedback loops and feedback delays. TDD is one approach to improving feedback loops for embedded systems development.

TDD Feels Slower, But I Programmed Faster

TDD certainly feels like it is more work and that you're moving slower. However, this was merely an illusion in my experience. By working in small steps and addressing problems as they arise, we can stay engaged, move forward continually, and avoid many of those intense debugging sessions.

Let's consider the circular buffer exercise, which I finished in 1 hour and 20 minutes. One of the most popular articles on this website is Creating a Circular Buffer in C and C++. It took me at least 4 hours to get my libraries implemented correctly thanks to debugging tricky logic errors. That's quite a difference!

You might say that I had an advantage in the exercise, having written such a library before. Sadly, I will admit that I made the same mistakes that I struggled with in my initial implementation - some logic errors are just easy to make. However, with the TDD approach I noticed the flaws immediately, rather than having them pile up at the end.

As the popular military maxim goes, "Slow is smooth, smooth is fast". James repeatedly emphasizes this point with his own motto: "Slow down to go fast".

Trust the Process

If you read my website, you know that I am a great believer in processes. We can turn much of our operation over to autopilot, allowing us to allocate our brain’s valuable critical thinking resources to the problem at hand.

When I’m deep in thought, it’s maddening to be interrupted, as the house of cards in my mind comes tumbling down. I always took this as “Just The Way It Is”, but TDD showed me that it doesn’t have to be that way. By working in small steps through a defined process, we know exactly where to jump back in if we get interrupted. We are kept from being overwhelmed because we know what the next step is. We can enter a state of flow more easily - small steps and continual progress keep us moving forward and help us feel more productive.

Having a defined process also helps when you are stuck. You’re never really wondering what to do next - simply move on to the next step in the process.

Keep a Test List

There's no need to worry about writing all of your unit tests at once. Maintain a test list for each module that describes any work which still needs to be completed. The best place to store this list is inside of the test source code itself, e.g. as a block comment at the top of the file.

If you think of a new test to write, make a note. Then you never need to worry about remembering all of the tests.
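For example, the top of a test file might carry the list as a block comment (the module name and items here are illustrative):

```c
/*
 * Test list for CircularBuffer:
 * - [x] is empty after initialization
 * - [x] put then get returns the same value
 * - [ ] put fails when the buffer is full
 * - [ ] wrap-around preserves FIFO ordering
 * - [ ] what should happen on a zero-capacity buffer?
 */
```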

TDD is Not the Holy Grail

James emphasizes throughout the course that while TDD reduces the errors that are introduced into our programs, TDD is not sufficient for proving that our programs are bug-free. The best that TDD can do for us is to show us that our code is doing what we think it should do. This does not equate to correctness - our understanding may still be incomplete or incorrect.

TDD only helps us ensure that our code is working on purpose. You still need design and code reviews, integration testing, static analysis, and other helpful development practices.

Course vs Book

If you have James's Test Driven Development for Embedded C book, you may be wondering whether the course is still worth taking. I respond with an emphatic yes. I recommend the course in conjunction with the book for one simple reason: the course requires you to actually program in the TDD style. Practice makes perfect.


During the course, you'll work through multiple hands-on programming exercises and receive direct feedback. Whenever I skipped steps or started writing code without tests, James noticed and helped me get back on track. Without this feedback, I would not have been successful at noticing and breaking my existing development habits.

When reading a book, we commonly acquire knowledge but never take the time to apply it. By getting a chance to try out the method for yourself, you're more likely to feel the benefits and adopt the process. Once you have experience with TDD, the concepts in the book can be easily connected to real experiences. You will be much more likely to make connections in your mind and apply the concepts in practice.

There's one more reason I recommend the course in addition to the book: when you are a beginner, you have many questions. It's hard to get help if you don't know what, how, or where to ask questions. James is willing to answer your testing questions and provides you with plenty of resources and forums for finding answers. Even better, once the course is finished, you have access to email support from James. As long as your questions aren't easily Google-able, you will always have a resource to help guide you.

In Conclusion

I really enjoyed James's remote TDD training and think it can help developers at any skill level (in fact, most of the attendees were experienced programmers). The hands-on programming exercises were unexpected and enjoyable. The direct and immediate feedback from James was an invaluable aid for adopting the process and correcting our default behaviors.

If you're interested in taking the TDD class, you can find the course options and schedules on James's website.

I adopted TDD immediately after completing the course. I spent a day setting up my development environment so I can compile and run tests with a keystroke, just like we did in CyberDojo. The process is addictive - writing new tests and getting them to pass is a continual reward cycle that keeps me focused on programming for much longer periods of time.

I've already found myself refactoring and updating my code with increasing confidence, since I have tests in place to identify any glaring errors which are introduced.

A key advantage of well tested code is the ability to perform random acts of kindness to it. Tending to your code like a garden. Small improvements add up and compound. Without tests, it's hard to be confident in even seemingly inconsequential changes.
--Antonio Cangiano (@acangiano)

Further Reading

James's book, Test Driven Development for Embedded C is an excellent starting point for TDD, especially for embedded systems developers. Again, I recommend this book in conjunction with the online training course. You can find the courses and schedules on James's website.

These talks by James provide an introduction to the how and why of TDD:

James has written extensively about TDD on his blog. Here are some of my favorite posts:

Other TDD-related links:

Related Posts

Timeless Laws of Software Development

Updated: 20190913

I am always seeking the wisdom and insights of those who have spent decades working in software development. The experiences of those who came before us are a rich source of wisdom, information, and techniques.

Only a few problems in our field are truly new. Most of the solutions we seek have been written about time and time again over the past 50 years. Rather than continually seeking new technology as the panacea to our problems, we should focus ourselves on applying the tried and tested basic principles of our field.

Given my point of view, it's no surprise that I was immediately drawn to a book titled Timeless Laws of Software Development.

The author, Jerry Fitzpatrick, is a software instructor and consultant who has worked in a variety of industries: biomedical, fitness, oil and gas, telecommunications, and manufacturing. Even more impressive for someone writing about the Timeless Laws of Software Development, Jerry was originally an electrical engineer. He worked with Bob Martin and James Grenning at Teradyne, where he developed the hardware for Teradyne's early voice response system.

Jerry has spent his career dealing with the same problems we are currently dealing with. It would be criminal not to steal and apply his hard-earned knowledge.

I recommend this invaluable book equally to developers, team leads, architects, and project managers.

Table of Contents:

  1. Structure of the Book
  2. The Timeless Laws
  3. What I Learned
  4. Selected Quotes
  5. Buy the Book

Structure of the Book

The book is short, weighing in at a total of 180 pages, including the appendices, glossary, and index. Do not be fooled by its small stature, for there is much wisdom packed into these pages.

Jerry opens with an introductory chapter and dedicates an entire chapter to each of his six Timeless Laws (discussed below). Each law is broken down into sub-axioms, paired with examples, and annotated with quotes and primary sources.

Aside from the always-useful glossary and index, Jerry ends the book with three appendices, each valuable in its own right:

  • "About Software Metrics", which covers metrics including lines of code, cyclomatic complexity, software size, and Jerry's own "ABC" metric
  • "Exploring Old Problems", which covers symptoms of the software crisis, the cost to develop software, project factors and struggles, software maintenance costs, superhuman developers, and software renovation.
  • "Redesigning a Procedure", where Jerry walks readers through a real-life refactoring exercise

"Exploring Old Problems" was an exemplary chapter. I highly recommend it to project managers and team leads.

My only real critique of the book is that the information is not partitioned in a way that makes it easily accessible to different roles - project managers may miss valuable lessons while glossing over programming details. Don't give in to the temptation to skip: each chapter has valuable advice no matter your role.

The Timeless Laws

Jerry proposes six Timeless Laws of software development:

  1. Plan before implementing
  2. Keep the program small
  3. Write clearly
  4. Prevent bugs
  5. Make the program robust
  6. Prevent excess coupling

At first glance, these six laws are so broadly stated that the natural reaction is, "Duh". Where the book shines is in the breakdown of these laws into sub-axioms and methods for achieving the intent of the law.

Breakdown of the Timeless Laws

  1. Plan before implementing
    1. Understand the requirements
    2. Reconcile conflicting requirements
    3. Check the feasibility of key requirements
    4. Convert assumptions to requirements
    5. Create a development plan
  2. Keep the program small
    1. Limit project features
    2. Avoid complicated designs
    3. Avoid needless concurrency
    4. Avoid repetition
    5. Avoid unnecessary code
    6. Minimize error logging
    7. Buy, don't build
    8. Strive for Reuse
  3. Write clearly
    1. Use names that denote purpose
    2. Use clear expressions
    3. Improve readability using whitespace
    4. Use suitable comments
    5. Use symmetry
    6. Postpone optimization
    7. Improve what you have written
  4. Prevent bugs
    1. Pace yourself
    2. Don't tolerate build warnings
    3. Manage Program Inputs
    4. Avoid using primitive types for physical quantities
    5. Reduce conditional logic
    6. Validity checks
    7. Context and polymorphism
    8. Compare floating point values correctly
  5. Make the program robust
    1. Don't let bugs accumulate
    2. Use assertions to expose bugs
    3. Design by contract
    4. Simplify exception handling
    5. Use automated testing
    6. Invite improvements
  6. Prevent excess coupling
    1. Discussion of coupling
    2. Flexibility
    3. Decoupling
    4. Abstractions (functional, data, OO)
    5. Use black boxes
    6. Prefer cohesive abstractions
    7. Minimize scope
    8. Create barriers to coupling
    9. Use atomic initialization
    10. Prefer immutable instances

What I Learned

I've regularly referred to this book over the past year. My hard-copy is dog-eared and many pages are covered in notes, circles, and arrows.

I've incorporated many aspects of the book into my development process. I've created checklists that I use for design reviews and code reviews, helping to ensure that I catch problems as early as possible. I've created additional documentation for my projects, as well as templates to facilitate ease of reuse.

Even experienced developers and teams can benefit from a review of this book. Some of the concepts may be familiar to you, but we all benefit from a refresher. There is also the chance that you will find one valuable gem to improve your practice, and isn't that worth the small price of a book?

The odds are high that you'll find more than one knowledge gem while reading Timeless Laws.

Here are some of the lessons I took away from the book:

  1. Create a development plan
  2. Avoid the "what if" game
  3. Logging is harmful
  4. Defensive programming is harmful
  5. Utilize symmetry in interface design

Create a Development Plan

We are all familiar with the lack of documentation for software projects. I'm repeatedly stunned by teams which provide no written guidance or setup instructions for new members. Jerry points out the importance of maintaining documentation:

Documentation is the only way to transfer knowledge without describing things in person.

One such method that I pulled from the book is the idea of the "Development Plan". The plan serves as a guide for developers working on the project. The plan describes the development tools, project, goals, and priorities.

As with all documentation, start simple and grow the development plan as new information becomes available or required. Rather than maintaining one large document, it's easy to break it up into smaller, standalone files. Having separate documents helps developers quickly find the information they need. The development plan should be kept within the repository so developers can easily find and update it.

Topics to cover in your development plan include:

  • List of development priorities
  • Code organization
  • How to set up the development environment
  • Minimum requirements for hardware, OS, compute power, etc.
  • Glossary of project terms
  • Uniform strategy for bug prevention, detection, and repair
  • Uniform strategy for program robustness
  • Coding style guidelines (if applicable)
  • Programming languages to be used, and where they are used
  • Tools to be used for source control, builds, integration, testing, and deployment
  • High-level organization: projects, components, file locations, and naming conventions
  • High-level logical architecture: major sub-systems and frameworks

Development plans are most useful for new team members, since they can refer to the document and become productive without taking much time from other developers. However, your entire team will benefit from having a uniform set of guidelines that can be easily located and referenced.

Avoid the "What If" Game

Many of us, myself included, are guilty of participating in the "what if" game. The "what if" game is prevalent among developers, especially when new ideas are proposed. The easiest way to shoot a hole in a new idea is to ask a "what if" question: "This architecture looks ok, but what if we need to support 100,000,000 connections at once?"

The "what if" game is adversarial and can occur because:

  • Humans have a natural resistance to change
  • Some people enjoy showing off their knowledge
  • Some people enjoy being adversarial
  • The dissenter dislikes the person who proposed the idea
  • The dissenter does not want to take on additional work

"What if" questions are difficult to refute, as they are often irrational. We should always account for realistic possibilities, but objections should be considered only if the person can explain why the proposal is disruptive now or is going to be disruptive in the future.

Aside from keeping conversations focused on realistic possibilities, we can mitigate the ability to ask "what if" with clear and well-defined requirements.

Logging is Harmful

I have been a long-time proponent of error logging, and I’ve written many embedded logging libraries over the past decade.

While I initially was skeptical of Fitzpatrick’s attitude toward error logging, I started paying closer attention to the log files I was working with as well as the use of logging in my own code. I noticed the points that Jerry highlighted: my code was cluttered, logs were increasingly useless, and it was always a struggle to remove outdated logging statements.

You can read more about my thoughts on error logging in my article: The Dark Side of Error Logging.

Defensive Programming is Harmful

Somewhere along the way in my career, the idea of defensive programming was drilled into me. Many of my old libraries and programs are layered with unnecessary conditional statements and error-code returns. These checks contribute to code bloat, since they are often repeated at multiple levels in the stack.

Jerry points out that in conventional product design, designs are based on working parts, not defective ones. As such, designing our software systems based on the assumption that all modules are potentially defective leads us down the path of over-engineering.

Trust lies at the heart of defensive programming. If no module can be trusted, then defensive programming is imperative. If all modules can be trusted, then defensive programming is irrelevant.

Like conventional products, software should be based on working parts, not defective ones. Modules should be presumed to work until proven otherwise. This is not to say that we don't do any form of checking: inputs from outside of the program need to be validated.

Assertions and contracts should be used to enforce preconditions and postconditions. Creating hard failure points helps us to catch bugs as quickly as possible. Modules inside of the system should be trusted to do their job and to enforce their own requirements.

Since I've transitioned toward the design-by-contract style, my code is much smaller and easier to read.

Utilize Symmetry in Interface Design

Using symmetry in interface design is one of those points that seemed obvious on the surface. Upon further inspection, I found I regularly violated symmetry rules in my interfaces.

Symmetry helps us to manage the complexity of our programs and reduce the amount of knowledge we need to keep in mind at once. Since we have existing associations with naming pairs, we can easily predict function names without needing to look them up.

Universal naming pairs should be used in public interfaces whenever possible:

  • on/off
  • start/stop
  • enable/disable
  • up/down
  • left/right
  • get/set
  • empty/full
  • push/pop
  • create/destroy

Our APIs should also be written in a consistent manner:

  • Motor::Start() / Motor::Stop()
  • motor_start() / motor_stop()
  • StartMotor() / StopMotor()

Avoid creating (and fix!) inconsistent APIs:

  • Motor::Start() / Motor::disable()
  • startMotor / stop_motor
  • start_motor / Stop_motor

Naming symmetry may be obvious, but where I am most guilty is in parameter order symmetry. Our procedures should utilize the same parameter ordering rules whenever possible.

For example, consider the C standard library functions defined in string.h. In all but one procedure (strlen), the first parameter is the destination string, and the second parameter is the source string. The parameter order also matches the normal assignment order semantics (dest = src).

The standard library isn't the holy grail of symmetry, however. The stdio.h header showcases some bad symmetry by changing the location of the FILE pointer:

char * fgets ( char * str, int num, FILE * stream );
int fputs ( const char * str, FILE * stream );

// Better design: FILE is first!
int fprintf ( FILE * stream, const char * format, ... );
int fscanf ( FILE * stream, const char * format, ... );

Keeping symmetry in mind will improve the interfaces we create.

Selected Quotes

I pulled hundreds of quotes from this book, and you will be seeing many of them pop up on our Twitter Feed over the next year. A small selection of my highlights are included below.

Any quotes without attribution come directly from Jerry.

Failure is de rigueur in our industry. Odds are, you're working on a project that will fail right now.
-- Jeff Atwood, How to Stop Sucking and Be Awesome

Writing specs is like flossing: everybody agrees that it's a good thing, but nobody does.
-- Joel Spolsky

Documentation is the only way to transfer knowledge without describing things in person.

Robustness must be a goal and up front priority.

Disorder is the natural state of all things. Software tends to get larger and more complicated unless the developers push back and make it smaller and simpler. If the developers don't push back, the battle against growth is lost by default.

YAGNI (You ain't gonna need it):
Always implement things when you actually need them, never when you just foresee that you need them. The best way to implement code quickly is to implement less of it. The best way to have fewer bugs is to implement less code.

-- Ron Jeffries

Most developers write code that reflects their immediate thoughts, but never return to make it smaller or clearer.

The answer is to clear our heads of clutter. Clear thinking becomes clear writing; one can't exist without the other.
-- William Zinsser

Plan for tomorrow but implement only for today.

Code that expresses its purpose clearly - without surprises - is easier to understand and less likely to contain bugs.

Most developers realize that excess coupling is harmful but they don't resist it aggressively enough. Believe me: if you don't manage coupling, coupling will manage you.

Few people realize how badly they write.
-- William Zinsser

To help prevent bugs, concurrency should only be used when needed. When it is needed, the design and implementation should be handled carefully.

Sometimes problems are poorly understood until a solution is implemented and found lacking. For this reason, it's often best to implement a basic solution before attempting a more complete and complicated one. Adequate solutions are usually less costly than optimal ones.

I've worked with many developers who didn't seem to grasp the incredible speed at which program instructions execute. They worried about things that would have a tiny effect on performance or efficiency. They should have been worried about bug prevention and better-written code.

Most sponsors would rather have a stable program delivered on-time than a slightly faster and more efficient program delivered late.

It's better to implement features directly and clearly, then optimize any that affect users negatively.

Efficiency and performance are only problems if the requirements haven't been met. Optimization usually reduces source code clarity, so it isn't justified for small gains in efficiency or performance. Our first priorities should be correctness, clarity, and modest flexibility.

Implementation is necessarily incremental, but a good architecture is usually holistic. It requires a thorough understanding of all requirements.

Buy the Book

If you are interested in purchasing Timeless Laws of Software Development, you can support Embedded Artistry by using our Amazon affiliate link:

Change Log

  • 20190913:
    • Demoted headers for consistency across the site

Related Posts

Embedded Artistry's Weekly Planning Process

Last month, I published How I Structure My Day as a Consultant, giving you insight into my day-to-day flow.

Today I'd like to share another one of our core business practices: The Weekly Plan.

Table of Contents:

  1. Weekly Planning Process
    1. Anatomy of a Week
    2. Anatomy of a Day
    3. Weekly Review
  2. The Productivity Planner
  3. Weekly Plan Template
  4. Further Reading

Weekly Planning Process

We all struggle with productivity and ever-growing task lists. Even worse, we are living in a world with an increasing number of interruptions and distractions. In a world where there is always more to be done, how can we stay motivated and focused on the most important tasks?

We've developed and standardized a weekly planning process to help keep our company on track. Our process has three elements:

  1. Identify weekly objectives
  2. At the end of each workday, make a plan for the next day based on the list of weekly objectives
  3. Review the weekly plan on Friday and generate a new plan for the following week

Anatomy of a Week

We keep a master backlog of action items for each of our projects. When we note the need for a new task, it is added to the backlog. At the end of each week we review the backlog and select items to work on the following week. By separating the identification of tasks from their assignment, we prevent ourselves from getting distracted and help ourselves stick to the plan.

For each week, we generate three separate lists of objectives based on priority:

  1. Most Important Tasks
  2. Secondary Tasks
  3. Bonus Tasks

We select five Most Important tasks for the following week. These tasks should be the focus of your week. If you accomplish ONLY these tasks, your week will still be a success and you will know you made progress.

We select five Secondary tasks for the week. These tasks are important, but have lower priority compared to the Most Important tasks. If there is a conflict in timing, Most Important wins.

We then select 5-10 Bonus tasks. These tasks are opportunistic, but do not need to get done this week. They are typically used as filler tasks whenever there is a scheduling gap or block of free time.

We also take the time to note down any interesting notes for the week. Are there meetings, calls, due dates, or other items we need to keep in mind? Is one of our clients out of office? Anything that requires a reminder for the week is noted down.

These three task lists feed into our day-to-day planning process. Once our weekly plan is set, we try our best to avoid changing it.

Anatomy of a Day

Each day we create a plan with the following formula:

  • 1 Most Important Task
  • 1-2 Secondary Tasks
  • 1-5 Bonus Tasks

We select the tasks from the Weekly Plan. The Most Important Task receives our attention first, and we do not move on to other tasks until it is completed. The Secondary Tasks are accomplished after the Most Important Task is completed, and we will typically intersperse Bonus Tasks as timing allows.

For each day, we keep a log of daily notes including:

  • Clients we contacted
  • Work done that was not included in the plan
  • Reminders
  • Items to discuss during the weekly review

Aside from the tasks and daily log, we keep track of a few other daily details:

We keep a checklist of "pre-work activities" which we want to perform every day before starting work. These help us stay on track with our habits, such as working out, reading, and journaling.

For each task, we produce an estimate of the amount of time it will take us. We use the Pomodoro Technique for scheduling, but you can use whatever method you like. As we complete each task, we track the time it takes us and note it down. We log the total estimated time and actual time worked for each day.

At the end of each day, we note down a subjective productivity score (1-10). We use this score to correlate perceived productivity with our accomplishments, pre-work activities, and any daily notes. This score helps us identify whether we are just feeling unproductive, or whether something caused us to become less productive that day.

At the end of each workday, we generate a plan for the following day. This planning process includes selecting tasks to accomplish from the weekly plan and producing initial time estimates for each task. By starting each day with a plan, we can jump right into working on the most important task without distractions.

Weekly Review

At the end of each week, I meet with my project manager to review our progress throughout the week. We work through the following list of actions:

  • Give a high-level verbal update of work accomplished during the week
  • Summarize the status of the weekly plan
    • What was not finished?
    • If a Most Important Task was not finished, why?
  • Walk through each daily plan and discuss/clarify each day
  • Discuss the plan for next week
    • Review task tracker
    • Review meetings on the calendar
    • Review exceptional activities planned for the following week (e.g. tax deadlines)
    • Review list of people to contact for the following week

After the meeting, we generate the plan for the following week and the cycle continues.

Since we have this meeting every Friday, I generate my plan for the following Monday using the new weekly plan.

The Productivity Planner

The original inspiration for our weekly planning process came from the Productivity Planner, produced by Intelligent Change. We have since outgrown the planner and have integrated the process into our Evernote workflow. If you prefer to use a paper planner, the Productivity Planner is for you. It's a high-quality, hard-back notebook which will help you stick to a weekly and daily planning process.

If you are interested in purchasing the Productivity Planner you can support Embedded Artistry by using our Amazon affiliate link. We also share our weekly plan template below.

Weekly Plan Template

We no longer utilize the Productivity Planner. Instead, we have integrated our weekly planning and review process into our Evernote flow.

On Friday, we create a new note for the following week. We identify the most important tasks, secondary tasks, and bonus tasks that we want to accomplish that week.

At the end of each workday, I pull from the weekly task list and populate the plan for the next day. This helps me start every day with a plan, rather than deciding what I will be working on as the day starts.

On Friday mornings we meet, review our progress and talk about what went well and what didn't go well.

Lather, rinse, repeat.

Weekly Plan

Most Important Tasks





Secondary Tasks





Bonus Tasks:





Relevant Notes:

  • Note any important details

Daily Plan - Monday

  • Estimated Pomodoros:
  • Actual Pomodoros:
  • Productivity Score (1-10):

Pre-work Checklist

Most Important Task

Secondary Tasks


Bonus Tasks



Daily Notes:

  • Note any important details

Daily Plan - Tuesday

  • Estimated Pomodoros:
  • Actual Pomodoros:
  • Productivity Score (1-10):

Pre-work Checklist

Most Important Task

Secondary Tasks


Bonus Tasks



Daily Notes:

  • Note any important details

Daily Plan - Wednesday

  • Estimated Pomodoros:
  • Actual Pomodoros:
  • Productivity Score (1-10):

Pre-work Checklist

Most Important Task

Secondary Tasks


Bonus Tasks



Daily Notes:

  • Note any important details

Daily Plan - Thursday

  • Estimated Pomodoros:
  • Actual Pomodoros:
  • Productivity Score (1-10):

Pre-work Checklist

Most Important Task

Secondary Tasks


Bonus Tasks



Daily Notes:

  • Note any important details

Daily Plan - Friday

  • Estimated Pomodoros:
  • Actual Pomodoros:
  • Productivity Score (1-10):

Pre-work Checklist

Most Important Task

Secondary Tasks


Bonus Tasks



Daily Notes:

  • Note any important details

Further Reading
