Timeless Laws of Software Development

I am always seeking the wisdom and insights of those who have spent decades working in software development. The experiences of those who came before us are a rich source of wisdom, information, and techniques.

Only a few problems in our field are truly new. Most of the solutions we seek have been written about time and time again over the past 50 years. Rather than continually seeking new technology as the panacea for our problems, we should focus on applying the tried and tested basic principles of our field.

Given my point of view, it's no surprise that I was immediately drawn to a book titled Timeless Laws of Software Development.

The author, Jerry Fitzpatrick, is a software instructor and consultant who has worked in a variety of industries: biomedical, fitness, oil and gas, telecommunications, and manufacturing. Even more impressive for someone writing about the Timeless Laws of Software Development, Jerry was originally an electrical engineer. He worked with Bob Martin and James Grenning at Teradyne, where he developed the hardware for Teradyne's early voice response system.

Jerry has spent his career dealing with the same problems we are currently dealing with. It would be criminal not to steal and apply his hard-earned knowledge.

I recommend this invaluable book equally to developers, team leads, architects, and project managers.

Table of Contents:

  1. Structure of the Book
  2. The Timeless Laws
  3. What I Learned
  4. Selected Quotes
  5. Buy the Book

Structure of the Book

The book is short, weighing in at a total of 180 pages, including the appendices, glossary, and index. Do not be fooled by its small stature, for there is much wisdom packed into these pages.

Jerry opens with an introductory chapter and dedicates an entire chapter to each of his six Timeless Laws (discussed below). Each law is broken down into sub-axioms, paired with examples, and annotated with quotes and primary sources.

Aside from the always-useful glossary and index, Jerry ends the book with three appendices, each valuable in its own right:

  • "About Software Metrics", which covers metrics including lines of code, cyclomatic complexity, software size, and Jerry's own "ABC" metric
  • "Exploring Old Problems", which covers symptoms of the software crisis, the cost to develop software, project factors and struggles, software maintenance costs, superhuman developers, and software renovation
  • "Redesigning a Procedure", where Jerry walks readers through a real-life refactoring exercise

"Exploring Old Problems" was an exemplary chapter. I highly recommend it to project managers and team leads.

My only real critique of the book is that the information is not partitioned in a way that makes it easily accessible to different roles - project managers may miss valuable lessons while glossing over programming details. Don't give in to the temptation to skip: each chapter has valuable advice no matter your role.

The Timeless Laws

Jerry proposes six Timeless Laws of software development:

  1. Plan before implementing
  2. Keep the program small
  3. Write clearly
  4. Prevent bugs
  5. Make the program robust
  6. Prevent excess coupling

At first glance, these six laws are so broadly stated that the natural reaction is, "Duh". Where the book shines is in the breakdown of these laws into sub-axioms and methods for achieving the intent of the law.

Breakdown of the Timeless Laws

  1. Plan before implementing
    1. Understand the requirements
    2. Reconcile conflicting requirements
    3. Check the feasibility of key requirements
    4. Convert assumptions to requirements
    5. Create a development plan
  2. Keep the program small
    1. Limit project features
    2. Avoid complicated designs
    3. Avoid needless concurrency
    4. Avoid repetition
    5. Avoid unnecessary code
    6. Minimize error logging
    7. Buy, don't build
    8. Strive for reuse
  3. Write clearly
    1. Use names that denote purpose
    2. Use clear expressions
    3. Improve readability using whitespace
    4. Use suitable comments
    5. Use symmetry
    6. Postpone optimization
    7. Improve what you have written
  4. Prevent bugs
    1. Pace yourself
    2. Don't tolerate build warnings
    3. Manage program inputs
    4. Avoid using primitive types for physical quantities
    5. Reduce conditional logic
    6. Validity checks
    7. Context and polymorphism
    8. Compare floating point values correctly
  5. Make the program robust
    1. Don't let bugs accumulate
    2. Use assertions to expose bugs
    3. Design by contract
    4. Simplify exception handling
    5. Use automated testing
    6. Invite improvements
  6. Prevent excess coupling
    1. Discussion of coupling
    2. Flexibility
    3. Decoupling
    4. Abstractions (functional, data, OO)
    5. Use black boxes
    6. Prefer cohesive abstractions
    7. Minimize scope
    8. Create barriers to coupling
    9. Use atomic initialization
    10. Prefer immutable instances

What I Learned

I've regularly referred to this book over the past year. My hard copy is dog-eared and many pages are covered in notes, circles, and arrows.

I've incorporated many aspects of the book into my development process. I've created checklists that I use for design reviews and code reviews, helping to ensure that I catch problems as early as possible. I've created additional documentation for my projects, as well as templates to facilitate ease of reuse.

Even experienced developers and teams can benefit from a review of this book. Some of the concepts may be familiar to you, but we all benefit from a refresher. There is also the chance that you will find one valuable gem to improve your practice, and isn't that worth the small price of a book?

The odds are high that you'll find more than one knowledge gem while reading Timeless Laws.

Here are some of the lessons I took away from the book:

  1. Create a development plan
  2. Avoid the "what if" game
  3. Logging is harmful
  4. Defensive programming is harmful
  5. Utilize symmetry in interface design

Create a Development Plan

We are all familiar with the lack of documentation for software projects. I'm repeatedly stunned by teams which provide no written guidance or setup instructions for new members. Jerry points out the importance of maintaining documentation:

Documentation is the only way to transfer knowledge without describing things in person.

One such method that I pulled from the book is the idea of the "Development Plan". The plan serves as a guide for developers working on the project, describing the development tools, project goals, and priorities.

As with all documentation, start simple and grow the development plan as new information becomes available or required. Rather than maintaining one large document, it's easy to break the plan up into smaller, standalone files. Having separate documents will help developers easily find the information they need. The development plan should be kept within the repository so developers can easily find and update it.

Topics to cover in your development plan include:

  • List of development priorities
  • Code organization
  • How to set up the development environment
  • Minimum requirements for hardware, OS, compute power, etc.
  • Glossary of project terms
  • Uniform strategy for bug prevention, detection, and repair
  • Uniform strategy for program robustness
  • Coding style guidelines (if applicable)
  • Programming languages to be used, and where they are used
  • Tools to be used for source control, builds, integration, testing, and deployment
  • High-level organization: projects, components, file locations, and naming conventions
  • High-level logical architecture: major sub-systems and frameworks

Development plans are most useful for new team members, since they can refer to the document and become productive without taking much time from other developers. However, your entire team will benefit from having a uniform set of guidelines that can be easily located and referenced.

Avoid the "What If" Game

Many of us, myself included, are guilty of participating in the "what if" game. The "what if" game is prevalent among developers, especially when new ideas are proposed. The easiest way to shoot a hole in a new idea is to ask a "what if" question: "This architecture looks ok, but what if we need to support 100,000,000 connections at once?"

The "what if" game is adversarial and can occur because:

  • Humans have a natural resistance to change
  • Some people enjoy showing off their knowledge
  • Some people enjoy being adversarial
  • The dissenter dislikes the person who proposed the idea
  • The dissenter does not want to take on additional work

"What if" questions are difficult to refute, as they are often irrational. We should always account for realistic possibilities, but objections should be considered only if the person can explain why the proposal is disruptive now or is going to be disruptive in the future.

Aside from keeping conversations focused on realistic possibilities, we can blunt "what if" objections with clear and well-defined requirements.

Logging is Harmful

I have been a long-time proponent of error logging, and I’ve written many embedded logging libraries over the past decade.

While I was initially skeptical of Fitzpatrick’s attitude toward error logging, I started paying closer attention to the log files I was working with, as well as the use of logging in my own code. I noticed the points that Jerry highlighted: my code was cluttered, logs were increasingly useless, and it was always a struggle to remove outdated logging statements.

You can read more about my thoughts on error logging in my article: The Dark Side of Error Logging.

Defensive Programming is Harmful

Somewhere along the way in my career, the idea of defensive programming was drilled into me. Many of my old libraries and programs are layered with unnecessary conditional statements and error-code returns. These checks contribute to code bloat, since they are often repeated at multiple levels in the stack.

Jerry points out that in conventional product design, designs are based on working parts, not defective ones. As such, designing our software systems based on the assumption that all modules are potentially defective leads us down the path of over-engineering.

Trust lies at the heart of defensive programming. If no module can be trusted, then defensive programming is imperative. If all modules can be trusted, then defensive programming is irrelevant.

Like conventional products, software should be based on working parts, not defective ones. Modules should be presumed to work until proven otherwise. This is not to say that we don't do any form of checking: inputs from outside of the program need to be validated.

Assertions and contracts should be used to enforce preconditions and postconditions. Creating hard failure points helps us to catch bugs as quickly as possible. Modules inside of the system should be trusted to do their job and to enforce their own requirements.
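As a hedged sketch of this style (the module and its names are hypothetical, not taken from the book), a contract can be stated with assertions instead of defensive error-code checks repeated at every layer:

```c
#include <assert.h>
#include <stddef.h>

// Hypothetical module illustrating design by contract: the buffer
// states its preconditions and postconditions with assertions rather
// than defensively returning error codes to every caller.
typedef struct {
    char data[16];
    size_t count;
} buffer_t;

void buffer_put(buffer_t *b, char c)
{
    assert(b != NULL);                   // precondition: valid buffer
    assert(b->count < sizeof(b->data));  // precondition: buffer not full
    b->data[b->count++] = c;
    assert(b->count <= sizeof(b->data)); // postcondition: still in bounds
}
```

A violated contract fails loudly at the offending call site during development, instead of an error code quietly propagating up the stack.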

Since I've transitioned toward the design-by-contract style, my code is much smaller and easier to read.

Utilize Symmetry in Interface Design

Using symmetry in interface design is one of those points that seemed obvious on the surface. Upon further inspection, I found I regularly violated symmetry rules in my interfaces.

Symmetry helps us to manage the complexity of our programs and reduce the amount of knowledge we need to keep in mind at once. Since we have existing associations with naming pairs, we can easily predict function names without needing to look them up.

Universal naming pairs should be used in public interfaces whenever possible:

  • on/off
  • start/stop
  • enable/disable
  • up/down
  • left/right
  • get/set
  • empty/full
  • push/pop
  • create/destroy

Our APIs should also be written in a consistent manner:

  • Motor::Start() / Motor::Stop()
  • motor_start() / motor_stop()
  • StartMotor() / StopMotor()

Avoid creating (and fix!) inconsistent APIs:

  • Motor::Start() / Motor::disable()
  • startMotor / stop_motor
  • start_motor / Stop_motor

Naming symmetry may be obvious, but where I am most guilty is in parameter order symmetry. Our procedures should utilize the same parameter ordering rules whenever possible.

For example, consider the C standard library functions defined in string.h. In every procedure that copies or appends data (strcpy, strncpy, strcat, memcpy, and so on), the first parameter is the destination and the second parameter is the source. The parameter order matches normal assignment semantics (dest = src).
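The dest-first convention can be seen directly in a minimal illustration using standard calls (the helper name is mine, not from the library):

```c
#include <string.h>

// Each standard call below places the destination first, mirroring
// the order of an assignment: dest = src.
static void build_title(char *dest)
{
    strcpy(dest, "timeless");  // dest = src
    strcat(dest, " laws");     // append src onto dest
    memcpy(dest, "T", 1);      // overwrite dest's first byte with 'T'
    // dest now holds "Timeless laws"
}
```

Because every call follows the same ordering rule, a reader never has to stop and check which argument is being modified.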

The standard library isn't the holy grail of symmetry, however. The stdio.h header showcases some bad symmetry by changing the location of the FILE pointer:

int fprintf ( FILE * stream, const char * format, ... );
int fscanf ( FILE * stream, const char * format, ... );

// Asymmetric: here the FILE pointer comes last
int fputs ( const char * str, FILE * stream );
char * fgets ( char * str, int num, FILE * stream );

Keeping symmetry in mind will improve the interfaces we create.

Selected Quotes

I pulled hundreds of quotes from this book, and you will be seeing many of them pop up on our Twitter Feed over the next year. A small selection of my highlights are included below.

Any quotes without attribution come directly from Jerry.

Intentionally hiding a bug is the greatest sin a developer can commit.

Failure is de rigueur in our industry. Odds are, you're working on a project that will fail right now.
-- Jeff Atwood, How to Stop Sucking and Be Awesome

Writing specs is like flossing: everybody agrees that it's a good thing, but nobody does.
-- Joel Spolsky

Documentation is the only way to transfer knowledge without describing things in person.

Robustness must be a goal and an up-front priority.

Disorder is the natural state of all things. Software tends to get larger and more complicated unless the developers push back and make it smaller and simpler. If the developers don't push back, the battle against growth is lost by default.

YAGNI (You ain't gonna need it): Always implement things when you actually need them, never when you just foresee that you need them. The best way to implement code quickly is to implement less of it. The best way to have fewer bugs is to implement less code.
-- Ron Jeffries

Most developers write code that reflects their immediate thoughts, but never return to make it smaller or clearer.

The answer is to clear our heads of clutter. Clear thinking becomes clear writing; one can't exist without the other.
-- William Zinsser

Plan for tomorrow but implement only for today.

Code that expresses its purpose clearly - without surprises - is easier to understand and less likely to contain bugs.

Most developers realize that excess coupling is harmful but they don't resist it aggressively enough. Believe me: if you don't manage coupling, coupling will manage you.

Few people realize how badly they write.
-- William Zinsser

To help prevent bugs, concurrency should only be used when needed. When it is needed, the design and implementation should be handled carefully.

Sometimes problems are poorly understood until a solution is implemented and found lacking. For this reason, it's often best to implement a basic solution before attempting a more complete and complicated one. Adequate solutions are usually less costly than optimal ones.

I've worked with many developers who didn't seem to grasp the incredible speed at which program instructions execute. They worried about things that would have a tiny effect on performance or efficiency. They should have been worried about bug prevention and better-written code.

Most sponsors would rather have a stable program delivered on-time than a slightly faster and more efficient program delivered late.

It's better to implement features directly and clearly, then optimize any that affect users negatively.

Efficiency and performance are only problems if the requirements haven't been met. Optimization usually reduces source code clarity, so it isn't justified for small gains in efficiency or performance. Our first priorities should be correctness, clarity, and modest flexibility.

Implementation is necessarily incremental, but a good architecture is usually holistic. It requires a thorough understanding of all requirements.

Buy the Book

If you are interested in purchasing Timeless Laws of Software Development, you can support Embedded Artistry by using our Amazon affiliate link:

Related Posts

Building a Team that Delivers Business Value

Today we have a guest post by Saket Vora about building a product development team aligned with your company's business value propositions.

Saket is a hardware product developer who has helped create iPods, iPhones, and the Apple Watch, and was a founding team member at Pearl Automation. You can contact him via email or on Twitter.

Building a Team that Delivers Business Value

It’s hard to overstate the importance of team composition in a successful business. This is especially true for early stage companies that are trying to build their first product. The early team will define the company’s culture, create foundational processes & priorities around product development, and help attract additional talent & investment.

A beginning is the time for taking the most delicate care that the balances are correct.
~ Frank Herbert, Dune.

Early stage companies are laser-focused on transforming their concept into a shipping product as quickly as possible. Recruiting and hiring are focused directly on achieving that goal. However, it is essential to ensure that the team composition is aligned with the company’s business positioning. At the core of every company is the belief that it has a unique approach to a business problem, and so the company’s full-time core team should be a reflection of that particular approach.

Consider a young company in the IoT, robotics, or similar embedded systems-related space. Such startups commonly outsource functions like payroll, HR, and IT. While these roles are vital to running a business, they are not roles that directly contribute to the company’s unique value proposition.

Leveraging external resources for such activities keeps the company’s headcount lean and reduces operating expenses. Most importantly, outsourcing enables increased agility with respect to changes in direction. When needs change, it is much easier for companies to scale up and down external resources than it is to hire and lay off employees.

If outsourcing works for payroll, HR, and IT, then we can apply the same mentality to product and engineering functions which do not directly contribute to our business’s value proposition.

What differentiates your product?

With limited resources in the form of team size, money, and brand awareness, trying to do many things at once results in doing nothing particularly well. It is critical to decide what your company should focus on, and thus tradeoffs must be evaluated. How do you intend for your product to be different than your competitors?

Is it through:

  • Hardware design or production quality?

  • Software features or user experience?

  • Marketing?

  • Distribution?

  • Pricing?

Of course, you may wish to differentiate in all these ways over the long term, but it is important to identify the main differentiating factor in the short term. Choosing your area of focus will constrain your options in other areas.

For example, Dropcam’s key advantage involved a superior user experience that was tied into a cloud-based service model. Their original hardware cameras were essentially commodity technology, which allowed them to get their hardware product to market faster and cheaper.    

The Pebble smart watch focused on their unique e-ink ‘always-on display’ that also enabled a long battery life. The tradeoffs for these features meant that they could not offer as rich a software feature set as other smartwatches with conventional LCD displays.

GoPro action cameras and Beats headphones emphasized high profile marketing & content to boost their brand identities & sales, rather than push the envelope in device performance.

Roku licensed their hardware reference designs to OEM TV manufacturers in addition to making their own streaming media sticks. While this enabled them to get their OS platform into wider distribution channels, their designs had to be very conscious of cost and compatibility.

The composition of your full-time employees, even in the early stages, should ideally reflect where your company’s strategic advantage aims to be.

NPI and Ongoing Support are Different Disciplines

Creating a product for the first time, referred to as New Product Introduction (NPI), requires different skill sets than sustaining or expanding an existing product line. For example, contract manufacturers in Asia will often have completely different teams for the NPI phase and the Production phase.

For engineers, the goal of the NPI phase is twofold:

  1. To get working prototypes functional as quickly as possible, unblocking all other cross-functional teams.

  2. To build a platform that is production-worthy.

To pursue this, you might quickly hire several engineers skilled at prototyping. They will likely hack together Arduinos, Raspberry Pis, or Particle boards, bring up an OS and write drivers, fab out quick-turn circuit boards, and 3D-print enclosures. With the tools and services available today, teams can make incredibly quick progress towards delivering functional prototypes.

However, jump ahead in your mind to when the basic functional prototypes are ready.

The product team will be using them to get user feedback, refine the product experience, and inevitably start requesting new or changed features. If the product is tied to a smartphone app and/or cloud service, you’ll be defining how the different pieces of your product interface with each other -- not just to enable core features, but also around protocols, logging, analytics, device updates, and product health. Your team will need to support diagnostic and manufacturing needs, including first-time programming, provisioning, and preparing the devices for shipment. End-to-end security is required across all levels of the software stack. There will be regular new and urgent requests to support one-off product demos for investors, press, or partners. And of course, your team will need to support quality assurance testing and validation throughout the product development lifecycle.

The truth is that these tasks are of a different nature compared to early platform bringup. Technically, they involve higher-level systems architecture and an understanding of the needs of manufacturing, field testing, security, QA, etc. Culturally, it’s about grinding through long bug lists, dealing with constant change requests from all cross-functional teams, chasing difficult-to-reproduce issues, and addressing edge cases. It’s during this phase of the development cycle that the overall product quality and customer experience come to be defined.

It’s extremely difficult to find embedded engineers who can easily run the gamut from early prototyping platforms to production-ready systems. It is common for even well-connected, experienced technical recruiters to take six months or more to fill an embedded position.

If you want to keep a lean full-time team, do you hire the early-stage prototypers to get going quickly and trust they’ll figure out the rest? Or do you spend months hunting for the unicorn engineers who you know will be able to deliver everything?

Aligning Your Core Team to Deliver Your Business Value

When building your company’s early core team, focus on what you believe are your company’s key strategic advantages -- then leverage outside services and resources for the rest.

Think carefully before pursuing that ‘full stack company’ org chart or insisting that the entire product needs to be done in house.

Engineering design firms can cover a range of industrial design, mechanical, electrical, firmware, and even manufacturing needs. Software consultants with expertise in platform bringup, sensors, wireless connectivity, and security can help fill in gaps in your team’s skillsets without burdening headcount with narrowly-focused experts. You could hire a full-time WiFi expert to enable your product’s wireless connectivity, but what will that person do when that task is complete? When done right, it can be faster & cheaper to bring your product to market if you leverage outside resources for tasks that are not central to your long-term business strategy.

Valuable core roles to fill are the connectors -- the systems-savvy engineers who know how to interface between the device, app, cloud, manufacturing, and business layers of the product. These are the people who drive the architecture, implement the top level features that differentiate your company, and execute the business logic. With their holistic understanding of how your product comes together, they also should be able to identify the most optimal way to leverage external resources. For these reasons, connectors are more valuable to early stage companies than the experts.

Investors, founders, and industry veterans all acknowledge how important the people are to a company, especially the company’s early core team. Building a team that is closely aligned to what makes your company unique -- and being smart about leveraging outside resources for the rest -- will give your company the best chance to succeed.

 

Related Posts

Musings on Tight Coupling Between Firmware and Hardware

Firmware applications are often tightly coupled to their underlying hardware and RTOS. There is a real cost associated with this tight coupling, especially in today's increasingly agile world with its increasingly volatile electronics market.

I've been musing about the sources of coupling between firmware and the underlying platform. As an industry, we must focus on creating abstractions in these areas to reduce the cost of change.

Let's start the discussion with a story.

Table of Contents

  1. The Hardware Startup Phone Call
  2. Coupling Between Firmware and Hardware
    1. Processor Dependencies
    2. Platform Dependencies
    3. Component Dependencies
    4. RTOS Dependencies
  3. Why Should I Care?

The Hardware Startup Phone Call

I'm frequently contacted by companies that need help porting their firmware from one platform to another. These companies are often on tight schedules with a looming development build, production run, or customer release. Their stories follow a pattern:

  1. We built our first version of software on platform X using the vendor SDK and vendor-recommended RTOS
  2. We need to switch to platform Y because:
    1. X is reaching end of life
    2. We cannot buy X in sufficient quantities because Big Company bought the remaining stock
    3. Y is cheaper
    4. Y's processor provides better functionality / power profile / peripherals / GPIO availability
    5. Y's components are better for our application's use case
  3. Platform Y is based on a different processor vendor (i.e. SDK) and/or RTOS
  4. Our engineer is not familiar with Platform Y's processor/components/SDK/RTOS
  5. The icing on the cake: We need to have our software working on Platform Y within 30-60 days

After hearing the details of the project, I ask my first question, which is always greeted with the same answer:

Phillip: Did you create abstractions to keep your code isolated from the vendor SDK or RTOS?

Company: No. We're a startup and we were focused on moving as quickly as possible

I'll then ask my second question, which is always greeted with the same answer:

Phillip: Do you have a set of unit/functional tests that I can run to make sure the software is working correctly after the port?

Company: No. We're a startup and we were focused on moving as quickly as possible

Then I'll ask the final question, which is always greeted with the same answer:

Phillip: How can I tell whether or not the software is working correctly after I port it?

Company: We'll just try it out and make sure everything works

Given these answers, there's practically no chance I can help the company and meet their deadlines. If there are large differences in SDKs and RTOS interfaces, the software has to be rewritten from scratch using the old code base as a reference.

I also know that if I take on the project, I'm in for a risky business arrangement. How can I be sure that my port was successful? How can I defend myself from the client's claim that I introduced issues without having a testable code base to compare against?

Why am I telling you this story?

Because this scenario arises from a single strategic failure: failure to decouple the firmware application from the underlying RTOS, vendor SDK, or hardware. And as an industry we are continually repeating this strategic failure in the name of "agility" and "time to market".

These companies fail to move quickly in the end, since the consequences of this strategic blunder are extreme: schedule delays, lost work, reduced morale, and increased expenditures.

Coupling Between Firmware and Hardware

Software industry leaders have been writing about the dangers of tight coupling since the 1960s, so I'm not going to rehash coupling in detail. If you're unfamiliar with the concept, here is some introductory reading:

In Why Coupling is Always Bad, Vidar Hokstad brings up consequences of tight coupling, two of which are relevant for this musing:

  • Changing requirements that affect the suitability of some component will potentially require wide ranging changes in order to accommodate a more suitable replacement component.
  • More thought needs to go into choices at the beginning of the lifetime of a software system in order to attempt to predict the long term requirements of the system because changes are more expensive.

We see these two points play out in the scenario above.

If your software is tightly coupled to the underlying platform, changing a single component of the system - such as the processor - can cause your company to effectively start over with firmware development.

The need to swap components late in the program (and the resulting need to start over with software) is a failure to perform the up-front long-term thinking required by tightly coupled systems. Otherwise, the correct components would have been selected during the first design iteration, rendering the porting process unnecessary.

Let's review a quote from Quality Code is Loosely Coupled:

Loose coupling is about making external calls indirectly through abstractions such as abstract classes or interfaces. This allows the code to run without having to have the real dependency present, making it more testable and more modular.

Decoupling our firmware from the underlying hardware is As Simple As That™.

Up-front planning and design are usually minimized to keep a company "agile". However, without abstractions that easily enable us to swap out components, our platform becomes tied to the initial hardware selection.

You may argue that taking the time to design and implement abstractions for your platform introduces an unnecessary schedule delay. How does that time savings stack up against the delay caused by the need to rewrite your software?

We all want to be "agile", and abstractions help us achieve agility.

What is more agile than the ability to swap out components without needing to rewrite large portions of your system? You can try more designs at a faster pace when you don't need to rewrite the majority of your software to support a new piece of hardware.

Your abstractions don't need to be perfect. They don't need to be reusable on other systems. But they need to exist if you want to move quickly.
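As a minimal sketch of such an abstraction (the interface and names are hypothetical, not from any real vendor SDK), a function-pointer table is one common way to keep application code from naming the platform directly:

```c
#include <stdint.h>

// The application depends only on this interface; each board supplies
// an implementation that wraps its vendor SDK behind these pointers.
typedef struct {
    void (*set)(uint8_t pin, int level);
    int  (*get)(uint8_t pin);
} gpio_driver_t;

// A host-side fake backed by an array instead of hardware registers.
// This doubles as a testable reference implementation for a port.
static int fake_pins[8];
static void fake_set(uint8_t pin, int level) { fake_pins[pin] = level; }
static int  fake_get(uint8_t pin)            { return fake_pins[pin]; }
static const gpio_driver_t fake_gpio = { fake_set, fake_get };

// Application logic is written against the abstraction only.
static void blink_once(const gpio_driver_t *gpio, uint8_t led_pin)
{
    gpio->set(led_pin, 1);
    gpio->set(led_pin, 0);
}
```

Porting to platform Y then means supplying one new gpio_driver_t that wraps Y's SDK; the application logic, and any tests written against the fake driver, are untouched.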

We need to start producing abstractions that minimize the four sources of tight coupling in our embedded systems:

  1. Processor Dependencies
  2. Platform Dependencies
  3. Component Dependencies
  4. RTOS Dependencies

Processor Dependencies

Processor dependencies are the most common form of coupling and arise from two major sources:

  1. Using processor vendor SDKs
  2. Using APIs or libraries which are coupled to a target architecture (e.g. CMSIS)

Processor-level function calls are commonly intermixed with application logic and driver code, ensuring that the software becomes tightly coupled to the processor. Decoupling firmware from the underlying processor is one of the most important steps toward design portability and reusability.

In the most common cases, teams will develop software using a vendor's SDK without an intermediary abstraction layer. When the team is required to migrate to another processor or vendor, the coupling to a specific vendor's SDK often triggers a rewrite of the majority of the system. At this point, many teams realize the need for abstraction layers and begin to implement them.

In other cases, software becomes dependent upon the underlying architecture. Your embedded software may work on an ARM system, but not be readily portable to a PIC, MIPS, AVR, or x86 machine. This is common when utilizing libraries such as CMSIS, which provides an abstraction layer for ARM Cortex-M processors.

A more subtle form of architecture coupling can occur even when abstraction layers are used. Teams can create abstractions which depend on a specific feature, an operating model particular to a single vendor, or an architecture-specific interaction. This form of coupling is less costly, as the changes are at least isolated to specific areas. Interfaces may need to be updated and additional files may need to change, but at least we don't need to rewrite everything.

Platform Dependencies

Embedded software is often written specifically for the underlying hardware platform. Rather than abstracting platform-specific functionality, embedded software often interacts directly with the hardware.

Without being aware of it, we develop our software based on assumptions about our underlying hardware. We write our code to work with four sensors, and then the second version of the product only needs two. However, we must support both version one and version two of the product with a single firmware image.

Consider another common case, where our software supports multiple versions of a PCB. Whenever a new PCB revision is released, the software logic must be updated to support the changes. Supporting multiple revisions often leads to #ifdefs and conditional logic statements scattered throughout the codebase. What happens when you move to a different platform, with different revision numbers? Wouldn't it be easier if your board revision decisions were contained in a single location?

When these changes come, how much of your code needs to be updated? Do you need to add #ifdef statements everywhere? Do your developers cringe and protest because of the required effort? Or do they smile and nod because it will only take them 15 minutes?

We can abstract our platform/hardware functionality behind an interface (commonly called a Board Support Package). What features is the hardware platform actually providing to the software layer? What might need to change in the future, and how can we isolate the rest of the system from those changes?

Multiple platforms & boards can be created that provide the same set of functionality and responsibilities in different ways. If our software is built upon a platform abstraction, we can move between supported platforms with greater ease.

Component Dependencies

Component Dependencies are a specialization of the platform dependency, where software relies on the presence of a specific hardware component instance.

In embedded systems, software is often written to use specific driver implementations rather than generalized interfaces. This means that instead of using a generalized accelerometer interface, software typically works directly with a BMA280 driver or LIS3DH driver. Whenever the component changes, code interacting with the driver must be updated to use the new part. Similar to the board revision case, we will probably find that #ifdefs or conditionals are added to select the proper driver for the proper board revision.

Higher-level software can be decoupled from component dependencies by working with generic interfaces rather than specific drivers. If you use generic interfaces, underlying components can be swapped out without the higher-level software being aware of the change. Whenever parts need to be changed, your change will be isolated to the driver declaration (ideally found within your platform abstraction).

RTOS Dependencies

An RTOS's functions are commonly used directly by embedded software. When a processor change occurs, the team may find that the RTOS they were previously using is not supported on the new processor.

Migrating from one RTOS to another requires a painful porting process, as there are rarely straightforward mappings between the functionality and usage of two different RTOSes.

Providing an RTOS abstraction allows platforms to use any RTOS that they choose without coupling their application software to the RTOS implementation.

Abstracting the RTOS APIs also allows for host-machine simulation, since you can provide a pthreads implementation for the RTOS abstraction.

Why Should I Care?

It's a fair question. Tight coupling in firmware has been the status quo for a long time. You may claim it still must remain that way due to resource constraints.

Vendor SDKs are readily available. You can start developing your platform immediately. The rapid early progress feels good. Perhaps you picked all the right parts, and the reduced time-to-market will actually happen for your team.

If not, you will find yourself repeating the cycle and calling us for help.

It's not all doom and gloom, however. There are great benefits from reducing coupling and introducing abstractions.

  • We can rapidly prototype hardware without triggering software rewrites
  • We can take better advantage of unit tests, which are often skipped on embedded projects due to hardware dependencies
  • We can implement the abstractions on our host machines, enabling developers to write and test software on their PC before porting it to the embedded system
  • We can reuse subsystems, drivers, and embedded system applications across an entire product line

I'll be diving deeper into some of these beneficial areas in the coming months.

In the meantime - happy hacking! (and get to those abstractions!)

Related Posts