Mediator Pattern

The Mediator pattern is documented in Design Patterns: Elements of Reusable Object-Oriented Software, also known as the “Gang of Four” book. They describe the intent of the pattern as:

Define an object that encapsulates how a set of objects interact. Mediator promotes loose coupling by keeping objects from referring to each other explicitly, and it lets you vary their interaction independently.

Context

This pattern helps reduce (or eliminate) coupling between modules in the system by moving that coupling into a Mediator module or class. The Mediator is responsible for coordinating and controlling the interactions of the various modules.

Problem

A typical software design goal is to create modules that are independent and decoupled from other modules. However, to assemble complex system behaviors, modules need to interact with each other. The most convenient way to develop complex behaviors is to have modules refer to each other directly in order to achieve the complex operation. This convenience makes the disparate modules tightly coupled, which means that a change in one part of the system will often cascade throughout the system. Wouldn’t it be preferable to have a way to coordinate the behavior of these modules without coupling them directly together?

Forces

The Mediator pattern is balancing two competing forces: coupling and complexity. By increasing complexity (through the Mediator class), you can reduce the coupling between elements of a system. This is often desirable when the goal is to enable design for change, reusability, and/or testability.

Solution

The goal of this pattern is to decouple modules or objects by constraining their interactions within a single Mediator. Instead of having objects interact with each other directly, the Mediator is responsible for coordinating the interactions between objects. Colleague objects know about the Mediator, but do not know about each other. The Mediator manages all interactions among Colleague objects.

Quote

You can avoid these [dependency] problems by encapsulating collective behavior in a separate mediator object. A mediator is responsible for coordinating and controlling the interactions of a group of objects. The mediator serves as an intermediary that keeps objects in the group from referring to each other explicitly. The objects only know the mediator, thereby reducing the number of interconnections.
–– Design Patterns

The Mediator pattern is also useful for coordinating “non-standard” functionality and configuration options that are provided by a specific implementation but not exposed through generic abstract interfaces. Tight coupling to a specific implementation can safely live in a Mediator. The rest of the program can use the generic interface.

Implementation Notes

  • There is no need to define an abstract Mediator class when colleagues only work with a single mediator.
  • Colleagues have to communicate with the mediator when an event of interest occurs.
    • You can make direct calls.
    • You can use the Observer pattern. When using the Observer pattern, Colleague classes act as subjects, sending notifications to the mediator whenever they change state. The mediator responds by propagating the effects of the change to other colleagues.
    • You can create a specialized notification interface in the Mediator. For instance, when communicating with the mediator, a colleague can pass itself as an argument, allowing the Mediator to identify the sender.
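The notification approach in the last bullet can be sketched in C. This is a minimal, illustrative example (the button/LED colleagues and every name here are invented for the sketch): colleagues hold a pointer to the mediator and pass themselves as the sender when notifying, but never reference each other.

```c
// Hypothetical colleagues: a button and an LED that never refer to each
// other directly. Both know only the mediator.
struct mediator;

struct button {
    struct mediator *m;
    int pressed;
};

struct led {
    int on;
};

struct mediator {
    struct button *button;
    struct led *led;
};

// Colleague -> mediator notification: the colleague passes itself so the
// mediator can identify the sender (see the note above).
static void mediator_notify(struct mediator *m, void *sender)
{
    if (sender == m->button) {
        // Coordination logic lives here, not in the colleagues.
        m->led->on = m->button->pressed;
    }
}

static void button_press(struct button *b)
{
    b->pressed = 1;
    mediator_notify(b->m, b);
}
```

Note that a change in how the button affects the LED only touches `mediator_notify`; the colleague types stay untouched and reusable.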

Consequences

This approach has three primary benefits:

  1. Interactions between objects are managed in a single place, instead of distributed across several objects. Changes in how objects interact only impact the Mediator module. This also helps us focus on interactions separately from the individual behavior of the objects.
  2. We create a one-to-many interaction set between the Mediator and its colleagues. This is much easier to conceptualize and manage than many-to-many interactions distributed among objects.
  3. Colleague modules are easier to reuse in other systems due to the reduced dependencies on other modules. We can independently reuse and vary the Mediator and its Colleagues.

One tradeoff with using the Mediator pattern is that we push most of the complexity in object interactions into the Mediator itself. This object runs the risk of becoming overly complex, but that can be managed.

In summary:

  • Localizes behavior that would otherwise be distributed among several objects – changes need to happen only in the Mediator. Colleague classes can be reused without changes.
  • Decouples colleagues from each other – you can vary and reuse colleague and Mediator classes independently.
  • Simplifies interactions – replaces many-to-many interactions between objects with one-to-many interactions between the Mediator and colleagues. One-to-many interactions are easier to understand, maintain, and extend than many-to-many interactions.
  • Abstracts how objects cooperate – lets you focus on how objects interact apart from their individual behavior. A Mediator can clarify how objects interact in a system.
  • Centralizes control – Mediator pattern trades complexity of interaction for complexity in the mediator. This can make your Mediators harder to maintain.

Known Uses

Use the Mediator pattern when:

  • A set of objects communicates in well-defined but complex ways. The resulting interdependencies are unstructured and difficult to understand.
  • Reusing an object is difficult because it refers to and communicates with many other objects.
  • A behavior that’s distributed between several classes should be customizable without a lot of subclassing.

For embedded software, a hardware abstraction layer (HAL) or board support package (BSP) can be an example of the Facade or Mediator pattern (which one depends on the design). Fundamentally, the goal of a HAL or BSP is to provide a Facade that higher software layers can use to interact with the underlying hardware. If the components managed by the HAL or BSP can also use the provided interfaces, we end up with a Mediator.

In our own standard embedded program design, we create three core Facade/Mediator layers with different responsibilities: Processor, Hardware Platform, and Platform.

  • The Hardware Platform contains all the details about what hardware is on the board, what peripherals are hooked up to what, and the various configuration details for those particular parts.
    • The Hardware Platform has access to specific implementations and can invoke APIs that are not exposed through the abstract interfaces.
    • There are common interfaces that apply to all Hardware Platforms. These can be used by other parts of the system, such as the boot process.
    • The Hardware Platform provides its own general interface for use by Platform modules.
    • Above the HW platform, none of the specific details about the underlying hardware are known, other than the generic interfaces provided by the HW platform layer itself and the generic interfaces to drivers that are automatically registered in a central driver registry.
  • The Platform contains a Hardware Platform, as well as other software-level details: OS initialization, heap initialization, language feature setup, etc.
    • Like the Hardware Platform, it is responsible for configuring and coordinating platform-specific details that should be kept hidden from the rest of the program.
    • There are common interfaces that apply to all Platforms. These can be used by other parts of the system, such as the boot process.
    • The Platform provides its own general interface for use by application-level modules.


Variants

  • In their book Patterns in the Machine: A Software Engineering Guide to Embedded Development, the Taylor brothers describe an architectural approach using two different patterns: “Main Pattern” and “Model Points”. In our view, this is an application of the Mediator pattern. Different modules interact through Model Points to send and receive data – the modules know about the semantics of the Model Points they must interact with, but not what is on the other side of that Model Point. In this sense, Model Points are serving as a Mediator. The Main Pattern can also be viewed as a variation on Mediator, just viewed from a step above the Model Points. This pattern describes having a section of code that knows about concrete instances of modules, configures them, and connects them together via Model Points. All of the complexity for the system is contained in the specific implementation of the Main Pattern, while the modules themselves remain independent and unaware of other modules in the system.
  • The Facade pattern is an alternative to Mediator in some contexts, and the two may be combined together.

Differentiating Mediator and Facade

When you think about the Mediator and Facade patterns alongside each other, they may seem quite similar. Fundamentally, both patterns are about managing interactions between modules.

The primary difference is in the type of interaction between modules.

The Facade Pattern abstracts a subsystem to provide a more convenient interface or to shield users from directly interfacing with subsystem modules. The interaction flows in one direction: users make a request of the Facade, and the Facade makes requests of the subsystem modules. In fact, subsystem modules do not know about the existence of the Facade!

The Mediator pattern primarily enables cooperative behavior while keeping modules decoupled. The Mediator pattern allows us to centralize coordination activities that do not belong in the individual modules. Unlike the Facade pattern, Colleagues are aware of the existence of the Mediator and communicate with the mediator instead of directly with each other. The Mediator itself communicates with Colleagues to fulfill requests.

In practice, the lines between these two patterns can blur. Sometimes you are able to create clearly defined Facades and Mediators. In other cases, the resulting design is a blend of the two patterns.

For example, consider a Facade for a power subsystem. The Facade provides a general interface for handling power-related activities, and it uses the subsystem objects to fulfill the requests.

However, we might need a more complex power management module. What if the setPowerState() function interacted with other drivers to control when they are turned on and off? And what if drivers themselves can trigger power state changes through the power management module in response to specific events?

In this case, we end up with something that resembles a blend between the two patterns rather than a pure Mediator or pure Facade.
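The blended design can be sketched in C. All names here (`power_manager`, `pm_set_power_state`, the drivers) are hypothetical and invented for illustration: the module offers a Facade-style entry point for power requests, while also mediating between the drivers it manages.

```c
// Hypothetical power manager: drivers register with it, it enables or
// disables them when the power state changes, and drivers can themselves
// request a state change in response to events.
enum power_state { POWER_ON, POWER_LOW, POWER_OFF };

struct driver {
    int enabled;
};

#define MAX_DRIVERS 4

struct power_manager {
    enum power_state state;
    struct driver *drivers[MAX_DRIVERS];
    int count;
};

static void pm_register(struct power_manager *pm, struct driver *d)
{
    if (pm->count < MAX_DRIVERS) {
        pm->drivers[pm->count++] = d;
    }
}

// Facade-style entry point that also coordinates the managed drivers.
static void pm_set_power_state(struct power_manager *pm, enum power_state s)
{
    pm->state = s;
    for (int i = 0; i < pm->count; i++) {
        pm->drivers[i]->enabled = (s == POWER_ON);
    }
}

// Mediator-style feedback path: a driver triggers a state change in
// response to an event (e.g., a low-battery interrupt).
static void driver_report_low_battery(struct power_manager *pm)
{
    pm_set_power_state(pm, POWER_LOW);
}
```

The feedback path is what pushes this beyond a pure Facade: the managed components use the manager's interface themselves.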

References

Information Hiding

Information hiding is a software design principle, where certain aspects of a program or module (the “secrets”) are inaccessible to clients. The primary goal is to prevent extensive modification to clients whenever the implementation details of a module or program are changed. This is done by hiding aspects of an implementation that might change behind a stable interface that protects clients from the implementation details. Users of that interface (whether it is a module, class, or function) will perform operations purely through its interface. This way, if the implementation changes, the clients do not have to change.

Information hiding serves as a criterion that can be used to decompose a system into modules. The principle is also useful for reducing coupling within a system.

Applying Information Hiding

The key challenge when applying information hiding is determining what information should be hidden and what should be exposed. Parnas suggests that the heuristic we should use is hiding those details that are “likely to change”. This way our changes have only a local effect, since we have hidden the details to be changed behind a firewall of some kind (e.g., an abstract interface).

Using this heuristic, information hiding can be thought of as a three-step process:

  1. Identify all of the pieces of a design that are likely to change or other design details that you might want to hide
    • We call these our “secrets”, and we want to hide these details from external entities.
  2. Isolate each secret into its own module, class, or function
    • This is done so that changes to the secret are isolated and don’t affect the rest of the program
    • Parnas: “All data structures that reveal the presence or number of certain components should be included in separate information hiding modules with abstract interfaces.”
  3. Design intermediate interfaces that are insensitive to changes in the underlying secrets.
    • That is, make sure that your secrets aren’t leaked or revealed by your interfaces.
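The three steps above can be illustrated with a small C sketch. The `event_log` module and its API are invented for this example; its secret is its storage layout and capacity, which clients can never observe through the interface.

```c
// event_log.h -- public interface. Clients see only these declarations;
// the storage layout and capacity can change without client changes.
void event_log_record(int event_id);
int  event_log_count(void);

// event_log.c -- the hidden implementation (the "secret").
#define LOG_CAPACITY 8   /* secret: likely to change */
static int s_events[LOG_CAPACITY];
static int s_count;

void event_log_record(int event_id)
{
    if (s_count < LOG_CAPACITY) {
        s_events[s_count++] = event_id;
    }
}

int event_log_count(void)
{
    return s_count;
}
```

Swapping the array for a ring buffer or external flash storage would change only the `.c` file, which is the point of step 3.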

As Parnas reminds us:

The interface must be general but the contents should not. Specialization is necessary for economy and flexibility.

For advice on practically implementing information hiding in C, please see this entry.

What to Hide?

Here are some ideas that are likely to change:

  • Hardware dependencies
  • Physical storage layout
  • Data formats
  • Data conversions
  • Data access/traversal
  • Algorithms
  • Implementation dependencies
  • Object creation (e.g., factories)
  • Transport layer mechanisms
  • Non-standard language features
  • Library routines
  • Complex data structures
  • Complex logic
  • Global variables (hide behind access routines if they are actually needed)
  • Data constraints, such as array sizes and loop limits
  • Business logic

Benefits

Information hiding helps us improve the ability to change our system while minimizing the impact on other parts of the system.

Since the implementation details are hidden behind a common interface, changes to the implementation do not require changes to the rest of the program (as long as the implementation sufficiently satisfies the specified interface). Changes are isolated to a single location in the ideal case.

For example, a common sensor API may be provided, and any sensor driver that satisfies the interface requirements can be used within the system. This enables a team to swap between components (and even processors or board designs) as necessary.
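One way to sketch such a common sensor API in C is a small function-pointer interface. Everything here (the `struct sensor` layout, the fake driver) is an illustrative assumption, not a real API:

```c
// Hypothetical common sensor interface: any driver that fills in this
// table of function pointers can be used by the rest of the system.
struct sensor {
    int (*init)(void *ctx);
    int (*read_mdeg_c)(void *ctx);  /* temperature in millidegrees C */
    void *ctx;                      /* driver-private state */
};

// Illustrative stand-in driver that satisfies the interface.
static int fake_init(void *ctx) { (void)ctx; return 0; }
static int fake_read(void *ctx) { return *(int *)ctx; }

// Application code depends only on struct sensor, never on the driver,
// so drivers (and boards) can be swapped without touching this code.
static int sample_temperature(struct sensor *s)
{
    return s->read_mdeg_c(s->ctx);
}
```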

Areas of Concern

Critiques

Revisiting Information Hiding – Reflections on Classical and Nonclassical Modularity provides some critiques on the classical idea of information hiding, mainly:

  • Information hiding as described corresponds to classical logic, but we know that humans aren’t very good at (don’t default to) reasoning via classical logic; instead, we tend to rely on inductive reasoning

  • Some stakeholders of a system will eventually care about aspects of the system that are supposed to be “hidden”

    When information is hidden behind an abstraction barrier, there are potential stakeholders (or concerns), who are interested in that hidden information.

    If a stakeholder wants to reason about “nonfunctional” aspects of a system, such as time or space complexity or power consumption, he probably needs to reason about implementation details hidden behind abstraction barriers

    For example, different implementations have different time or space behavior of the operations, different rounding errors, different optimizations that the compiler will apply, or different power consumption. To some stakeholders, such concerns may well be important; while some require higher performance, others require higher precision.

  • For large systems, basic assumptions of information hiding (e.g., monotonicity and composability) may not seem to hold:

    Composing two programs which are each separately correct with respect to, say, lock-based concurrency or transactions, are in general no longer correct when composed. More importantly, the non-composability can in general not be deduced from the interfaces of these components

  • Information hiding and separation of concerns can be contradictory:

    For instance, in the canonical AOP example of updating a display when a figure element changes, a figure element module hides less information behind its interface when the display updating logic is separated from the figure element module. In that sense, and contrary to the common notion that information hiding and separation of concerns go hand in hand, information hiding and separation of concerns can actually be contradictory.

    There are many concerns that, when separated, need to expose implementation detail in such a way that information hiding is impaired. Developers have to decide what information to hide and what to separate. This is a fundamental problem of classical modularity

  • Information hiding is limited by “the tyranny of the dominant decomposition”

    What can be hidden behind an interface depends on the chosen decomposition, but there is no “best” decomposition; rather, from each point of view (such as the points of views of the different stakeholders) a different decomposition (and hence information hiding policy) would be most appropriate. What one stakeholder would hide as an implementation detail behind an interface is of primary importance to another stakeholder, who would hence choose a different decomposition that exposes that information.

  • Information hiding may still hinder modifiability

    Even if a software system is successfully modularized, and the information needs of all stakeholders and concerns are reflected in the interfaces of components, information hiding might still hinder software evolution. This might be surprising at first, because information hiding is supposed to facilitate software evolution by hiding design decisions behind interfaces, so that they can be changed at will. The problem is that the original developers have to anticipate change and to modularize the software accordingly.

    Unfortunately, it is not clear how to decide up-front which design decisions need to be hidden and which need to be exposed.

    One could argue that successful modularization just needs better planning to better assess what is likely to change, but we believe that this is an implausible assumption because large-scale software systems are assembled from many independently developed and independently evolving parts; hence, a big global “plan” is infeasible and unanticipated changes are unavoidable in long-living projects.

    If the design decision is hidden behind the interface, software evolution might bring a new stakeholder (or concern) into the system which needs to access that hidden information. So, to support the information need of this stakeholder (or concern), the design decision should not have been hidden in the first place.

General Response

In our view, these critiques are valid for some definitions or applications of information hiding – particularly those which are more far-reaching and absolute than our own uses.

Our primary focus is using information hiding as a way to design for change. We do not view information hiding as strictly meaning we do not need to examine implementation details of a module. Instead, we think that implementation details should be hidden from other software modules as much as possible. In fact, in our own model, we still dive into secrets often: our focus is on abstracting common behaviors that other modules require for interaction, while still allowing custom APIs for non-common behaviors. After all, we still need to initialize and configure many modules and properties for our system’s use cases, often in a non-generic and not-easily-abstractable way. For example, every IMU component has a different set of configuration options and setup procedures; however, our algorithms that operate on IMU samples can do so in a generic way that can be made common across devices. We do not have to be so absolute in the application of information hiding.

Certainly it is true that not every implementation satisfying an interface is composable into a suitable system that meets our requirements – we must implement for our system specifically. We aren’t striving to implement general, idealized abstract components that are suitable in all cases.

Engineering is a balancing act: we cannot purely focus on any one desirable quality. Sometimes, we may decide to trade off information hiding for improving separation of concerns. But it also might be true that we still hide things for the rest of the system behind a single interface, allowing the deeper implementation to be tightly coupled.

Finally, we must remember that we are human. We will not create perfect artifacts, no matter how hard we try. Changes will come, some of our assumptions will be invalidated, and we will have to rework the system. This is inevitable, but it is not a reason to forgo the techniques we have at hand that improve our chances of success.

References

  • The original source for information hiding is David Parnas’s paper On the Criteria to Be Used in Decomposing Systems into Modules. Also recommended is his paper Designing Software for Ease of Extension and Contraction.

  • The Secret History of Information Hiding by David Parnas

    Nonetheless, the paper did explain that it was information distribution that made systems “dirty” by establishing almost invisible connections between supposedly independent modules

    After some thought, it became clear to me that information distribution, and how to avoid it, had to be a big part of that course. I decided to do this by means of a project with limited information distribution and demonstrate the benefits of a “clean” design

    This program is still used as an example of the principle. Only once has anyone noticed that it contains a flaw caused by a failure to hide an important assumption. Every module in the system was written with the knowledge that the data comprised strings of strings. This led to a very inefficient sorting algorithm because comparing two strings, an operation that would be repeated many times, is relatively slow. Considerable speed-up could be obtained if the words were sorted once and replaced by a set of integers with the property that if two words are alphabetically ordered, the integers representing them will have the same order. Sorting strings of integers can be done much more quickly than sorting strings of strings. The module interfaces described in [9] do not allow this simple improvement to be confined to one module.

    My mistake illustrates how easy it is to distribute information unnecessarily. This is a very common error when people attempt to use the information-hiding principle. While the basic idea is to hide information that is likely to change, one can often profit by hiding other information as well because it allows the re-use of algorithms or the use of more efficient algorithms

    Several software design “experts” have suggested that one should reflect existing business structures and file structures in the structure of the software. In my experience, this speeds up the software development process (by making decisions quickly) but leads to software that is a burden on its owners should they try to update their data structures or change their organisation. Reflecting changeable facts in software structure is a violation of the information-hiding principle.

    In determining requirements it is very important to know about the environment but it is rarely the right “move” to reflect that environment in the program structure.

  • Missing in Action: Information Hiding by Steve McConnell

    In the 20th Anniversary edition of The Mythical Man-Month, Fred Brooks concludes that his criticism of information hiding was one of the few ways in which the first edition of his book was wrong. “Parnas was right, and I was wrong about information hiding,” he proclaims (Brooks 1995). Barry Boehm reported in 1987 that information hiding was a powerful technique for eliminating rework, and he pointed out that it was particularly effective during software evolution (“Improving Software Productivity,” IEEE Computer, September 1987). As incremental, evolutionary development styles become more popular, the value of information hiding can only increase.

    To use information hiding, begin your design by listing the design secrets that you want to hide. As the example suggested, the most common kind of secret is a design decision that you think might change. Separate each design secret by assigning it to its own class or subroutine or other design unit. Then isolate–encapsulate–each design secret so that if it does change, the change doesn’t affect the rest of the program.

    Aside from providing support for structured and object-oriented design, information hiding has unique heuristic power, a unique ability to inspire effective design solutions.

    Object design provides the heuristic power of modeling the world in objects, but object thinking wouldn’t help you avoid declaring the ID as an int instead of an IDTYPE in the example. The object designer would ask, “Should an ID be treated as an object?” Depending on his project’s coding standards, a “Yes” answer might mean that he has to create interface and implementation source-code files for the ID class; write a constructor, destructor, copy operator, and assignment operator; document it all; have it all reviewed; and place it under configuration control. Unless the designer is exceptionally motivated, he will decide, “No, it isn’t worth creating a whole class just for an ID. I’ll just use _int_s.”

    Note what just happened. A useful design alternative, that of simply hiding the ID’s data type, was not even considered. If, instead, the designer had asked, “What about the ID should be hidden?” he might well have decided to hide its type behind a simple type declaration that substitutes IDTYPE for int. The difference between object design and information hiding in this example is more subtle than a clash of explicit rules and regulations. Object design would approve of this design decision as much as information hiding would. Rather, the difference is one of heuristics–thinking about information hiding inspires and promotes design decisions that thinking about objects does not.

  • C2 Wiki: Information Hiding

  • Wikipedia: Information Hiding

  • Revisiting Information Hiding – Reflections on Classical and Nonclassical Modularity

    Information hiding is to distinguish the concrete implementation of a software component and its more abstract interface, so that details of the implementation are hidden behind the interface. This supports modular reasoning and independent evolution of the “hidden parts” of a component. If developers have carefully chosen to hide those parts ‘most likely to change’, most changes have only local effects: The interfaces act as a kind of firewall that prevents the propagation of change.

    A key question in information hiding is which information to hide and which information to expose. Parnas suggested the heuristic to hide what is ‘likely to change’.

    the programming research community, in which information hiding is nowadays such an undisputed dogma of modularity that Fred Brooks even felt that he had to apologize to Parnas for questioning it.

    Both information hiding and abstraction imply some notion of substitutability: A module’s implementation can be replaced by a different implementation adhering to the same interface, and since the implementation was hidden to other components in the system in the first place, these other components should not be disturbed by the change.

    The distinction between an interface and implementations of that interface, which is at the core of information hiding and abstraction, is related to logic. The interface corresponds to a set of axioms, and the implementation of the interface corresponds to a model of the axioms. Substitutability is reflected by the fact that the same theorems hold for all models of the axioms (by soundness of the logic), hence we cannot distinguish two different models within the theory. The heuristic of hiding what is most likely to change is reflected by the design of axiom systems (say, the axioms of a group in abstract algebra) in such a way that there are many interesting models of the axioms.

    As in the case of information hiding and abstraction, compositionality implies a strong notion of substitutability: If a subprogram is substituted by a different subprogram with the same meaning, the meaning of the whole program will still be the same. In other words, we can successfully reason more abstractly on an expression by thinking of its meaning rather than of the expression itself. When reasoning about the program, we can identify expressions having the same meaning. This process is typically called equational reasoning. Since the actual expression is hidden behind its meaning, compositionality can also be seen as a specific form of information hiding by considering the meaning of a program to be its interface.

Active Object [AO]

The active object design pattern decouples method execution from method invocation for objects that each reside in their own thread of control. Typically, an active object is constructed using an internal thread and a queue of operations or events that will be executed on the active object’s thread. The goal is to enable concurrency using asynchronous invocation, and to eliminate the need for an object to worry about managing threading details itself: that’s all taken care of under the hood.
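The thread-plus-queue structure can be sketched in C with POSIX threads. All names are invented for the sketch, and it assumes a POSIX environment; a real implementation would handle queue overflow, priorities, and shutdown far more carefully.

```c
#include <stddef.h>
#include <pthread.h>

// Minimal active-object sketch: callers enqueue operations, and a private
// thread dequeues and executes them, decoupling invocation from execution.
#define QUEUE_CAP 16

typedef void (*op_fn)(int arg);

struct active_object {
    pthread_t thread;
    pthread_mutex_t lock;
    pthread_cond_t cond;
    struct { op_fn fn; int arg; } queue[QUEUE_CAP];
    int head, tail, count;
    int running;
};

static void *ao_run(void *p)
{
    struct active_object *ao = p;
    for (;;) {
        pthread_mutex_lock(&ao->lock);
        while (ao->count == 0 && ao->running) {
            pthread_cond_wait(&ao->cond, &ao->lock);
        }
        if (ao->count == 0 && !ao->running) {
            pthread_mutex_unlock(&ao->lock);
            return NULL;  /* drained and stopped */
        }
        op_fn fn = ao->queue[ao->head].fn;
        int arg = ao->queue[ao->head].arg;
        ao->head = (ao->head + 1) % QUEUE_CAP;
        ao->count--;
        pthread_mutex_unlock(&ao->lock);
        fn(arg);  /* executed on the active object's own thread */
    }
}

static void ao_start(struct active_object *ao)
{
    ao->head = ao->tail = ao->count = 0;
    ao->running = 1;
    pthread_mutex_init(&ao->lock, NULL);
    pthread_cond_init(&ao->cond, NULL);
    pthread_create(&ao->thread, NULL, ao_run, ao);
}

// Asynchronous invocation: returns immediately after enqueueing.
static void ao_post(struct active_object *ao, op_fn fn, int arg)
{
    pthread_mutex_lock(&ao->lock);
    if (ao->count < QUEUE_CAP) {
        ao->queue[ao->tail].fn = fn;
        ao->queue[ao->tail].arg = arg;
        ao->tail = (ao->tail + 1) % QUEUE_CAP;
        ao->count++;
        pthread_cond_signal(&ao->cond);
    }
    pthread_mutex_unlock(&ao->lock);
}

static void ao_stop(struct active_object *ao)
{
    pthread_mutex_lock(&ao->lock);
    ao->running = 0;
    pthread_cond_signal(&ao->cond);
    pthread_mutex_unlock(&ao->lock);
    pthread_join(ao->thread, NULL);  /* drains remaining work first */
}
```

Callers interact only with `ao_post`; the threading details stay "under the hood," which is the point of the pattern.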

From Around the Web

Rust

Rust is a systems programming language that places an emphasis on memory and concurrency safety. Because of these factors, it is gaining traction in the embedded community.

Table of Contents:

  1. Books
  2. From Around the Web
    1. Learning Rust
    2. Code Analysis
    3. Rust for Embedded
    4. Safety-Critical Rust
    5. Rust Patterns
    6. Mixing Languages

Books

From Around the Web

Learning Rust

Code Analysis

Rust for Embedded

Safety-Critical Rust

Ferrous Systems has been working on qualifying Rust for safety-critical system development with its Ferrocene toolchain.

Rust Patterns

Mixing Languages

C

C is a general-purpose imperative programming language widely used for embedded systems development.

Table of Contents:

  1. From Around the Web
    1. Beginners
    2. Standard Library
    3. Pointers
    4. Variadic Functions
    5. Volatile
    6. Security & Safety
    7. Undefined Behavior
    8. C11
    9. References
  2. Tools
  3. Advanced Techniques
    1. Bitwise Operations
  4. Exceptions
  5. Objects in C
  6. Information Hiding in C
  7. Polymorphism and Inheritance in C
  8. From Embedded Artistry
  9. Recommended C Libraries

From Around the Web

Beginners

The classic introductory recommendation for programming in C is Brian W. Kernighan and Dennis M. Ritchie’s The C Programming Language.

If you prefer more of a course-based approach, Learn Code the Hard Way has a Learn C the Hard Way course. We recommend this course because it provides hands-on demos and examples.

An excellent C crash course can be found on Embedded.fm in the Embedded Wednesdays series:

The following Embedded.fm articles can be used to build upon your new C knowledge:

The Atoms of Confusion website provides information on making confusing code constructs more understandable. Review these common C confusion points to improve your programming abilities. Avoid unclear constructs whenever you’re able to.

For an in-depth test of your C knowledge, try figuring out the Bad C Analysis interview question.

Standard Library

The C standard library is commonly called libc, and occasionally stdlib or cstdlib.

For more information and relevant links, see the dedicated glossary entry.

Articles related to standard library evolution:

Pointers

Pointers are the foundation of C, yet many developers are intimidated by them. These resources will help you better understand pointers.

Variadic Functions

Volatile

Embedded developers will defend their beloved volatile keyword to the death. Make sure you understand what the keyword actually does under the hood!

If in doubt, do not declare a variable volatile. Instead, cast it to volatile in the particular spot where you want to suppress optimizations.
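To illustrate the casting approach, here is a minimal sketch. The function name and register address are hypothetical; the point is that the volatile qualifier is applied at the access site rather than on the variable's declaration:

```c
#include <stdint.h>

// Hypothetical memory-mapped register address, for illustration only.
#define STATUS_REG ((volatile uint32_t *)0x40021000u)

// Casting to volatile at the one access that matters forces a real load:
// the compiler may not cache or elide this read, but every other use of
// the underlying object remains fully optimizable.
static inline uint32_t read_once(const uint32_t *addr)
{
    return *(volatile const uint32_t *)addr;
}
```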

Security & Safety

MITRE outlines common weaknesses found in software written in C. Familiarize yourself with these common security flaws to improve your programming abilities.

No, strncpy() is not a “safer” strcpy() points out the weaknesses with strncpy().
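A short demonstration of the core weakness: strncpy() will not write a terminating NUL when the source is at least as long as the destination buffer. The bounded_copy() helper below is our own illustrative idiom, not a standard function:

```c
#include <string.h>

// strncpy(dest, src, n) copies at most n bytes and does NOT write a
// terminating NUL when src is n bytes or longer -- dest may not be a
// valid C string afterwards.
//
// A defensive idiom: explicitly terminate after every strncpy() call,
// accepting truncation in exchange for a guaranteed valid string.
static void bounded_copy(char *dest, size_t dest_size, const char *src)
{
    strncpy(dest, src, dest_size);
    dest[dest_size - 1] = '\0'; // guarantee termination (may truncate)
}
```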

Secure Coding in C and C++ (2nd Edition) (SEI Series in Software Engineering)

Microsoft is working on Checked C, a language extension project that adds static and dynamic (runtime) checking for common errors such as buffer overruns, out-of-bounds memory accesses, and incorrect type casts.

For C coding standards with a focus on safety and security, see:

Undefined Behavior

Undefined behavior abounds in the C programming language, and programmers easily trip over it. Here are some resources to improve your knowledge of undefined behavior:
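As one concrete example of how easily undefined behavior creeps in, consider checking for signed overflow. The helper below is an illustrative sketch:

```c
#include <limits.h>

// Signed integer overflow is undefined behavior. A check written as
//     if (a + b < a) { /* overflow! */ }
// may be deleted entirely by the optimizer, because the compiler is
// allowed to assume that a + b never overflows.
//
// A well-defined check tests the operands *before* adding:
static int addition_would_overflow(int a, int b)
{
    return (b > 0 && a > INT_MAX - b) ||
           (b < 0 && a < INT_MIN - b);
}
```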

C11

References

Tools

Advanced Techniques

The following advanced techniques are reviewed below:

  1. Bitwise Operations
  2. Exceptions
  3. Objects in C
  4. Information Hiding in C
  5. Polymorphism and Inheritance in C

Bitwise Operations

The best resource for bit manipulation routines is Bit Twiddling Hacks. This code is in the public domain and is used in a variety of projects I’ve worked on.
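As a taste of the genre, here is Brian Kernighan's population-count loop alongside the classic power-of-two test, sketched in plain C:

```c
#include <stdint.h>

// Brian Kernighan's method: each iteration clears the lowest set bit,
// so the loop runs once per set bit rather than once per bit position.
static unsigned popcount32(uint32_t v)
{
    unsigned count = 0;
    for (; v; count++)
    {
        v &= v - 1; // clear the least significant set bit
    }
    return count;
}

// A classic single-expression test: powers of two have exactly one set
// bit, so clearing that bit leaves zero. The leading v && rejects zero.
static int is_power_of_two(uint32_t v)
{
    return v && !(v & (v - 1));
}
```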

Exceptions

You can implement exception-like behavior in C with libraries. Our favorite is CException, written by the Throw the Switch team.
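Under the hood, these libraries build on setjmp()/longjmp(). The sketch below shows the basic mechanism only; it is not CException's actual API, and the function names and error code are illustrative:

```c
#include <setjmp.h>

enum { ERR_SENSOR_TIMEOUT = 1 }; // illustrative error code

static jmp_buf error_context;

static void read_sensor(int should_fail)
{
    if (should_fail)
    {
        longjmp(error_context, ERR_SENSOR_TIMEOUT); // the "throw"
    }
    // ... normal sensor handling would continue here ...
}

// Returns 0 on success, or the error code "thrown" via longjmp.
static int guarded_read(int should_fail)
{
    if (setjmp(error_context) == 0) // the "try": 0 on the initial call
    {
        read_sensor(should_fail); // may longjmp back to setjmp above
        return 0;
    }
    return ERR_SENSOR_TIMEOUT; // the "catch": we arrived here via longjmp
}
```

Note that real libraries add nesting, volatile-correctness, and cleaner macros on top of this bare mechanism; this is only the skeleton.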

Objects in C

For information on how to use “objects” in C, please see this Field Atlas entry.

Information Hiding in C

We can take the previous approach to objects in C further by applying information hiding and encapsulation using opaque pointers.

For a practical example that uses this technique, see Creating a Circular Buffer in C and C++.
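Here is a minimal sketch of the opaque-pointer technique, using hypothetical counter names rather than any real library's API. In a real project, the two halves would live in separate header and source files:

```c
#include <stdlib.h>

/* counter.h -- public interface: clients see only an opaque handle.
 * These names are illustrative, not from a real library. */
typedef struct counter counter_t;

counter_t *counter_create(int start);
void counter_increment(counter_t *c);
int counter_value(const counter_t *c);
void counter_destroy(counter_t *c);

/* counter.c -- the struct definition is visible only here, so clients
 * cannot touch the fields directly and the layout is free to change. */
struct counter
{
    int value;
};

counter_t *counter_create(int start)
{
    counter_t *c = malloc(sizeof(*c));
    if (c)
    {
        c->value = start;
    }
    return c;
}

void counter_increment(counter_t *c) { c->value++; }

int counter_value(const counter_t *c) { return c->value; }

void counter_destroy(counter_t *c) { free(c); }
```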

Polymorphism and Inheritance in C

For information on how to use polymorphism and inheritance in C, please see this Field Atlas entry.

From Embedded Artistry

cdecl


Product Development

Product development typically refers to all of the stages involved in bringing a product from concept to customer release.

Aspects

Product development encompasses several aspects:

Managing Components and Documents

Considerations

References

  • Learning to Learn: A New Look at Product Development – The Systems Thinker
  • Patterns in the Machine: A Software Engineering Guide to Embedded Development by John Taylor and Wayne Taylor

    All software resists shipment. No matter what your release date, there are always last-minute features that become critical and last-minute bugs that are uncovered. All of these things will reset your release timeline. Additionally, there can be noncode, nontechnical activities that slow things down like licensing reviews and export control paperwork. And the bigger the project is, the more people there are that can come up with reasons and roadblocks that force a reset of the release timeline. Don’t be fooled into thinking, then, that after the last line of code has been written, the hard part is done. You have to beat software out the door with a stick.

C++

C++ is a compiled programming language originally derived from C. C++ supports object-oriented, generic, and functional programming features.

Table of Contents:

  1. Books
  2. C++ YouTube Playlist
  3. From Embedded Artistry
  4. From Around the Web
  5. Recommended C++ Libraries

Books

Here are our favorite C++ books:

C++ YouTube Playlist

We have a YouTube playlist with our favorite lectures on C++.

From Embedded Artistry

From Around the Web

References

  • C++ Reference – the best online reference for the language
  • ISO C++ Website – the official C++ website
    • The Super-FAQ contains a treasure trove of knowledge that is especially useful to new developers
  • C++ Core Guidelines is a collaborative effort to provide guidelines for the effective use of C++ (since C++11).
  • C++ Best Practices – a “Collaborative Collection of C++ Best Practices” from Jason Turner
    • Covers the safety, maintainability, portability, thread-ability, and performance of C++ code

Blogs

Not active, but full of quality content:

On C++ Templates

Rainer Grimm, a C++ trainer, has a series that teaches you about C++ templates:

On C++ Concepts

Concepts are a new addition to C++20. The topic can be complex enough to deserve a bit of dedicated study.

Ramp

“Ramp” refers to the NPI process stage following PVT. The main goal of Ramp is to transition from PVT to Mass Production (MP) output volumes. In practice, Ramp is a distinct stage, but it is often described as a “subset” of MP.

Ramp can involve a number of activities:

  • Additional assembly lines are brought up to increase production volumes.
  • Processes may be sped up until they reach an optimum in the tradeoff between throughput and quality.
  • Test limits may be adjusted to increase the throughput on the line.

Ramp also faces a number of challenges:

  • The pressure to increase throughput and output can result in poor-quality parts being allowed onto the line, causing an increase in failures.
  • One or more components can gate the transition to mass production due to quality problems or late deliveries.
  • As production quantities increase, the absolute number of failed units that must be inspected by the engineering team increases, leading to potential delays in identifying and addressing critical problems.

NPI Process Flow

  • Ramp immediately follows PVT.
  • Ramp transitions into Mass Production once assembly lines have been brought up, quality/throughput have stabilized, and you have sufficient build material in house and on the way to manufacture the desired number of devices.

References

Production Validation and Test [PVT]

PVT is a stage in the NPI process. During PVT, no further design changes are expected. You are focused on working out the final kinks in the manufacturing process before entering Mass Production. The goal of the engineering team is to produce one “golden line” that operates with desired yield and throughput. This golden line can then be replicated by the operations teams to scale up production.

Qualities of PVT

  • Volumes are typically 5-10% of the initial mass-production run.
  • Production-intent processes and parts are in place.
    • Custom tooled parts are used: no more milling, printing, soft molds, etc.
    • Some tools and processes may be introduced at PVT; they are in effect being qualified before entering mass production.
    • In practice, there may still be some experiments going on at PVT, but this is not the ideal situation and you should strive to avoid it.
  • The units that are built at PVT are revenue-able (can be sold to customers).
    • If this is not the case, you’re not ready for PVT; you’re still running a DVT build.
  • The focus is on improving yield and throughput to hit mass production goals.
    • The build will often start “slow” and speed up while moving through gated phases (‘red’, ‘yellow’, ‘green’ being common phase descriptors) that reflect operator training level, throughput, and yield levels.
    • Test station software and manufacturing firmware are improved to reduce retest rates and cycle time.
    • Process flaws are addressed to improve yield and throughput.
    • Cosmetic fallout caused by activities on the manufacturing line is addressed.
      • PVT is often heavily focused on cosmetic yield.
  • The packaging flow is perfected.
  • Outgoing Quality Control (OQC) and/or Final Quality Control (FQC) processes roll out in force.
  • Proceeding into the next PVT build phase (or on to Ramp and Mass Production) is often gated by a problematic vendor or three, whether due to yield problems, insufficient build quantities, late deliveries, or other problems.

Uses of PVT Units

PVT units are used for:

  • Sale
    • Units at this stage should ideally be “revenue-able” – able to be sold to customers.
    • Often, cosmetic flaws are the reason that units from PVT will not be sold. You can still use these for other purposes. If not terribly egregious, you can offer them to friends and family at a significant discount.
  • Internal development
  • Beta testing
  • “Golden units” (ideal devices) are used for GR&R activities, test station validation, and manufacturing firmware validation at the CM

NPI Process Flow

  • The PVT stage begins once DVT has been completed and:
    • There is sufficient confidence in addressing yield loss issues
    • Certifications have been completed
    • Packaging is ready
    • Reliability and environmental testing show acceptable results
  • A significant change in the design at the PVT stage should move the product back to DVT. In practice, you are more likely to see “Pre-PVT” or “PVT-2” builds than a reset to DVT.
  • PVT is completed when:
    • There are acceptable yields and throughput for mass production on at least one manufacturing line (the “golden line”, which will be replicated to other lines).
    • You have sufficient build material in house and on the way to manufacture the desired number of devices.
  • After the PVT stage is completed, production begins to ramp to Mass Production levels.

References

  • Hardware engineers speak in code: EVT, DVT, PVT decoded by Anna-Katrina Shedletsky

    PVT is the “last build” — the units you are building are supposedly intended to be sold to customers, if they pass all of your test stations. PVT typically transitions directly into Ramp and Mass Production, or a Pilot build with no time gap.

    Purpose: to verify mass production yields at mass production speeds

    • Validate and qualify additional tools needed to support quantities for early ramp
    • No parallel experimental units allowed (I have never seen this actually happen, but it is a goal that should be driven to for as long as possible)

    Typical Quantities: 1K to 20K

    • All units are intended to be sold to customers
    • The build is potentially phased — red, yellow, green is common — indicating “maturity” of the production process, which includes a combination of operator training level, line speed, and line yield

    Things that Go Wrong:

    • There is almost always at least one issue that is still outstanding at the start of PVT — this is likely the item at highest risk of impacting your schedule
    • There is usually at least one vendor whose yields are way lower than expected, and because they cannot produce at the quantities promised, input is gated by their deliveries
    • If you have a high cosmetic standard, your cosmetic yield likely starts at 0%. Unless you decide to loosen your standard, the conventional way to improve it is to knowingly input units to a 0% yield line and painstakingly seek places where damage occurs and improve them. This process can take weeks and hundreds or thousands of units. An Instrumental system can streamline and significantly accelerate this process

    Exit Criteria: mass production yields at mass production speeds on at least one line, and replication to other lines already started.

  • The different engineering validation stages in a nutshell | EVT, DVT, PVT | by Chris Boucher | Medium
  • Overview of the hardware product development stages: POC – EVT – DVT – PVT explained
    • PVT objectives:
      1. Verify mass-production yields;
      2. Finalize DF-X with the help of CM aiming to minimize waste and make assembly more efficient;
      3. Make the first pilot production run and ensure the product quality adheres to your expectations;
      4. Weed out the last design flaws during the pilot production run;
    • PVT prototype quantities typically range between 50 and 500 in order to verify mass-production yields and provide product samples.
    • Technologies: Industrial technologies suitable for volume production only;
    • Outputs / Deliverables: Final product produced in a limited quantity by using the tools for mass-production. Electronic layouts and components are revisited using PCB stencils for soldering components. Mechanical DFM is finalized and plastic parts are manufactured by using 2nd generation moulds.
    • Duration: 3-6 months in general.
    • Limitations: The time required to design and produce custom tools is generally long.