New Product Introduction [NPI Process]

The New Product Introduction (NPI) process is a framework for taking a new product from design to manufacturing through a series of structured phases. The goal of the process is to produce a functional, reliable, manufacturable, and cost-effective product.

The NPI process is not a standard that is set in stone, but rather a general pattern. The implementation and interpretation of the process and its stages vary from company to company. We learned the NPI process at Apple, which certainly colors our interpretation of the process. You will find descriptions of the process similar to ours from other ex-Apple engineers, such as Anna-Katrina Shedletsky of Instrumental.

Stages

  1. Product Requirements Specification (also called the Product Requirements Document)
  2. Proto
  3. EVT
  4. DVT
  5. PVT
  6. Ramp
  7. Mass Production

References

Mass Production [MP]

Mass production (MP) is the final stage in the NPI process and represents sustaining production of the new product in meaningful quantities.

In MP, the bulk of the responsibility shifts from engineering to the CM and operations team.

Ideally, at this stage, there will be no changes to the product or process. In practice, there will often be ongoing efforts to reduce costs and improve yields (even if only by loosening limits). New tools or vendors will likely need to be qualified at some point (through a Post Ramp Qualification) – often due to cost advantages, supply problems, or end-of-life issues. Minor design changes and test coverage improvements may occur, especially as a result of Early Field Failure Analysis (EFFA) activities.

NPI Process Flow

  • Ramp transitions into Mass Production once sufficient assembly lines have been brought up and quality/throughput have stabilized.
  • Eventually, engineering resources cycle off of the project and the factory manages successive mass production runs. Quality usually degrades when the factory is left unsupervised.
  • At some point, the project will reach end-of-life (EOL).

References

  • Manufacturing
  • Hardware engineers speak in code: EVT, DVT, PVT decoded by Anna-Katrina Shedletsky

    PVT flows immediately into the phase of the program called Ramp, where parallel assembly lines are being brought up to increase daily output volume. Mass Production is a superset of Ramp and the sustaining production that follows.

    Purpose:

    • Bring up multiple lines in parallel to support high volume
    • Continue to improve ongoing yield
    • Qualify additional tools or vendors
    • Make design changes based on returns, Early Field Failure Analysis (EFFA), or cost down efforts

    Things that Go Wrong:

    • Vendors change processing parameters or take down tools for maintenance, resulting in dimensional or quality shifts that can cause line failures
    • Parts from unqualified tools are allowed on the line and cause failures
    • A single-sourced part becomes the supply gate, usually due to ongoing yield issues
    • Quality tends to decrease as engineering is pulled away and factory is left unsupervised

Manufacturing [Mfg]

Manufacturing refers to the production of the devices that we create. This involves machines, human labor, chemical processes, testing, packaging, and more.

This page primarily collects manufacturing glossary terms. For technical information, please see our entry on Manufacturing Devices.

Manufacturing Glossary Entries

Engineering Validation and Test [EVT]

EVT is a stage in the NPI process. EVT units are intended to test the functionality of your product against its requirements. In some sense, EVT is a “feasibility study” of the design.

EVT builds will often be the first time that a proper Form Factor Engineering Prototype device is built – one that both works like and looks like the intended product. However, it is still common to produce Non-Enclosed Devices (NEDs), since some materials, such as housings, may be in short supply. NEDs are also useful for software development and EE teams, as they provide easier access to components for debugging and measurement purposes.

True yields at the EVT stage are quite low. There will be manufacturing process errors, out-of-spec components, and other problems. This is useful, however, as the engineering team will investigate failures and improve the design and manufacturing processes to improve yield (though significant improvements may not be seen until future builds).

EVT units should meet the requirements outlined in the Product Requirements Specification before proceeding to DVT. There will often be significant design changes that need to be made at this stage, requiring at least one more EVT event (“EVT-2”). We have rarely worked on products that made it past EVT with a single build event.

Qualities of EVT

  • EVT builds produce a small quantity of units. Of course, what “small” means varies according to company resources and product cost.
    • Small may be “5-25 units”, built in 1-5 unit batch sizes
    • Small may be “50-100 units”, built in 5-10 unit batch sizes
    • Small may be “500-1000 units”, built in 50-100 unit batch sizes
  • EVT builds often involve several configurations (e.g., distinguished by using different component vendors)
  • Production-intent materials are used, however:
    • Cosmetics are almost always ignored at this stage
    • Production-intent materials may not be available (e.g., tooling has not been kicked off), so 3D printing, soft-tooled parts, or milled parts may be used in place of die-cast or molded parts
  • Testing happens, but is often a secondary concern
    • Often, you’re bringing up test stations for the first time at EVT.
    • Software may not yet be stable, requiring frequent software updates to address issues.
    • At a minimum, test stations should be collecting data. Limits may be wide open, or selected to catch only egregious failures.
    • If parametric test limits exist, you will often still pass every unit through the line even when out-of-spec.
      • Units that fail in some particular way can still be useful for other teams, e.g., for software development.
  • Manufacturing process steps are refined as the team works through the assembly, testing, and repair processes
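
The wide-open limits and pass-everything behavior described above can be sketched as follows. This is a minimal illustration, not a real test-station implementation; the station name, limit values, and field names are all hypothetical:

```python
# Sketch of a parametric test step at EVT: limits are wide open (or only
# logged), and units often continue down the line even on failure so they
# remain useful for other teams. All names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class TestLimit:
    name: str
    lower: float
    upper: float

def evaluate(measurement: float, limit: TestLimit) -> bool:
    """Return True if the measurement falls within the limit window."""
    return limit.lower <= measurement <= limit.upper

# At EVT, limits may be selected only to catch egregious failures.
evt_limit = TestLimit("battery_voltage_mV", lower=2500.0, upper=5500.0)

def run_station(measurement: float, limit: TestLimit,
                continue_on_fail: bool = True):
    """Record the result; at EVT the unit typically proceeds regardless."""
    passed = evaluate(measurement, limit)
    record = {"test": limit.name, "value": measurement, "passed": passed}
    # With continue-on-fail, a failing unit is flagged but not pulled,
    # so it can still be used for, e.g., software development.
    proceed = passed or continue_on_fail
    return record, proceed

record, proceed = run_station(2450.0, evt_limit)  # out of spec, but proceeds
```

The key point is that the station always collects data, even when the unit is out of spec; the pass/fail decision and the line-flow decision are deliberately decoupled at this stage.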

Uses of EVT Units

EVT units are used for:

  • Internal development
  • Validation of the design
  • Identification of issues that need to be fixed
  • Comparing alternative configurations (e.g., different component vendors)

NPI Process Flow

  • EVT follows the Proto Stage, when the team has a path forward on a design that is worth the manufacturing effort.
  • The EVT stage is completed when there is at least one production-worthy product configuration that meets the requirements outlined in the Product Requirements Specification. If this has not happened, another EVT event will be scheduled, incorporating improvements from the previous design.
    • Some companies will set yield targets for exiting EVT, but these will usually be quite low (e.g. 60% yield).
  • After EVT is completed, the DVT stage begins.

References

  • Hardware engineers speak in code: EVT, DVT, PVT decoded by Anna-Katrina Shedletsky

    The EVT build is the first time you combine looks-like and works-like into one form factor, with production intent materials and manufacturing processes.

    Purpose:

    1. To select the production intent design, sometimes from a build matrix of options
    2. To identify all of the issues that need to be fixed with that design

    Typical Quantities: 100 to 1000

    • Units must be fully functional and testable, made from the intended materials and with the intended manufacturing process, but may be from soft-tools (if you’re using 3D printed parts, it’s not EVT!)
    • All functional test stations must be present and collecting data

    Things that Go Wrong:

    • A new revision of an intended design does not work after reliability testing
    • Tighter than expected (or capable) tolerances are needed to meet the intended performance specifications — such as with an antenna element
    • Depending on product complexity, up to ~40% of the units built may fail for a variety of functional or performance reasons and need to be analyzed
    • Engineering has started the battle to get glue processes, hand-soldering, environmental seals, and other tricky steps under control

    Exit Criteria: one production-worthy configuration that meets all of the product requirements for functionality, performance, and reliability

  • The different engineering validation stages in a nutshell | EVT, DVT, PVT | by Chris Boucher | Medium
  • Overview of the hardware product development stages: POC – EVT – DVT – PVT explained
    • The objective of the EVT is to combine look-alike and work-like subsystem prototypes made of intended components to meet the functional requirements in the form factor as per your PRD (product requirements Document).
    • EVT prototype quantities: 3-50 units, depending on the design complexity and BOM cost. On average, 5-12 prototypes are required to complete the EVT.
    • Technologies: 3D printing, laser cut/milled PCBs, soft tooling (silicon molds), professional hardware development kits (HDK), rapidly cut/milled parts;
    • Outputs / Deliverables: fully-functional prototype with key components performing as intended.
    • Limitations: Prototypes delivered throughout the EVT phase may look somewhat ugly, raw and have a lack of beautiful cosmetic finish. The EVT prototype can also miss some non-key mechanical features such as handles, curves in enclosure, painting, etc.

Design Validation and Test [DVT]

DVT is a stage in the NPI process. DVT units should represent, as much as possible, the final production-intent design. No major future design changes should be expected at the start of DVT (otherwise, you should have another EVT build).

The goal of DVT is to validate that the MP-intent production process can build production-intent units at sufficient quality. Unlike EVT, DVT enforces test limits. Fallout rates are often high, especially early in the build, requiring engineering engagement to correct the problems and bring yields up.

At the end of DVT, you should be confident that any issues causing unacceptable yield losses have been (or will be) corrected. If yields are not at an acceptable level, or resolutions are uncertain, another DVT event is warranted (“DVT-2”).

Qualities of DVT

  • Units are produced at “medium” quantities: 2-5x EVT quantities.
    • This often means 250-2500 units are produced (and in larger batch sizes than at EVT).
  • Units are produced in fewer configurations than EVT – ideally, one per SKU. This is rarely adhered to, however.
    • There are often challenges such as your production-intent supplier producing lower-than-expected yields, requiring you to evaluate an alternative.
    • Additional configurations may be created as cost-down experiments.
    • Keep in mind that additional configurations add significant costs. You need to build each configuration in sufficient quantity to prove that the design is suitable for production.
      • If you are building multiple small-quantity configurations, you are probably not at the DVT stage.
    • There may be experiments or “DOEs” to evaluate different process parameters: different glue vendors, varied glue curing times, modified assembly orders, etc.
  • Production-intent components should be used
    • Devices are all form factor units
    • Components should come from production processes (e.g., using hard tools, not soft tools or prints or mills)
      • This is often the first time that hard tools are used at a build and thus represents qualification for those tools.
      • Economic reasons may still require the use of, e.g., milled parts instead of hard-tooled parts, but this should be minimized, as it represents a significant risk if you see these parts for the first time at PVT.
    • Cosmetics may still not be at the desired quality level from the supplier
    • Capabilities like dust- and water-proofing should work at DVT
  • Manufacturing test stations are enforcing realistic limits, allowing you to understand (and improve) actual process yield.
    • Since test limits are newly enforced, failures may still be waived, depending on how egregious the failure is. This often involves setting “continue-on-fail” policies, allowing you to track your true process yield while still producing units that are good enough for development or testing purposes.
  • Packaging is typically introduced, and packaging processes evaluated
  • Additional checks on the manufacturing line are added: e.g., cosmetic inspection, OQC
    • For cosmetics, there will often be an effort to track down cosmetic fallout introduced on the line, but this is often difficult when input components are not cosmetically sound

Uses of DVT Units

DVT units are used for:

  • Development
  • Certification efforts (FCC certification, UL certification, Bluetooth certification, etc.)
  • Reliability and environmental testing
  • Internal and external beta testing
  • Test station software and manufacturing firmware validation at the CM

NPI Process Flow

  • The DVT stage begins once there is at least one EVT configuration that meets the requirements outlined in the Product Requirements Specification
  • The DVT stage is complete when:
    • Yield loss problems have been addressed (or there is high confidence that corrective actions put in place after DVT will address the yield problems)
    • Certifications have passed (design modifications to address certification failures may be significant enough to warrant another DVT event, though some teams plunge into PVT anyway)
    • Reliability and environmental testing have yielded acceptable results (design modifications to address failures may be significant enough to warrant another DVT event, though some teams plunge into PVT anyway)
    • Packaging for the device is finalized
    • As a reminder, the DVT units must still meet the Product Requirements Specification.
  • After the DVT stage is complete, PVT begins.


  • Manufacturing Test limits are enforced at DVT and used to fail units from the line. However, test limits may still be wider than expected in future build stages.
  • Outgoing Quality Control is often introduced at this stage for process development and feedback on manufacturing

References

  • Hardware engineers speak in code: EVT, DVT, PVT decoded by Anna-Katrina Shedletsky

    The DVT build is supposed to be one configuration of your production-worthy design, made of components from production processes (and hard tools) and on a line following production procedures. I believe very few companies actually stick to this requirement — because even if miraculously there are no outstanding issues, there may be parallel efforts to cut cost or increase yields that create additional configurations to build.

    If you do have functional, performance, or reliability issues that are driving Plan B and Plan C configurations at this stage, it can be costly because each of those alternates needs to be built in “full quantity” to ensure that design can be fully mass-production qualified by the end of the build. I believe that’s the real test for whether you are at DVT or not: if you are running side configurations of 20 units, you are fooling yourself, and should call it EVT2.

    Purpose:

    1. To verify mass production yields with one production-worthy design (one configuration for each shipping SKU)
    2. To qualify the first hard tool for every part in the assembly

    Typical Quantities: 300 to 2000

    • All parts should be from hard tools or mass production capable processes
    • All functional test stations must be present with limits in place to understand true yields

    Things that Go Wrong:

    • High functional fallout rates — requiring the need for fast failure analysis and corrective actions
    • Cosmetic yields are 0% — there may be an effort to try to track down and fix cosmetic aggressors, but it is usually fruitless because your cosmetic part suppliers are likely still shipping scratched parts (and you are having to waive them)
    • DOEs (there’s another one! Design of Experiments, mentally replace with “experiments”) are run with alternate glues or curing parameters
    • There are nightly calls with vendors demanding support or giving updates to hardware company executives

    Exit Criteria: high confidence in all corrective actions for any issue that causes unacceptable yields on units using mass production parts made from mass production tools.

  • The different engineering validation stages in a nutshell | EVT, DVT, PVT | by Chris Boucher | Medium
  • Overview of the hardware product development stages: POC – EVT – DVT – PVT explained
    • The objective of the DVT is to fix the design (i.e. dimensions, weight, materials, finish, moving mechanical parts) and rationalize the final product’s features.
      1. At this stage you should carefully revise and consider features vs product quality/finish vs production and BOM cost vs production volume.
      2. Complete the necessary certifications;
      3. Develop and finalize boxing and packaging
      4. Commence to request RFQs from mass-producers and devise plans for logistics.
    • DVT prototype quantities: typically 20-200 units, depending on the design complexity and BOM cost. The prototypes will be used for various reasons: certification lab tests, “beta tests” with early customers/testers.
    • Technologies: 3D printed + gel-coated enclosures with the finish “as from the factory”, rapidly cut/milled parts; industrial equipment (e.g. injection moulding) and 1st generation tooling (e.g. “quick moulds”).
    • Outputs / Deliverables: a [batch of] functional prototypes ready for mass-production with BOM and a design documentation package. Boxing and Packaging design completed. Estimate mass-production yields
    • Limitations: The DVT prototypes and documentation is nearly final and can be slightly changed further in development. Some mechanical parts and electronic components may not be final due to economic reasons (e.g. it is cheaper to CNC mill some metallic parts instead of using dye casting).

Coupling

Coupling is a measure of the degree to which a component or module depends upon other components or modules in the system. Coupling is a qualitative value, which we can get a sense of by asking: Given two components X and Y, if X changes, how much code in Y must be changed?

Minimizing coupling is desirable because it allows components and modules to be used, modified, and swapped independently from other components or modules in the system.

Describing Coupling

We usually describe coupling in terms of tight/strong/high coupling and loose/weak/low coupling.

  • Two components are loosely coupled when changes in one rarely (or never) necessitate changes in the other
  • Two components are tightly coupled when they cannot be easily separated – changing one component will require corresponding changes in the other component

We can think of coupling as being logical or physical:

  • Logical coupling refers to relationships between program abstractions
  • Physical coupling refers to relationships between files, such as source/header file inclusions or library linkage

Coupling can be actual or potential:

  • Actual coupling indicates that program elements are currently connected
  • Potential coupling means that an element is not currently connected, but its visibility would allow it to be connected
    • Most excess coupling exists as potential coupling

We can categorize coupling based on the type of connection between two components or modules:

  • Content coupling (high) – when one module uses the code of another module (violating information hiding)
  • Common coupling – when several modules have access to the same global data
  • External coupling – when two modules share an externally imposed data format, communication protocol, or device interface
  • Control coupling – when one module controls the flow of another by passing it information on what to do
  • Stamp coupling (aka data-structured coupling) – when modules share a composite data structure, and use only parts of it (e.g., passing a whole record to a function that only needs one field)
    • A modification to a field that the module does not need may mean the dependent module still has to change
  • Subclass coupling – describes the relationship between a child and its parent – the child is connected to the parent, but the parent is not connected to the child
  • Temporal coupling – when two actions are bundled together into one module because they occur at the same time
  • Data coupling (low) – when modules share data through parameters; each datum is an elementary piece, and these are the only data shared
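
The difference between the highest-coupling end of this list and the lowest can be sketched with a short, hypothetical example contrasting stamp coupling and data coupling:

```python
# Stamp coupling vs. data coupling (hypothetical example).

from dataclasses import dataclass

@dataclass
class EmployeeRecord:
    name: str
    department: str
    monthly_salary: float

# Stamp coupling: the function receives the whole composite structure
# but only needs one field. Changes to unrelated fields (or the record's
# shape) can still force changes here.
def annual_salary_stamp(record: EmployeeRecord) -> float:
    return record.monthly_salary * 12

# Data coupling: the function receives only the elementary datum it
# needs, so it is insulated from the rest of the record.
def annual_salary_data(monthly_salary: float) -> float:
    return monthly_salary * 12

employee = EmployeeRecord("Ada", "Engineering", 1000.0)
```

Both functions compute the same value, but the data-coupled version can be reused and tested without any knowledge of `EmployeeRecord`.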

Consequences of Tight Coupling

Tight coupling has many observable consequences:

  • Individual modules are more difficult to reuse, since they have dependencies on other modules in the existing system
  • Individual modules are more difficult to test, since the modules cannot be used in isolation
  • It makes the program more difficult to reason about, since changes in one module can impact multiple modules within a system (potentially in unexpected ways)
  • Changing requirements, design decisions, and hardware components will typically trigger large scale changes in order to accommodate them

Over time, tight coupling reduces the maintainability and flexibility of the system.

Most developers realize that excess coupling is harmful but they don’t resist it aggressively enough. Believe me: if you don’t manage coupling, coupling will manage you.
— Jerry Fitzpatrick, Timeless Laws of Software Development

Benefits of Loose Coupling

Loose coupling is desirable because it allows modules and components to be developed, used, and modified independently from one another. Loosely coupled components are much easier to test, because they can be isolated from other components in the system. They can be more easily reused for the same reasons.

Loosely coupled components can also be easily replaced with alternate implementations that provide the same functionality. This makes our systems more flexible and modifiable. This is especially useful for embedded systems, as loosely coupled hardware modules can be easily swapped, enabling us to quickly add support for new board designs, sensors, and peripheral devices.

Strategies for Reducing Coupling

“Uncertainty is not a license to guess. It is a directive to decouple.”
— Sandi Metz

There are a number of strategies we can use for reducing coupling.

Fundamentally, we can apply foundational software principles to create modules and components that are loosely coupled. We should strive to create modules that make use of information hiding. Our modules should adhere to the single responsibility principle, be cohesive, and provide minimal interfaces. Modules and components should communicate with each other through abstract interfaces whenever possible, allowing other modules or components that satisfy the interface to be substituted as needed. These principles lead to three decoupling tactics:

  • Keep related parts together
  • Reduce potential coupling by using visibility options
  • Create barriers to isolate or protect parts of the system

Coupling increases between two modules under the following conditions. We can try to eliminate these conditions wherever possible.

  • One module has an internal attribute/member that refers to (or is the type of) another module
  • One module calls the functions of another module
  • One module has a method that references another module (or its type) (e.g., via a return type or parameter)
  • One module is a subclass of another module
  • One module implements a specification described by another module

We can also employ specific techniques that enable loose coupling. For example, we can minimize potential coupling by carefully structuring our build systems to prevent undesired coupling between elements.

Considerations

Even in loosely coupled systems, changes that affect external interfaces will still require changes in other modules. The best defense against this is to create minimal interfaces.

Some people state that coupling isn’t always bad, primarily because it can allow us to create highly cohesive modules. In our view, minimizing coupling should be prioritized over maximizing cohesion, because tight coupling prevents us from easily replacing modules and means that we cannot make changes to a module’s implementation independently of other modules. We also must consider the fact that some pieces of our code will likely need to be tightly coupled to other pieces. Our goal is to constrain tight coupling to designated locations within the program.

However, we might reasonably ask: when is tight coupling worth the cost?

  • We may need to break cohesive modules into pieces for reasons such as performance or maintainability, which then necessitates tighter coupling between less-cohesive modules
  • There are conditions where we need to have every step of a process access a piece of global data or state, which introduces tight coupling among the components
  • Sometimes meeting performance requirements necessitates tight coupling
  • Sometimes the work required to create low coupling is not worth the investment in time/code (e.g., we are likely never going to need to swap or reuse this module, so we can keep it tightly coupled)
A few related points are worth keeping in mind:

  • Cohesion and coupling are closely related concepts. Low coupling is often achieved by creating highly cohesive components with narrow interfaces.
  • Loose coupling is often achieved through the technique of information hiding.
  • Loosely coupled code is much easier to modify, maintain, test, and reuse than tightly coupled code.
  • The Interface Segregation Principle focuses on reducing potential and actual coupling to unnecessary interfaces; other formulations highlight hiding interfaces you don’t control from application code, reducing coupling to them.

Coupling is a subject that we frequently address on our website.

References

Architecture Decision Record

Architecture decision records (ADRs) are a lightweight method for documenting architecturally-significant decisions within a project’s source control system.

This style of documentation serves as a great example of how we can document decisions on our projects, even if those decisions are not architecturally significant.

You can record significant decisions affecting the structure, dependencies, interfaces, techniques, or other aspects of your code within ADRs. They are kept within the project’s repository so they are easily accessible to developers, easily modifiable, and tracked through revision history. ADRs should be kept short (one to three pages) so they are easily digestible by developers.

Note

You can, of course, repurpose the ADR concept as a general decision record and track other project-related details, such as team structure, development process, communication styles, coding standards, tooling, etc.

Table of Contents:

  1. ADR Format
  2. Using ADRs
    1. ADRs as an Alternative to Design Documentation
  3. ADR Tooling and Templates
  4. Example ADRs
  5. Further Reading

ADR Format

The typical ADR format summarizes decisions in five parts:

  1. Title
  2. Status (e.g. proposed, accepted, deprecated, superseded)
  3. Context (description of the situation leading to the decision)
  4. Decision
  5. Consequences (positive, negative, neutral)

We recommend using both a numeric identifier (e.g., ADR-0001) and a human-readable name summarizing the decision.

Often, you will benefit from describing alternative approaches that were considered but rejected. You should also note why a given choice was rejected. This will prevent future developers from redoing work that you have already performed.

As a design evolves, it is common to have ADRs that relate to or supersede each other. You can add a “Related ADRs” section whenever appropriate, referencing other ADRs by ID, title, or a direct link.
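
A minimal Markdown template following this five-part format might look like the sketch below. The identifier, title, and all of the content are purely illustrative:

```markdown
# ADR-0007: Use CBOR for Device-to-Cloud Telemetry

## Status
Accepted

## Context
Telemetry payloads must fit within a 256-byte radio frame, and our
current JSON encoding regularly exceeds that budget.

## Decision
We will encode all telemetry messages as CBOR.

## Consequences
- Positive: payloads shrink enough to fit a single frame
- Negative: messages are no longer human-readable on the wire
- Neutral: a schema document must be maintained alongside the firmware

## Related ADRs
- Supersedes ADR-0003 (JSON telemetry encoding)
```

Note how the “Related ADRs” section links the decision history together rather than editing or deleting the superseded record.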

A good length to target for an ADR is 1-3 pages (in essence, a memo). This does not mean that essential information should be left out; supplemental information can be externalized and simply referenced from the ADR, allowing those who need the additional context to get it.

Using ADRs

  • One ADR documents one significant decision.
  • If a decision is reversed, amended, deprecated, or clarified, keep the corresponding ADR. Instead of deleting it, generate a new ADR, link the related decisions together, and mark the previous decision with a relevant status note.
  • Create ADRs for significant proposals that require a decision by the team. If the proposal is rejected, the ADR itself can be marked as such.

By keeping a full history of decisions, we help developers see the evolution of our decisions through time and provide the context for each decision.

ADRs as an Alternative to Design Documentation

ADRs are, in our view, a sustainable alternative to more traditional detailed design documentation.

Typically, design documentation begins to fail at some point as decisions outpace updating of the document. And once someone “forgets” to update the document, it is viewed as obsolete and never updated again.

ADRs do not have this problem. When a design change occurs, you can document the change and rationale in a new ADR, leaving the old one in place. The new ADR can indicate which decision(s) have been superseded. This also preserves the history of the design’s evolution and provides context as to why various changes to the design were made.

ADR Tooling and Templates

We prefer to use the adr-tools project, but you can also implement ADRs with a basic Markdown template. Feel free to tweak the templates to suit your team’s needs.

Example ADRs

Further Reading

  • Documenting Architectural Decisions Within our Repositories
  • Q&A: How We Document Software Projects discusses ADRs as one documentation tool we use
  • Documenting Architecture Decisions by Michael Nygard
  • Scaling the Practice of Architecture, Conversationally, by Andrew Harmel-Law, discusses Decision Records as a key supporting element for a scalable Architecture practice.
  • Documenting Software Architectures: Views and Beyond, by Clements et al., discusses decision records. Some select quotes below.
    • Rejected. Decision that does not hold in the current system; but we keep such decisions around as part of the system rationale (see subsumes in the next list).
    • Obsolesced. Similar to rejected, but the decision was not explicitly rejected (in favor of another one, for example) but simply became “moot”—for example, as a result of some higher level restructuring.
  • Episode 35: Better Built by Burkhard Stubert

    ADRs were exactly what I needed! And even a bit more. I could use them for decisions about system architecture, team structure, development process – and any other decision.

    After the interviews with the development teams and with their stakeholders, managers and executives, I would write down the important topics requiring a decision. I would describe the context, possible options, the consequences of the decision and my recommended decision.

    The decision records (DRs) were available on the company’s intranet for everyone to read. So, everyone could comment on the recommended decision. In weekly meetings, a team of managers, architects, senior developers and me discussed the recommended decisions. We revised the records, decided some right away and deferred some to gather more information for next week’s meeting.

    […]
    A big advantage of DRs is that they decouple creation, discussion and decision temporally. Everyone can think through the decision asynchronously. That leads to more constructive and respectful discussions than the fights in meetings. Jeff Bezos, Amazon’s founder, has made asynchronous communication and excellent preparation a cornerstone of Amazon’s meetings.

Abstract Interface

An “abstract interface” is a higher-order abstraction that can represent one or more concrete implementations. An abstract interface defines the required minimum contract that all suitable implementations must satisfy: function signatures, preconditions, postconditions, and behavioral guarantees. What is revealed is the contract callers can rely on; what is hidden are internal state, algorithms, and platform dependencies. Note that the interface captures what all valid implementations must provide, not the union of every possible feature.

Abstract interfaces can take many forms, all of which are mechanisms for expressing the same thing: a contract that callers can rely on and implementations must satisfy.

  • a straightforward header-defined interface
  • a collection of function pointers
  • an abstract base class
  • Rust traits
  • C++ concepts or templates

Abstract interfaces are used to improve the changeability of software. Programs are written against the interface, keeping details about the underlying implementation secret. The fundamental goal is to be able to swap different implementations without needing to change components that interact with the interface.

Courses

Blog Posts

Papers

Selected Quotes

  • Abstract Interface Specifications for the A-7E Device Interface Module by Parker, Heninger, and Parnas

    Interface: The interface between two programs consists of the set of assumptions that each programmer needs to make about the other program in order to demonstrate the correctness of his own program. For convenience, we use the phrase “assumptions made by program A about program B,” to mean the properties of B that must be true in order for A to work properly. These assumptions are not limited to the calling sequences and parameter formats traditionally found in interface documents; they include additional information such as the meaning and limits of information exchanged, restrictions on the order of events, and expected behavior when undesired events (ref (7)) occur. There is an analogous definition of the interface between a program and a device.

    Abstract interface: An abstract interface is an abstraction that represents more than one interface; it consists of the assumptions that are included in all of the interfaces that it represents. An abstract interface for a given type of device reveals some, but not all, properties of the actual device: it describes the common aspects of devices of that type, omitting the aspects that distinguish them from each other.

Abstract Data Type [ADT]

An “Abstract Data Type” (ADT), or “data abstraction”, hides the internal representation of the data behind a defined set of operations. The goal is to provide a standard interface to the data while keeping its representation private.

Details that are often hidden include:

  • storage format
  • internal representation
  • conversions
  • calibration factors
  • element ordering
  • element traversal

The internal implementation of an abstract data type can be replaced as long as all external guarantees hold.

An ADT that defines many operations with behavioral guarantees starts to look like an abstract interface. What distinguishes them is where the emphasis lies: an abstract interface emphasizes the operations and their guarantees, while an ADT emphasizes hiding the data format and layout.

References

  • Revisiting Information Hiding – Reflections on Classical and Nonclassical Modularity

    Data abstraction mechanisms hide the internal representation of an abstract data type, for instance, whether a complex number is stored in polar or Cartesian coordinates. Logically, abstract data types are a form of existential quantification. The internal representation of an abstract data type can be replaced with a different representation type supporting the same interface. Reynolds formalized and proved this property of abstract data types in his abstraction theorem.