Decision-making in Concept Selection

This page provides an overview of tools to help with concept selection and with decision-making in general. Refer to basic notes on decision-making for an overview of decision-making and unconscious bias.

Decision-making Techniques

In the following, a series of decision-making techniques is considered. During a design process, several of these techniques might be used. To move quickly when the stakes are low, simple decision-making techniques are appropriate. When the stakes are high, e.g. when there is a significant cost to a bad decision, more methodical approaches, perhaps several in combination, are warranted.

Basic considerations

  • Is the decision whether or not to choose a single course of action? In other words, is the decision between "yes" and "no"? If so, simple majority voting or consensus may be appropriate.

  • Is the decision to pick one of a small number (say, three or fewer) of items? If so, then simple majority voting, instant run-off voting, or multivoting can be helpful.

  • If the decision is to choose one (or a few) from a list of many options, then a method using decision matrices may be helpful. Multiple rounds of decision-making may be necessary.


1. Simple Majority Voting

  • Familiar
  • Useful for binary choices
  • May work better when there is a strong majority in favor of any one decision. When the vote is close, you may want to consider other alternatives.
  • May disenfranchise and sideline a large minority, especially if the same people are always voting in the majority. If that happens, your group should ask the hard question of whether factions have developed and what you are going to do about it.

2. Ranked Voting

See the Wikipedia page on Ranked voting system.

Tools

  • MATLAB election with an extensive array of voting methods from Mathworks file exchange
  • MATLAB rankedVote script handles simple preference-based voting
  • Excel script from Geek Speak Decoded blog

More sophisticated analyses use pair-wise comparisons at each preference level. Refer to the Schulze method.
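As a minimal illustration of the idea behind the tools above, instant run-off voting can be sketched in a few lines of Python. The function name and ballot format here are assumptions for illustration, not any of the listed tools:

```python
from collections import Counter

def instant_runoff(ballots):
    """Return the winner under instant run-off voting.

    Each ballot is a list of candidates ordered from most to least
    preferred. The candidate with the fewest first-choice votes is
    eliminated each round until some candidate holds a majority.
    """
    ballots = [list(b) for b in ballots]
    while True:
        # Count the current first choice on every non-empty ballot.
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return leader
        # Eliminate the candidate with the fewest first-choice votes.
        loser = min(tally, key=lambda c: tally[c])
        ballots = [[c for c in b if c != loser] for b in ballots]
```

Note that instant run-off can disagree with simple plurality: with ballots of 4 x (A, C, B), 3 x (B, C, A), and 2 x (C, B, A), plurality picks A, but run-off eliminates C and B wins with the transferred votes.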


3. Multivoting

Multivoting is useful when faced with many choices and no group consensus on priorities.


Abbreviated sequence from ASQ:

  1. Obtain a list of items to consider, e.g. from brainstorming or a combined internal and external search.
  2. Group items by affinity or similarity if possible. See ASQ's Affinity Diagram. Try to use voting only within groups.
  3. Distribute an even number of votes to every participant.
  4. Participants can place any number of their votes on any item in the list of choices.
  5. Use vote counts to prioritize the list, and remove low-ranked items from further consideration.

The number of votes could be

  • n = an arbitrary number chosen by the group

  • 3*M, where M is the number of items to be retained from the list (3 is arbitrary)

  • N/3, where N is the number of items on the original list
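The tallying and prioritizing in step 5 of the ASQ sequence can be sketched as follows; the function name and the (participant, item, votes) record format are hypothetical:

```python
from collections import Counter

def multivote(votes_cast, keep):
    """Tally multivoting results and return the retained items.

    votes_cast is a list of (participant, item, n_votes) tuples;
    keep is the number of top-ranked items to carry forward.
    """
    tally = Counter()
    for _participant, item, n in votes_cast:
        tally[item] += n
    # Retain the highest-scoring items; the rest are dropped
    # from further consideration.
    return [item for item, _count in tally.most_common(keep)]
```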



4. Consensus


In common usage of the term, consensus is thought of as a decision that everyone agrees with. More technically, consensus-based decision-making involves three possible decisions by each individual:

  1. Consent: agreeing to abide by the group decision, which can range from enthusiastic support to merely tolerating the decision. An individual may not prefer the decision, but can "go along" because a large majority of individuals support the decision.
  2. Standing aside: an individual holds a position of dissent, but decides to not block the group decision.
  3. Blocking: an individual stops the group from proceeding ("stands in the way") because of a principled objection that the decision is against the group's core values.

A consensus is reached when everyone in the group either consents or stands aside. A single person blocking consensus causes the group to forgo the decision.

There are a range of decision-making options:

  • Basic majority vote: winner needs only more than 50% of the vote.
  • Super-majority voting: winner needs some high percentage like 2/3 majority or 3/4 majority or higher.
  • Consensus-minus-one: one blocking person does not stop the decision.
  • Full consensus: all members either consent or stand aside.

Rigorous adherence to a full consensus process can yield strong group cohesion. It can also be a slow and painful process as the views of a small minority are heard and incorporated into the final decision.
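The range of decision rules above can be sketched as a small function. The name group_decision and the string labels for individual positions are assumptions for illustration:

```python
def group_decision(positions, rule="full-consensus"):
    """Decide whether a proposal passes under several decision rules.

    positions is a list of strings: "consent", "stand-aside", or
    "block" for the consensus rules, or "yes"/"no" for the voting
    rules.
    """
    n = len(positions)
    if rule == "majority":
        # Winner needs more than 50% of the vote.
        return positions.count("yes") * 2 > n
    if rule == "super-majority-2/3":
        # Winner needs at least a 2/3 majority.
        return positions.count("yes") * 3 >= 2 * n
    blocks = positions.count("block")
    if rule == "consensus-minus-one":
        # A single blocking person does not stop the decision.
        return blocks <= 1
    if rule == "full-consensus":
        # Everyone must consent or stand aside; one block fails.
        return blocks == 0
    raise ValueError(f"unknown rule: {rule}")
```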


5. Decision Matrix Methods

The engineering design process may involve voting or consensus-building. However, those methods are mostly suitable to decisions involving a small number of choices, and when additional performance-based criteria are not available. As a design evolves, the team assembles information about the strengths and weaknesses of design choices. This information can and should be incorporated into analytical models that aid in decision-making.

Decision matrix methods are really about building a model of how we think each design choice will perform. The models are constructed as simple scoring tables that use different scales and methods of combining scores on those scales. The goal is to create a figure of merit for ranking choices.

The decision models are arbitrary in the sense that the figures of merit do not conform to physical laws. Compared to the analytical tools used in engineering science, the models are quasi-quantitative. Furthermore, the models encode our preferences and biases. Indeed, the purpose of the models is to develop an aggregate metric from a set of smaller choices about preference and importance.

Because the scoring is subjective, it is possible to game the scoring scheme so that a preferred design comes out with the best score. Design teams should be mindful of their biases. If a preferred concept does not have the highest score, it is important to dig deeper into the analysis and examine assumptions, not simply adjust the scoring so that the preferred design "wins".

Suppose that we have metrics for the expected performance (or realization of a design objective) for each design choice. Furthermore, suppose that we have a method for assigning an importance to each metric. Ulrich and Eppinger describe a sequence of six steps for applying concept selection methods:

  1. Prepare a selection matrix: organize the design concepts or options along one dimension and the selection criteria (metrics) along the other dimension
  2. Rate each concept according to the selection criteria (or metric)
  3. Rank the concepts by a weighted score (weights indicate importance)
  4. Eliminate, combine, or improve concepts
  5. Select one or more concepts
  6. Reflect on the results.

Steps 2 through 5 may be iterated as concepts are refined, either through modification, combination, or the gaining of additional or improved performance data.

As the design evolves, the team may use a sequence of methods to initially reduce and then refine the choices. In other words, the six-step process outlined above can be applied through the following stages, which involve increasing levels of detail and effort.

Mattson and Sorensen describe three types of selection matrices:

  • Preliminary screening
  • Scoring or ranking methods
  • Controlled convergence

It is also possible that the team may iterate the first two levels of decision-making (screening and scoring) as the design process reveals more information. The last step, controlled convergence, may involve combining as well as eliminating design choices. Ideally, by the end of controlled convergence the team has sufficiently high confidence in their design concept that they can proceed to subsystem design.

Preliminary Screening

Early in the conceptual design process, the team is likely to be confronted with a large number of choices and limited detailed information about each choice. The goal of screening is to reduce a large number of choices to a smaller set by eliminating choices that are unlikely to enable the design to meet client or market requirements. Another potential goal is to combine features from competing concepts to create a new concept that is likely to be more successful.

Screening Matrix

Refer to the Screening Matrix description in the second half of the textbook by Mattson and Sorensen. The screening matrix is constructed by listing the market requirements in the first column, and allocating a column to each of the competing concepts. One of the concepts should be chosen as a reference or benchmark. If the team is designing a product that competes with an existing product in the marketplace, then the existing product is the natural benchmark.

In the column under each design concept, enter a "+", "-" or "=" to indicate whether the design under consideration is better than, worse than, or equal to the benchmark. Mattson and Sorensen recommend working horizontally through the table, i.e., scoring each concept for how well it meets a given requirement.

After the relative scores for each requirement are assigned, tally the number of "+", "-" and "=" scores for each concept. Compute a net score as the sum of the number of "+" scores minus the number of "-" scores. A low net score indicates that a design concept is not likely to be successful relative to the benchmark.

The success of the screening matrix method depends on the quality of the market requirements, the experience of the design team in predicting performance, and the choice of benchmarks.
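The net-score tally described above can be sketched in a few lines; the function name and the dictionary layout mapping concepts to their marks are assumptions for illustration:

```python
def screening_net_scores(matrix):
    """Compute net screening scores from '+', '-', '=' marks.

    matrix maps each concept name to a list of marks, one per
    market requirement, scored relative to the benchmark concept.
    """
    scores = {}
    for concept, marks in matrix.items():
        # Net score = (# better than benchmark) - (# worse than benchmark);
        # "=" marks do not contribute.
        scores[concept] = marks.count("+") - marks.count("-")
    return scores
```

A concept with a low (or strongly negative) net score is a candidate for elimination, or for combination with a stronger concept.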

Ullman's Screens

Ullman describes three types of screening that are performed in sequence.

  1. Feasibility judgement
  2. Technology readiness assessment
  3. Go/no-go screening

These screening techniques are applied to possible solutions to the design problem. The screening process uses absolute scales whereby the design options are evaluated independently. In other words, the screen is applied to each option without reference to how well other options are scoring during the screening process. Later, during final concept selection, design options are compared to each other via a designated reference design.

Preliminary screening could be applied to ideas for sub-functions of the design, e.g. the component for supplying force or torque. As the conceptual design evolves, the screening could also be applied to systems. However, in late stages of the design process, a scoring matrix method, which compares design options against each other, is more appropriate than a screening method.

All three screening methods (feasibility judgement, technology readiness assessment, and go/no-go screening) can be organized into a matrix or table.

Scoring Matrix

Refer to the Scoring Matrix section in part 2 of the textbook by Mattson and Sorensen.

After a larger number of concepts are winnowed to a smaller number, a more quantitative analysis can be performed. From here on we suppose that we have some basis for assigning a quasi-quantitative score to each of the design options. In other words, unlike the voting methods described above that rely on individual judgement or preference, we assume that we have additional information that allows us to quantitatively estimate the potential benefits of each option. The goal is to have quantifiable performance metrics that are linked (e.g. by a requirements matrix) to the client or market needs. We still need to make subjective decisions about the importance of each performance metric, but the existence of the metrics has moved us one step closer to an analytical basis for decision-making.

The scores are entered into a matrix that is superficially similar to the screening matrix. Performance requirements are listed in the first column. In a second column, weights are assigned to each of the performance requirements. The weights are numerical values between 0 and 1 that also sum to 1.0. A rational way of assigning weights is to use the importance scores for the performance metrics in the requirements matrix.

In the remaining columns, the estimated performance of each design concept is given a numerical score. The quantitative scoring may involve results of preliminary design computations or other quantitative sources of data. The scoring could also rely on the judgement of team members when quantitative data is not available. Regardless of the source and precision of the data, only a rough scoring is used. In their discussion of the scoring matrix method in the second half of the textbook, Mattson and Sorensen recommend the five-point scale shown in Table 1, where "reference" is the score of the benchmark.

Table 1: A scheme for rating designs in a scoring matrix.

  Rating   Description
  ------   -------------------------
  1        Much worse than reference
  2        Worse than reference
  3        Same as reference
  4        Better than reference
  5        Much better than reference

After scores are entered for each requirement of each concept, the weighted score for each concept is computed. Presumably the concept with the best score will have the highest chance of meeting the design requirements.
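The weighted-score computation can be sketched as follows, assuming weights that sum to 1.0 and ratings on the 1-5 scale of Table 1; the function name and data layout are hypothetical:

```python
def weighted_scores(weights, ratings):
    """Compute the weighted score for each concept in a scoring matrix.

    weights is a list of importance weights (summing to 1.0), one per
    performance requirement; ratings maps each concept name to its
    list of 1-5 ratings, in the same requirement order.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return {
        concept: sum(w * r for w, r in zip(weights, rs))
        for concept, rs in ratings.items()
    }
```

Because the benchmark rates 3 ("same as reference") on every requirement, its weighted score is always 3.0; concepts scoring above 3.0 are predicted to outperform the benchmark.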


References

  1. Christopher A. Mattson and Carl D. Sorensen, Fundamentals of Product Development, 5th ed., 2017, Brigham Young University.

  2. David G. Ullman, The Mechanical Design Process, 5th ed., 2016, McGraw-Hill, New York.

  3. Karl T. Ulrich and Steven D. Eppinger, Product Design and Development, 5th ed., 2012, McGraw-Hill, New York.

  4. George E. Dieter and Linda C. Schmidt, Engineering Design, 5th ed., 2013, McGraw-Hill, New York.


Document updated 2018-02-25.