🧪 Ultimate MSA Guide: Type 1, Type 2, Type 3, Gage R&R & Cohen’s Kappa

A Complete Technical Resource for Quality Engineers, SQA/SQE, Metrology, Automotive & Manufacturing


🔍 Introduction: Why MSA Matters

Measurement System Analysis (MSA) ensures that the data collected from a measurement system — equipment + operator + method — is accurate, repeatable, and reproducible.

A weak measurement system causes:

  • ❌ Wrong quality decisions
  • ❌ High scrap & rework
  • ❌ Failed audits (IATF 16949, VDA 6.3)
  • ❌ Customer complaints
  • ❌ Instability in SPC and process capability

MSA is required by major standards: AIAG Core Tools, IATF 16949, ISO 9001, AS9145, medical device regulations, and OEM-specific customer requirements.


🧭 What is MSA (Measurement System Analysis)?

MSA evaluates the variation introduced by the measurement system itself.
It separates measurement variation into:

  • Equipment Variation (EV)
  • Appraiser Variation (AV)
  • Part-to-Part Variation (PV)
  • Total Gage R&R (GRR)
  • Repeatability & Reproducibility
  • Bias & Linearity
  • Attribute decision consistency (OK/NOK)

1️⃣ MSA Type 1 — Bias, Linearity & Repeatability Study

(One operator, one instrument, one reference part)

This study is used to analyze instrument accuracy and short-term consistency.

🎯 Purpose

  • Verify whether a measuring device is precise and accurate.
  • Validate calibration effectiveness.
  • Detect instrument drift or instability.

🧠 What it measures

  • Bias → Difference between the average measured value and the certified reference value.
  • Linearity → How bias changes across the instrument’s range (see the regression sketch after this list).
  • Repeatability (EV) → Consistency of the same operator measuring the same part repeatedly.
  • Stability (optional) → Long-term instrument performance.
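
The linearity check boils down to a regression of bias against the reference value. Here is a minimal sketch with hypothetical master values and averaged readings; a slope near zero means the bias does not drift across the range. It uses Python’s `statistics.linear_regression` (available from 3.10).

```python
import statistics

# Linearity sketch (hypothetical data): average bias at several certified
# masters spanning the instrument's range, then regress bias on reference.
refs  = [2.0, 4.0, 6.0, 8.0, 10.0]            # certified master values
means = [2.003, 4.004, 6.006, 8.009, 10.011]  # average of repeated readings
bias  = [m - r for m, r in zip(means, refs)]

fit = statistics.linear_regression(refs, bias)
# A slope near zero indicates stable bias across the measuring range.
print(f"slope = {fit.slope:+.5f} per unit, intercept = {fit.intercept:+.5f}")
```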

🛠️ How it’s performed

  1. One operator measures a certified reference (master) part.
  2. Repeated measurements are taken (10 at minimum; 25–50 is common practice).
  3. Bias, EV, and linearity are evaluated in Minitab, Q-DAS, or other software (a minimal calculation sketch follows).
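
The same numbers can be reproduced outside dedicated software. A minimal Python sketch with hypothetical readings, assuming the common Minitab-style defaults for Cg/Cgk (20% of tolerance over 6 gage standard deviations); actual acceptance limits are customer-specific:

```python
import statistics

# Minimal Type 1 study sketch (hypothetical readings on one master part).
# Assumed conventions: Cg = 0.2*T / (6*s), Cgk = (0.1*T - |bias|) / (3*s).
reference = 10.000   # certified value of the master part
tolerance = 0.200    # total tolerance T of the characteristic

readings = [10.002, 9.998, 10.001, 10.003, 9.999,
            10.000, 10.002, 9.997, 10.001, 10.000]

mean = statistics.mean(readings)
s = statistics.stdev(readings)     # repeatability (EV) estimate
bias = mean - reference

cg = (0.2 * tolerance) / (6 * s)               # gage capability
cgk = (0.1 * tolerance - abs(bias)) / (3 * s)  # capability including bias

print(f"bias = {bias:+.4f}   EV (s) = {s:.4f}")
print(f"Cg = {cg:.2f}   Cgk = {cgk:.2f}   (>= 1.33 is a typical requirement)")
```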

🟢 When to use Type 1

  • After calibration
  • During incoming inspection of a new instrument
  • For capability checks of measuring devices

2️⃣ MSA Type 2 — Gage R&R for Variable Data

(2–3 operators · 10 parts · 2–3 repetitions)

This is the classic, most widely used MSA study.

🎯 Purpose

Validate the entire measurement system (equipment + operators).

🔎 What it analyzes

  • Repeatability (EV — Equipment Variation)
  • Reproducibility (AV — Appraiser Variation)
  • Total Gage R&R
  • Number of Distinct Categories (ndc)
  • Interaction between operator and part

🛠️ Study design

  • 10 parts covering process variation
  • 2–3 operators (3 is typical)
  • 2–3 repetitions per part

🧮 Acceptance criteria (AIAG-MSA 4th Edition)

  • GRR < 10% → Acceptable
  • 10–30% → Conditionally acceptable (depending on the importance of the characteristic, gage cost, and cost of rework)
  • > 30% → Unacceptable; the measurement system must be improved

📊 Outputs typically include

  • ANOVA table
  • Xbar-R charts
  • Operator × Part interaction plots
  • ndc ≥ 5 recommended (see the calculation sketch below)
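
The headline metrics follow directly from the ANOVA variance components. A minimal sketch, assuming the components have already been estimated in Minitab, Q-DAS, or a statistics package (the numbers here are hypothetical):

```python
import math

# Turn ANOVA variance components into the AIAG study metrics.
var_repeatability = 0.0004    # EV^2 (equipment)
var_reproducibility = 0.0002  # AV^2 (appraisers + interaction)
var_part = 0.0150             # PV^2 (part-to-part)

var_grr = var_repeatability + var_reproducibility
var_total = var_grr + var_part

pct_grr = 100 * math.sqrt(var_grr / var_total)  # %GRR vs total variation
ndc = int(1.41 * math.sqrt(var_part / var_grr)) # number of distinct categories

print(f"%GRR = {pct_grr:.1f}%  (<10% acceptable, 10-30% conditional, >30% reject)")
print(f"ndc  = {ndc}  (>= 5 recommended)")
```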

3️⃣ MSA Type 3 — Attribute Data Study (Discrete Decisions)

(OK/NOK · Good/Bad · Pass/Fail · Conforming/Nonconforming)

Used when the results are not numerical measurements but discrete decisions.

🎯 Purpose

Evaluate the consistency and correctness of operator judgment.

🔍 What it assesses

  1. Accuracy vs Reference (%)
  2. Repeatability — does the same operator reach the same conclusion?
  3. Reproducibility — do operators agree with each other?
  4. Correctness vs golden reference samples
  5. Agreement beyond chance → Cohen’s Kappa

🧰 Common tools

  • Attribute Agreement Analysis (AAA)
  • Cohen’s Kappa
  • Fleiss’ Kappa (for >2 operators)
  • Confusion Matrix
  • % Overall Agreement

This study is essential for visual inspection, surface defects, cosmetic quality, assembly OK/NOK decisions, and operator-dependent evaluations.
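
As a minimal illustration of repeatability and accuracy vs reference, the sketch below scores two hypothetical operators judging the same six parts in two trials:

```python
# Minimal attribute agreement sketch (hypothetical OK/NOK decisions).
reference = ["OK", "NOK", "OK", "OK", "NOK", "OK"]  # golden verdicts
trials = {
    "operator_A": [["OK", "NOK", "OK", "OK", "NOK", "OK"],
                   ["OK", "NOK", "OK", "NOK", "NOK", "OK"]],
    "operator_B": [["OK", "NOK", "OK", "OK", "OK", "OK"],
                   ["OK", "NOK", "OK", "OK", "OK", "OK"]],
}

for op, (t1, t2) in trials.items():
    # Repeatability: same conclusion in both trials.
    repeat = sum(a == b for a, b in zip(t1, t2)) / len(t1)
    # Accuracy: agrees with the reference in both trials.
    accuracy = sum(a == r and b == r for a, b, r in zip(t1, t2, reference)) / len(t1)
    print(f"{op}: within-operator {repeat:.0%}, vs reference {accuracy:.0%}")
```

Note that operator_B is perfectly repeatable yet consistently wrong on one part, which is exactly why repeatability and correctness vs reference must be assessed separately.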


4️⃣ Cohen’s Kappa — Statistical Agreement Level for Attribute MSA

Cohen’s Kappa measures agreement between operator decisions while removing random chance.

🧮 Formula

$ \kappa = \frac{P_o - P_e}{1 - P_e} $

Where:

  • Pₒ = observed (actual) agreement
  • Pₑ = agreement expected by chance
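
A minimal computation of κ for two appraisers (hypothetical OK/NOK decisions; scikit-learn’s `cohen_kappa_score` gives the same result):

```python
from collections import Counter

rater1 = ["OK", "OK", "NOK", "OK", "NOK", "OK", "OK", "NOK", "OK", "OK"]
rater2 = ["OK", "NOK", "NOK", "OK", "NOK", "OK", "OK", "OK", "OK", "OK"]

n = len(rater1)
p_o = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement

# Expected chance agreement from each rater's marginal proportions.
c1, c2 = Counter(rater1), Counter(rater2)
p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(rater1) | set(rater2))

kappa = (p_o - p_e) / (1 - p_e)
print(f"P_o = {p_o:.2f}, P_e = {p_e:.2f}, kappa = {kappa:.2f}")  # ~0.47: weak
```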

📘 Interpretation of Kappa

| Kappa Value | Interpretation |
|---|---|
| > 0.90 | Excellent agreement (very reliable system) |
| 0.80–0.90 | Very good |
| 0.60–0.80 | Good / acceptable |
| 0.40–0.60 | Weak |
| < 0.40 | Unacceptable |

A high Kappa is critical in industries where human judgment influences quality decisions.


🧩 Choosing the Correct MSA Type (Quick Guide)

🔵 Use Type 1 or Type 2 if you have:

📊 Variable data (numerical measurements)
Examples:

  • Thickness
  • Length
  • Torque
  • Resistance
  • Voltage

🔴 Use Type 3 if you have:

🟥 Attribute data (OK/NOK decisions)
Examples:

  • Scratch present? (Yes/No)
  • Solder defect? (OK/NOK)
  • Cosmetic defect? (Good/Bad)


📊 Final Comparative Table — Fast Understanding

| Study Type | Data Type | What It Evaluates | Best Use Case | Acceptance Criteria | Operators Required |
|---|---|---|---|---|---|
| MSA Type 1 | Variable | Bias, Linearity, Repeatability | Instrument validation | Bias < 10% of tolerance | 1 |
| MSA Type 2 (Gage R&R) | Variable | Repeatability, Reproducibility, Total GRR | Full measurement system validation | GRR ≤ 10% ideal | 2–3 |
| MSA Type 3 | Attribute | Accuracy, Repeatability, Reproducibility | Visual / OK-NOK decisions | % Agreement, Kappa | 2–3 |
| Cohen’s Kappa | Attribute | Agreement beyond chance | Visual inspection reliability | > 0.80 recommended | 2 (min.) |
| AAA (Attribute Agreement Analysis) | Attribute | % Overall Agreement | OK/NOK classification | Industry-dependent | 2–3 |

Alin Nedelcu