Quality Control

Quality Control (QC) in the Hematology laboratory is a subset of the broader Quality Assessment (QA) program. While QA covers the total testing process, QC focuses specifically on the analytical phase. Its primary purpose is to monitor the precision and accuracy of the analytical method, ensuring that the hematology analyzer is functioning correctly before patient results are released. In Hematology, QC is unique because the “analytes” are physical particles (cells) that must be preserved, making the stability of control materials a critical factor

Control Materials

To ensure the instrument measures various pathological states accurately, laboratories use commercial control materials that mimic patient blood. These are typically stabilized suspensions of RBCs, WBCs (often fixed avian or mammalian cells to simulate different leukocyte types), and platelet analogues

  • Levels of Control: Laboratories must run at least two, but typically three, levels of control materials to verify the analyzer’s performance across the clinical reporting range (an illustrative representation of the three levels is sketched after this list)
    • Low (Abnormal Low): Checks the analyzer’s ability to detect cytopenias (e.g., anemia, thrombocytopenia, leukopenia)
    • Normal: Verifies the instrument performs correctly within the reference interval
    • High (Abnormal High): Checks the linearity and accuracy at elevated levels (e.g., leukocytosis, erythrocytosis)
  • Storage and Handling: Because hematology controls contain cellular components, they are highly sensitive to temperature and handling
    • Controls must be stored at refrigerator temperatures (2–8°C)
    • They must be allowed to equilibrate to room temperature before analysis to prevent temperature-dependent agglutination or viscosity changes
    • Proper mixing (inversion or mechanical rocking) is vital; inadequate mixing leads to settling of RBCs, causing false low counts
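
As a rough illustration of how the three control levels and their assay-sheet ranges might be represented and checked before patient testing, consider the sketch below. The analyte, target values, and acceptable ranges are hypothetical examples, not values from any manufacturer’s assay sheet.

```python
# Illustrative assay-sheet ranges for a WBC count (x10^9/L); all values are hypothetical
control_levels = {
    "low":    {"target": 2.5,  "range": (2.1, 2.9)},    # checks detection of cytopenias
    "normal": {"target": 7.0,  "range": (6.3, 7.7)},    # within the reference interval
    "high":   {"target": 18.0, "range": (16.4, 19.6)},  # checks elevated counts / linearity
}

def control_in_range(level, result):
    """Accept the control only if the result falls within the assay-sheet range for that level."""
    low, high = control_levels[level]["range"]
    return low <= result <= high

print(control_in_range("normal", 7.2))   # True
print(control_in_range("low", 3.4))      # False -- investigate before releasing patient results
```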

Statistical Parameters of Quality Control

To interpret QC data, the laboratory scientist utilizes statistical concepts to define “acceptable” limits. The goal is to distinguish between normal random variation and significant error

  • Mean (\(\bar{x}\)): The average of the data points. It serves as the target value on the control chart; comparing the observed mean to the assigned value assesses Accuracy (how close results are to the true value)
  • Standard Deviation (SD or s): A measurement of the dispersion of data points around the mean. This is the measure of Precision (reproducibility). A small SD indicates high precision
  • Coefficient of Variation (CV%): Calculated as \(CV\% = (SD / \bar{x}) \times 100\). This standardizes the variation, allowing the laboratory to compare precision between different methods or analytes with different units. In Hematology, RBCs and Hgb typically have very low CVs (<1–2%), while Platelets may have higher CVs due to the difficulty in counting smaller, more variable particles (a worked calculation follows this list)
  • Levey-Jennings (L-J) Charts: The primary visual tool for monitoring QC. The mean is plotted at the center, with lines representing ±1SD, ±2SD, and ±3SD. Daily control values are plotted to identify visual patterns of error
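
As a worked example of these calculations, the sketch below derives the mean, SD, CV%, and Levey-Jennings limits from a set of hypothetical daily control results; the analyte and values are illustrative only.

```python
import statistics

# Hypothetical daily WBC results (x10^9/L) for a normal-level control
control_results = [7.1, 7.3, 6.9, 7.2, 7.0, 7.4, 7.1, 6.8, 7.2, 7.3,
                   7.0, 7.1, 7.2, 6.9, 7.3, 7.1, 7.0, 7.2, 7.4, 7.1]

mean = statistics.mean(control_results)   # target value (accuracy)
sd = statistics.stdev(control_results)    # dispersion around the mean (precision)
cv_percent = (sd / mean) * 100            # CV% = (SD / mean) x 100

print(f"Mean: {mean:.2f}")
print(f"SD:   {sd:.2f}")
print(f"CV%:  {cv_percent:.2f}")

# Levey-Jennings limits plotted around the mean
for k in (1, 2, 3):
    print(f"±{k}SD limits: {mean - k*sd:.2f} to {mean + k*sd:.2f}")
```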

Types of Error & Patterns

When reviewing Levey-Jennings charts, the laboratory scientist looks for deviations from the mean. These deviations generally fall into two categories: Random Error and Systematic Error

Random Error

Random error affects precision and is unpredictable. It creates scatter on the L-J chart

  • Causes: Air bubbles in the reagent lines, electrical voltage fluctuations, incomplete mixing of the specific control vial, or a momentary clog in the aperture
  • Statistical Representation: A single result falling outside ±2SD or ±3SD without a pattern

Systematic Error

Systematic error affects accuracy and causes results to consistently deviate from the mean in one direction. This requires immediate intervention (a simple check for each pattern is sketched after the list below)

  • Trend: A gradual loss of reliability where control values slowly move away from the mean over several days
    • Hematology Causes: Aging reagents, slow protein buildup on the aperture (affecting impedance), deterioration of the light source (affecting hemoglobin spectrophotometry), or gradual degradation of the control material itself
  • Shift: An abrupt change where control values jump to a new level and plateau
    • Hematology Causes: Changing to a new lot number of reagent without calibrating, recent major maintenance (e.g., replacing a laser), or failure of an internal component
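
As a simple illustration of how a trend might be distinguished from a shift, the heuristic below inspects control values expressed in SD units from the mean. The function names, window length, and data are assumptions made for the example, not standard QC rules.

```python
def detect_shift(z_scores, n=6):
    """Flag a possible shift: the last n points all fall on the same side of the mean."""
    recent = z_scores[-n:]
    if len(recent) < n:
        return False
    return all(z > 0 for z in recent) or all(z < 0 for z in recent)

def detect_trend(z_scores, n=6):
    """Flag a possible trend: the last n points move steadily in one direction."""
    recent = z_scores[-n:]
    if len(recent) < n:
        return False
    diffs = [b - a for a, b in zip(recent, recent[1:])]
    return all(d > 0 for d in diffs) or all(d < 0 for d in diffs)

# Example: control values in SD units from the mean
trending = [0.1, 0.4, 0.7, 1.0, 1.3, 1.7]   # gradual drift away from the mean
shifted  = [1.2, 1.1, 1.3, 1.2, 1.4, 1.1]   # abrupt jump to a new plateau

print(detect_trend(trending), detect_shift(trending))  # True, True (a steady one-sided drift satisfies both)
print(detect_trend(shifted), detect_shift(shifted))    # False, True
```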

Westgard Rules

Westgard rules are a set of multi-rule criteria used to determine if a QC run should be accepted or rejected. While there are many rules, the following are most common in Hematology (a rule-checking sketch follows the list):

  • 1-2s (Warning): One point falls outside ±2SD
    • Action: Do not reject. This is a warning. Check previous points. If no other rules are broken, release results
  • 1-3s (Rejection): One point falls outside ±3SD
    • Type: Random Error
    • Action: Reject the run. Approximately 99.7% of valid results fall within ±3SD, so a point outside that limit is unlikely to represent acceptable random variation
  • 2-2s (Rejection): Two consecutive points fall outside ±2SD on the same side of the mean
    • Type: Systematic Error
    • Action: Reject the run
  • R-4s (Rejection): The range between two points in the same run exceeds 4SD (e.g., Level 1 is +2.1SD and Level 3 is -2.5SD)
    • Type: Random Error
    • Action: Reject the run
  • 4-1s (Rejection): Four consecutive points fall outside ±1SD on the same side of the mean
    • Type: Systematic Error
    • Action: Reject the run. Indicates a shift or trend needs addressing
  • 10x (Rejection): Ten consecutive points fall on the same side of the mean
    • Type: Systematic Error (Bias)
    • Action: Reject. The instrument requires calibration or troubleshooting
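
The sketch below shows one way these rule checks might be automated for control points expressed in SD units from the mean. The function name, data layout, and the decision to pool all control levels chronologically are assumptions made for illustration, not a standard implementation.

```python
def westgard_evaluate(history, current_run):
    """
    Evaluate a QC run against the Westgard rules listed above.

    history:     earlier control points in SD units from the mean, most recent last
    current_run: the current run's control points in SD units, one per level
    """
    points = history + current_run
    violations = []

    # 1-3s: any current point beyond +/-3SD (random error, reject)
    if any(abs(z) > 3 for z in current_run):
        violations.append("1-3s")

    # 2-2s: two consecutive points beyond +/-2SD on the same side (systematic error)
    for a, b in zip(points, points[1:]):
        if (a > 2 and b > 2) or (a < -2 and b < -2):
            violations.append("2-2s")
            break

    # R-4s: range within the current run exceeds 4SD (random error)
    if len(current_run) >= 2 and max(current_run) - min(current_run) > 4:
        violations.append("R-4s")

    # 4-1s: four consecutive points beyond +/-1SD on the same side (systematic error)
    for i in range(len(points) - 3):
        window = points[i:i + 4]
        if all(z > 1 for z in window) or all(z < -1 for z in window):
            violations.append("4-1s")
            break

    # 10x: ten consecutive points on the same side of the mean (bias)
    for i in range(len(points) - 9):
        window = points[i:i + 10]
        if all(z > 0 for z in window) or all(z < 0 for z in window):
            violations.append("10x")
            break

    # 1-2s: warning only -- flag for review, but do not reject on its own
    warning = any(abs(z) > 2 for z in current_run)

    return {"reject": bool(violations), "violations": violations, "warning": warning}


# Example: previous points were unremarkable; today Level 1 is +2.1SD and Level 3 is -2.5SD
print(westgard_evaluate(history=[0.4, -0.8, 1.1, -0.2], current_run=[2.1, 0.3, -2.5]))
# -> R-4s violation (range 4.6SD), run rejected
```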

Moving Averages (Bull’s Algorithm / X-B Analysis)

Unlike Clinical Chemistry, Hematology utilizes a unique continuous QC method based on patient data known as “Bull’s Algorithm” or “X-B” (X-Bar) analysis

  • Principle: This algorithm assumes that in a general hospital population, the average RBC indices (MCV, MCH, MCHC) remain stable over time. The analyzer calculates a smoothed (weighted) moving average of the indices from consecutive batches of 20 patient samples (a sketch of the smoothing step appears after this list)
  • Utility: It serves as a check on instrument performance between the scheduled commercial QC runs. It is particularly sensitive to systematic errors
  • Interpretation
    • If the commercial QC is good but the X-B is drifting, the issue may be the instrument
    • Example: A drift in the MCHC moving average often indicates instrument problems (e.g., dirty apertures or issues with the hemoglobinometer/lyse) rather than a sudden shift in the patient population’s health
  • Limitations: This method is invalid in populations with specific pathologies (e.g., a batch of samples from a Sickle Cell clinic or Oncology center), which would skew the indices
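
A minimal sketch of one batch update is shown below, using the signed square-root smoothing commonly cited for Bull’s algorithm and a ±3% action limit on the batch mean. The exact smoothing formula, the limit, and the MCV values are assumptions that should be verified against the analyzer’s documentation.

```python
import math

def bulls_update(prev_xb, batch):
    """
    One Bull's (X-B) update for a batch of patient results (typically N = 20).

    Uses the signed square-root smoothing commonly cited for Bull's algorithm;
    confirm the exact formula against the analyzer's documentation.
    """
    diffs = [x - prev_xb for x in batch]
    s = sum(math.copysign(math.sqrt(abs(d)), d) for d in diffs)
    return prev_xb + math.copysign((s / len(batch)) ** 2, s)

def within_action_limit(xb, target, limit_pct=3.0):
    """Common practice flags an X-B batch mean deviating more than ~3% from target."""
    return abs(xb - target) / target * 100 <= limit_pct

# Example: MCV (fL) target of 89.5 with a hypothetical batch of 20 patient MCVs
target_mcv = 89.5
batch_mcv = [88.1, 90.2, 92.4, 87.6, 91.0, 89.9, 86.5, 93.2, 90.7, 88.8,
             85.9, 91.8, 89.2, 94.0, 87.3, 90.5, 88.4, 92.1, 89.7, 86.9]

new_xb = bulls_update(prev_xb=target_mcv, batch=batch_mcv)
print(f"New X-B for MCV: {new_xb:.2f} fL, within limits: {within_action_limit(new_xb, target_mcv)}")
```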

Calibration vs. Calibration Verification

It is a common error for students to confuse QC with Calibration. They are distinct processes

  • Calibration: The process of testing a “Calibrator” (a material with a known, assigned value, often traceable to a reference method) and adjusting the instrument to match that value
    • When to perform: Upon installation, after major maintenance (e.g., replacing an aperture or laser), or when QC trends/shifts cannot be resolved by troubleshooting
  • Calibration Verification: The process of testing materials of known concentration to ensure the instrument is accurate across the Reportable Range
    • Requirement: CLIA mandates this at least every 6 months. It involves running linearity kits (very low to very high values) to verify the Analytical Measurement Range (AMR); a simple review of linearity data is sketched below
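
A minimal sketch of how linearity data might be reviewed during calibration verification is shown below. The assigned values, measured results, and acceptance approach (slope/intercept plus percent recovery) are illustrative assumptions; actual criteria come from the laboratory and the manufacturer.

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a measured-vs-assigned comparison."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical hemoglobin linearity kit (g/dL): assigned vs measured values
assigned = [2.0, 6.0, 10.0, 14.0, 18.0, 22.0]
measured = [2.1, 6.1, 9.9, 13.8, 18.2, 22.3]

slope, intercept = linear_fit(assigned, measured)
print(f"Slope: {slope:.3f}, Intercept: {intercept:.3f}")  # ideally near 1.0 and 0.0

# Percent recovery at each level; acceptance limits are defined by the laboratory/manufacturer
for a, m in zip(assigned, measured):
    print(f"Assigned {a:5.1f}  Measured {m:5.1f}  Recovery {m / a * 100:6.1f}%")
```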

External Quality Assessment (Proficiency Testing)

External QC ensures the laboratory performs comparably to peer laboratories using the same instrumentation/methods

  • Process: An external agency (e.g., CAP, API) sends “blind” samples to the lab
  • Handling: These samples must be integrated into the workflow and tested exactly like patient samples. They must not be given special treatment (e.g., do not run in duplicate if patients are run once)
  • Prohibition: Inter-laboratory communication regarding proficiency testing samples prior to result submission is strictly prohibited and constitutes a serious regulatory violation (risk of losing licensure)
  • Review: If the lab’s results deviate significantly from the peer group mean (e.g., Standard Deviation Index, SDI, greater than 2.0), an internal investigation and root cause analysis are required (the SDI calculation is sketched below)
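
The Standard Deviation Index (SDI) behind that review criterion is simply the lab’s result expressed in peer-group SD units, as sketched below with hypothetical survey values.

```python
def sdi(lab_result, peer_mean, peer_sd):
    """Standard Deviation Index: how many peer-group SDs the lab result lies from the peer mean."""
    return (lab_result - peer_mean) / peer_sd

# Hypothetical proficiency testing result for WBC (x10^9/L)
value = sdi(lab_result=8.9, peer_mean=8.2, peer_sd=0.3)
print(f"SDI = {value:.2f}")   # ~2.33
if abs(value) > 2.0:
    print("Investigate: result deviates more than 2 SDI from the peer group mean")
```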