Types of Validity

Validity is arguably the most important criterion of a test's quality. The term refers to the degree to which a test measures what it purports to measure. The items on a highly valid test are closely related to the exam's intended focus; this is why many certification and licensure examinations are tightly linked to a particular job or occupation. A test that lacks validity fails to assess the job-related knowledge and skills it should, and in that case its results cannot be used for their intended purpose.

There are four types of validity −

  • Content Validity

  • Criterion-Related Validity (further divided as)

    • Concurrent Validity

    • Predictive Validity

  • Construct Validity (further divided as)

    • Convergent Validity

    • Discriminant Validity

  • Face Validity

Each of these is discussed below −

Content Validity

McBurney and White (2007) define content validity as the idea that a test should sample the range of behavior represented by the theoretical concept being tested. It is a non-statistical type of validity that involves evaluating the test's content to see whether it constitutes a representative sample of the behavior being assessed. The items on a test with content validity represent the whole range of conceivable items the exam should cover. For example, a researcher creating a spelling achievement test for third graders might begin by listing virtually all of the words that third graders should know.

Individual test items are then selected from this larger pool, so content validity is built into the test from the start: after a thorough review of the subject matter, items are chosen for their fit with the test's specifications. When a test assesses a difficult-to-define attribute, experts can rate the relevance of candidate items. Because each judge brings their own perspective, the items are typically rated independently by two impartial judges, and only items deemed highly relevant by both judges are included in the final examination.
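The two-judge selection procedure described above can be sketched in a few lines. This is a minimal illustration, not an established scoring protocol; the item names, the 1–5 rating scale, and the cutoff of 4 are all invented assumptions.

```python
# Hypothetical sketch of content-validity item selection: two independent
# judges rate each candidate item's relevance (1-5), and only items rated
# highly by BOTH judges survive into the final test.
# Item names, ratings, and the threshold are invented for illustration.
judge_a = {"item1": 5, "item2": 2, "item3": 4, "item4": 5}
judge_b = {"item1": 4, "item2": 3, "item3": 5, "item4": 2}

THRESHOLD = 4  # an item must score >= 4 from both judges

selected = [item for item in judge_a
            if judge_a[item] >= THRESHOLD and judge_b[item] >= THRESHOLD]
print(selected)  # → ['item1', 'item3']
```

Requiring agreement from both judges, rather than a high average, keeps any single judge's idiosyncratic view from admitting an item on its own.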

Criterion-related Validity

The concept of criterion-related validity states that a valid test should be closely connected to other measures of the same theoretical concept. A good intelligence test, for example, should correlate strongly with other intelligence tests. A test is said to have criterion-related validity when it successfully predicts criterion measures or construct indicators. There are two kinds of criterion validity.

  • Concurrent Validity − This occurs when criterion measures and test scores are obtained at the same time. It represents the degree to which the test results estimate the individual's current status on the criterion. For example, a test of anxiety has concurrent validity if it accurately reflects an individual's current degree of anxiety. Concurrent evidence of test validity is typically desirable for achievement tests and clinical diagnostic tests.

  • Predictive Validity − This arises when criterion measures are collected after the test. Aptitude exams, for example, can help determine who is more likely to succeed or fail in a given subject. Predictive validity is an important aspect of entrance exams and vocational tests.

Construct Validity

The construct validity approach is more complex than the other types of validity. McBurney and White (2007) defined construct validity as the property of a test whereby the measurement actually assesses the constructs it is intended to measure. There are several ways to determine whether a test produces construct-valid data, but the core requirement is that the test measure the theoretical construct being tested and nothing else. A test of leadership aptitude, for example, should not actually be measuring extraversion.

There are two types of Construct validity −

  • Convergent Validity − This is the extent to which a measure correlates with another measure with which it is theoretically predicted to correlate.

  • Discriminant Validity − This is the extent to which a measure is not correlated with other measures with which it theoretically should not be correlated.

Face Validity

Face validity refers to what a test appears, on the surface, to measure. It rests on the researcher's discretion: each question is examined and adjusted until the researcher is satisfied that it measures the intended construct. Face validity is therefore determined by the researcher's subjective judgment.

Aspects of Validity

There are two aspects of validity: internal and external.

Internal Validity

Internal validity is the most basic sort of validity, since it concerns the logic of the relationship between the independent and dependent variables. Based on the measurements and the study design, it estimates the degree to which inferences about causal relationships can be drawn. Properly designed experiments that examine the influence of an independent variable on a dependent variable under well-controlled conditions provide a higher degree of internal validity.

Threats to Internal Validity − There are several threats to internal validity. Some of them are −

  • Confounding − A confounding error arises when the effects of two variables in an experiment cannot be separated, resulting in a muddled interpretation of the results. Confounding is one of the most serious threats to an experiment's validity, and it is especially problematic when the experimenter has no control over the independent variable. When participants are chosen based on the presence or absence of a condition, that subject variable can affect the results. Where such a misleading association cannot be ruled out, a rival hypothesis to the original causal inference may be formed.

  • Selection Bias − Any bias in group selection can jeopardize internal validity. Selection bias denotes pre-test differences between groups that may interact with the independent variable and thus influence the observed outcome; examples include gender, personality, mental and physical abilities, motivation level, and willingness to participate.
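Random assignment is the standard safeguard against the selection bias just described, because it spreads pre-existing differences evenly across conditions on average. The sketch below is only an illustration of the mechanism; the participant IDs, group sizes, and seed are invented.

```python
# Hedged sketch: randomly assigning participants to treatment and
# control groups, so pre-test differences (motivation, ability, etc.)
# are not systematically concentrated in one group.
# Participant IDs are invented for illustration.
import random

participants = [f"p{i}" for i in range(1, 21)]  # 20 hypothetical volunteers
rng = random.Random(42)  # fixed seed so the split is reproducible
rng.shuffle(participants)

treatment = participants[:10]
control   = participants[10:]
print(len(treatment), len(control))  # → 10 10
```

Note that randomization protects internal validity (comparable groups) but not external validity: if all twenty participants are volunteers from one locale, generalization remains limited.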

  • History − Events outside the experiment or between repeated assessments of dependent variables, such as natural catastrophes or political changes, may impact participants' responses, attitudes, and behavior during the experimental procedure. In this case, it is hard to tell whether the change in the dependent variable is due to the independent variable or a historical event.

  • Maturation − It is common for participants to alter throughout an experiment or between measurements. For example, in longitudinal studies, young children may mature due to their measurable experience, talents, or attitudes. Permanent changes (such as physical development) and transient changes (such as exhaustion and sickness) can influence how a person reacts to the independent variable. As a result, researchers may have difficulty determining whether the change is due to time or other variables.

  • Frequent Testing − Repeated testing may bias participants: they may recall the correct answers or become conditioned by the test's repeated administration, which poses a further threat to internal validity.

  • Instrumentation − If any measuring instrument is replaced or changed during the experiment, internal validity may suffer, since the change itself offers a ready alternative explanation for the results.

External Validity

According to McBurney and White (2007), external validity concerns whether the study results can be generalized to another context: new participants, places, timeframes, and so on. Experiments with human participants frequently use small samples drawn from a specific geographic region or with distinctive traits (e.g., volunteers), which reduces external validity. As a result, it cannot be guaranteed that findings about cause-effect relationships apply to people in other geographic regions or to people who lack those traits.

Threats to External Validity − One of the primary concerns with external validity is that generalizations may be mistaken. Generalizations are constrained when the cause (i.e., the independent variable) depends on other factors; consequently, all threats to external validity interact with the independent variable.

  • Aptitude-Treatment Interaction − The sample may contain characteristics that interact with the independent variable, limiting generalizability; for example, conclusions drawn from comparative psychotherapy studies typically use specific samples (for example, volunteers, highly depressed, hardcore criminals).

  • Situations − All situational characteristics, such as treatment conditions, light, noise, location, experimenter, timing, scope, and degree of measurement, among others, may restrict generalizations.

  • Pre-Test Effects − When the cause-effect correlations can only be discovered after the pre-tests, the generality of the findings is likewise limited.

  • Post-Test Effects − When cause-effect correlations can only be studied after the post-tests are completed, this might further restrict the generalizability of the findings.

  • Rosenthal Effects − These occur when inferences about cause-effect relationships do not generalize to other investigators or researchers, as when the experimenter's expectations influence the outcome.


Content validity is the degree to which a question, task, or item on a test represents the universe of behavior the test was designed to sample. A test has face validity if it seems valid to test users, examiners, and, most importantly, examinees. A test demonstrates criterion-related validity when it predicts performance on an appropriate outcome measure. Internal validity concerns whether a cause-effect link exists between the independent and dependent variables; confounding arises when the effects of two variables in an experiment cannot be analyzed independently. External validity is concerned with whether the study findings can be applied to a new situation: different participants, places, times, and so on.

Updated on: 09-Feb-2023
