Calling All Skeptics: A Look into DiSC® Research


Now that you’ve got the basics and theory of DiSC® down pat (see this post as a refresher!), we’re going to dig into something crucial to Everything DiSC®: our research. We’re often asked how accurate and legitimate the Everything DiSC assessment is, and those are great questions. In short, our answer is this: we’ve built the Everything DiSC Application Suite on a foundation of research and rigor to ensure a high-quality, transformational learning experience—every time.

DiSC Research

Psychological instruments are used to measure abstract qualities that we can’t touch or see. These are characteristics like intelligence, extroversion, or honesty. So how do researchers evaluate these instruments? How do we know whether such tools are actually providing accurate information about these characteristics or just generating haphazard feedback that sounds believable?

Simply put, if an instrument is indeed useful and accurate, it should meet a variety of different standards that have been established by the scientific community. Validation is the process through which researchers assess the quality of a psychological instrument by testing the tool against these different standards.
Validation asks two fundamental questions:

How reliable is the tool? That is, researchers ask if an instrument measures in a consistent and dependable way. If the results contain a lot of random variation, it is deemed less reliable.

How valid is the tool? That is, researchers ask if an instrument measures accurately. The more that a tool measures what it proposes to measure, the more valid the tool is.

Note that no psychometric tool is perfectly reliable or perfectly valid. All psychological instruments are subject to various sources of error. Reliability and validity are seen as matters of degree on continuous scales, rather than reliable/unreliable and valid/invalid on dichotomous scales. Consequently, it is more appropriate to ask, “How much evidence is there for the reliability of this tool?” than, “Is this tool reliable?”

Scales

A person’s DiSC style is measured by asking them the degree to which they agree with a series of statements about themselves. These responses are used to calculate a score for that individual on eight scales: D, Di, i, iS, S, SC, C, and CD.

Although these scales do not appear in the profile itself, they are used to determine which style the respondent receives.
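To make the scoring idea concrete, here is a minimal sketch that turns agreement ratings into scale scores by simple item averaging. The item IDs, ratings, and scale-to-item mapping are invented for illustration, and the actual Everything DiSC scoring algorithm is more sophisticated than a plain average; this only shows the general shape of the calculation.

```python
# Minimal sketch of scale scoring, assuming simple item averaging.
# Item IDs, ratings, and the scale-to-item mapping are hypothetical.

from statistics import mean

# Hypothetical responses: item id -> agreement rating
# (1 = strongly disagree ... 5 = strongly agree)
responses = {
    "item_01": 4, "item_02": 5, "item_03": 2,
    "item_04": 1, "item_05": 3, "item_06": 4,
}

# Hypothetical mapping of items to two of the eight scales
scale_items = {
    "D":  ["item_01", "item_02"],
    "Di": ["item_03", "item_06"],
}

# Score each scale as the mean of its items' responses
scale_scores = {
    scale: mean(responses[item] for item in items)
    for scale, items in scale_items.items()
}

print(scale_scores)  # e.g., {'D': 4.5, 'Di': 3.0}
```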

Reliability

To determine whether a tool is reliable, researchers look at two things: the stability of the instrument and its internal consistency. Stability is straightforward: a researcher has a group of people take the same assessment twice and correlates the results. This is called test-retest reliability. Internal consistency is a bit more subtle. The assumption is that all of the questions (or items) on a given scale measure the same trait, so all of these items should, in theory, correlate with each other. Internal consistency is summarized with a metric called Cronbach’s alpha.

Similar standards can be used to evaluate both test-retest reliability and alpha. The maximum value is 1.0, and higher values indicate higher levels of reliability. Although not set in stone, most researchers use the following guidelines to interpret values: above .9 is considered excellent, above .8 good, above .7 acceptable, and below .7 questionable. The reliability estimates for the eight DiSC scales are shown in Table 1.
As Table 1 shows, all values were well above the .70 cutoff and all but one were above .80. This suggests that the measurement of DiSC is both stable and internally consistent. For more information on the reliability of the DiSC scales, see the more in-depth Everything DiSC Research Report.
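To illustrate how these two reliability estimates are computed, here is a minimal sketch using NumPy on simulated data. The respondent count, item scores, and noise levels are invented; they stand in for the real DiSC research data summarized in Table 1.

```python
# A minimal sketch of the two reliability checks described above,
# computed on simulated (not actual DiSC) data.

import numpy as np

rng = np.random.default_rng(0)

# --- Internal consistency: Cronbach's alpha ---
# Rows = respondents, columns = items on one scale (simulated as a
# shared trait plus item-level noise).
items = rng.normal(size=(500, 8)) + rng.normal(size=(500, 1))

def cronbach_alpha(item_matrix: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = item_matrix.shape[1]
    item_vars = item_matrix.var(axis=0, ddof=1)
    total_var = item_matrix.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"alpha = {cronbach_alpha(items):.2f}")

# --- Stability: test-retest reliability ---
# Scale scores for the same people at two time points (simulated).
time1 = items.mean(axis=1)
time2 = time1 + rng.normal(scale=0.3, size=time1.shape)  # same trait, some noise
test_retest = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {test_retest:.2f}")
```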

Validity

There are many different ways to examine the validity of an assessment. We will provide two such examples here, but many more are included in the full Everything DiSC Research Report. The DiSC model proposes that adjacent scales (e.g., Di and i) will have moderate correlations. More specifically, these correlations should be considerably smaller than the alpha reliabilities of the individual scales. For example, the correlation between the Di and i scales (.50) should be substantially lower than the alpha reliability of the Di or i scales (both .90). On the other hand, scales that are theoretically opposite (e.g., i and C) should have strong negative correlations. Table 2 shows data obtained from a sample of 752 respondents who completed the Everything DiSC assessment. The correlations among all eight scales show strong support for the model: moderate positive correlations are observed among adjacent scales, and strong negative correlations are observed between opposite scales.

Cronbach’s alpha reliabilities are shown in bold along the diagonal, and the correlation coefficients among scales are shown within the body of the table. Correlation coefficients range from -1 to +1. A correlation of +1 indicates that two variables are perfectly positively correlated, such that as one variable increases, the other variable increases by a proportional amount. A correlation of -1 indicates that two variables are perfectly negatively correlated, such that as one variable increases, the other variable decreases by a proportional amount. A correlation of 0 indicates that two variables are completely unrelated. N = 752, as shown in Appendix 1 of the Everything DiSC Research Report.
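The pattern Table 2 summarizes can be checked directly from scale scores by computing a correlation matrix. The sketch below simulates circumplex-consistent scores for 752 hypothetical respondents (the sample size is borrowed from the text, but the data themselves are invented) and confirms the expected moderate positive correlations between adjacent scales and strong negative correlations between opposite scales.

```python
# A minimal sketch of the kind of check Table 2 summarizes, run on
# simulated (not actual DiSC) respondent data.

import numpy as np
import pandas as pd

scales = ["D", "Di", "i", "iS", "S", "SC", "C", "CD"]
angles = np.linspace(0, 2 * np.pi, num=8, endpoint=False)

# Simulate circumplex-consistent scores: each scale loads on two
# underlying dimensions according to its angle on the circle.
rng = np.random.default_rng(1)
latent = rng.normal(size=(752, 2))
scores = pd.DataFrame(
    latent @ np.vstack([np.cos(angles), np.sin(angles)])
    + rng.normal(scale=0.8, size=(752, 8)),
    columns=scales,
)

corr = scores.corr()
print(corr.round(2))
print("adjacent (Di, i):", round(corr.loc["Di", "i"], 2))  # moderate positive
print("opposite (i, C): ", round(corr.loc["i", "C"], 2))   # strong negative
```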

A statistical technique called multidimensional scaling (MDS) also adds support to the DiSC model as a circumplex. This technique has two advantages. First, it allows for a visual inspection of the relationships among the eight scales. Second, it allows researchers to look at all of the scales simultaneously. In Figure 1, scales that are closer together have a stronger positive relationship; scales that are farther apart are more dissimilar. The circumplex DiSC model predicts that the eight scales will be arranged in a circle at equal intervals. As can be seen in Figure 1, the scales are arranged in the way the DiSC model expects. (Keep in mind that the rotation shown in Figure 1 is the original MDS rotation, which is arbitrary.) Although the eight scales do not form a perfectly equidistant circle (as the model predicts), that theoretical ideal is nearly impossible to obtain with actual data. The actual distances between the scales, however, are roughly equal, providing strong support for the model and its assessment.
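For readers who want to see how such a map is produced, here is a minimal multidimensional scaling sketch, assuming scikit-learn is available. The correlation matrix is generated from an idealized circumplex rather than real respondent data, so the recovered layout is only illustrative of what Figure 1 shows.

```python
# A minimal MDS sketch on an idealized (not actual DiSC) correlation matrix.

import numpy as np
from sklearn.manifold import MDS

scales = ["D", "Di", "i", "iS", "S", "SC", "C", "CD"]
angles = np.linspace(0, 2 * np.pi, num=8, endpoint=False)

# Idealized circumplex correlations: cosine of the angle between scales,
# shrunk toward zero to mimic measurement error.
corr = 0.6 * np.cos(angles[:, None] - angles[None, :])
np.fill_diagonal(corr, 1.0)

# MDS works on dissimilarities, so convert (higher correlation = closer).
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
points = mds.fit_transform(1 - corr)

# Adjacent scales should land next to each other around a rough circle;
# opposite scales (e.g., i and C) should land on opposite sides.
for scale, (x, y) in zip(scales, points):
    print(f"{scale}: ({x:+.2f}, {y:+.2f})")
```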

Results

So, why does this foundation of research matter? The answer is this: an assessment-based learning experience that is revered by learners worldwide.

In fact, Everything DiSC has a 95% satisfaction rating among organizations and a 90% accuracy rating from learners around the globe. With 8,000,000+ participants impacted across 130,000+ organizations worldwide (in over 70 countries!), we’re confident in our research and our product.

The truth is, there are lots of different personality assessments out there to choose from. But if you’re looking for a proven personality assessment to deepen understanding of self and others, inspire behavior change, and spark organizational culture improvement, Everything DiSC is your tried-and-true solution.