What We Know About Integrated Reasoning Six Months After Launch

Mar 1, 2013
Tags: Integrated Reasoning, Official GMAT, Validity

Provided by Lawrence M. Rudner, vice president, Research and Development and chief psychometrician for the Graduate Management Admission Council.

Business schools want to know whether you can evaluate, synthesize, and extract the important information and sort out the noise from very large volumes of data. With the launch of the Integrated Reasoning section in June 2012, the GMAT exam began measuring these skills, which are essential for learning in today’s programs, expected in today’s workplace, and of critical importance to the businesses you may create or join in the future.

In the first six months of Integrated Reasoning, more than 105,000 exams were administered. While it will take more time to establish predictive validity for individual programs – that is, to state precisely to what extent the section adds to the already high ability of the GMAT exam to predict your potential for success in the classroom – we have been able to conduct some preliminary analysis to see whether the test shows any bias toward or against any subgroup of test takers, and how test takers who score similarly on the Quantitative and Verbal sections perform on the new section.

An Objective Measure 

Our first big question was whether different groups of test takers with the same general ability level receive the same IR score. As the sponsor of the premier test for identifying talent around the world, GMAC needed to ensure that IR was meeting our standards as an impartial, objective measure. Our analysis shows that this requirement has been met. When test takers are matched on their Verbal and Quantitative Reasoning skills using analysis of covariance, the differences in IR scores between native and non-native English speakers, US and non-US citizens, US white and non-white test takers, and business and non-business undergraduate majors are, in all cases, less than one quarter of a standard deviation. The observed differences are psychometrically minor and practically inconsequential.
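
As an aside for readers who want to see the mechanics, a covariate-adjusted comparison of this kind can be sketched in a few lines of Python. The data, column names (ir, verbal, quant, group), and group labels below are all hypothetical; this is an illustration of the general ANCOVA technique, not GMAC’s actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data standing in for real test records (hypothetical columns).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "verbal": rng.normal(0, 1, n),
    "quant": rng.normal(0, 1, n),
    "group": rng.choice(["native", "non_native"], n),
})
# IR score driven by verbal/quant ability plus noise; no true group effect here.
df["ir"] = 0.5 * df["verbal"] + 0.5 * df["quant"] + rng.normal(0, 1, n)

# ANCOVA: regress IR on the group indicator while controlling for
# Verbal and Quantitative scores (the covariates).
model = smf.ols("ir ~ C(group) + verbal + quant", data=df).fit()

# The coefficient on the group dummy is the covariate-adjusted group difference;
# dividing by the IR standard deviation expresses it in standard-deviation units.
adj_diff = model.params["C(group)[T.non_native]"]
effect_size = adj_diff / df["ir"].std()
print(f"Adjusted group difference: {effect_size:.3f} SD")  # small value => little evidence of bias
```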

The process used to develop IR ensured that the section would have content and construct validity. Our surveys of business schools informed the design and documented that IR would, in fact, measure the valued skills. Our item writing, form design, and equating procedures ensure that we are assessing what we set out to measure.

How IR and Other Scores Correlate 

A remaining issue we’re looking at is predictive validity: Does the IR section add to the already high ability of the GMAT exam to predict core course grades? While we will not have direct predictive validity data for some time, our analysis of the data so far shows that the correlation of IR with GMAT Total is .55. In other words, Quant, Verbal, and IR are all measuring something related, yet also different. The observed IR-GMAT Total score correlation indicates that IR is likely to add meaningfully to the predictive validity of the GMAT. Our 2008 meta-analysis involving 273 programs showed an average multiple R of .53 when predicting core grades from a test taker’s GMAT Total, AWA, and Undergraduate GPA.  Based on the observed correlations from the first six months as well as a number of simulations, we anticipate that adding IR will increase the average multiple R to .59 or better. This is very impressive for a 12-question subtest. 
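
For readers who want to see the arithmetic behind that expectation, here is a minimal Python sketch of how a multiple correlation changes when a new predictor is added, using the standard formula R² = r'R⁻¹r. The baseline multiple R of .53 and the .55 IR–Total correlation come from the figures above; the assumed IR validity of .50 and the use of the IR–Total correlation as a stand-in for the IR–composite correlation are illustrative assumptions, not GMAC results.

```python
import numpy as np

# Illustrative figures (see lead-in for which are assumptions):
r_old = 0.53      # existing composite (GMAT Total, AWA, UGPA) vs. core grades
r_new = 0.50      # assumed IR validity against core grades (illustrative)
r_old_new = 0.55  # IR vs. GMAT Total, used here as a proxy for IR vs. composite

# Correlation matrix among the two predictors and their correlations with the criterion.
Rxx = np.array([[1.0, r_old_new],
                [r_old_new, 1.0]])
rxy = np.array([r_old, r_new])

# Squared multiple correlation: R^2 = rxy' Rxx^{-1} rxy
R2 = rxy @ np.linalg.solve(Rxx, rxy)
print(f"Multiple R with IR added: {np.sqrt(R2):.2f}")  # ~0.59 under these assumptions
```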

An Additional Data Point 

The Graduate Management Admission Council, the global non-profit of leading business schools, has been advocating the use of IR as an additional data point for schools considering candidates with similar GMAT scores. We now have empirical data to support that recommendation. 

The following chart shows the distribution of IR scores for all GMAT test takers scoring between 600 and 640 during the first six months. Other score segments show similar distributions. Within each segment, test takers display a wide range of Integrated Reasoning skills. Other things being equal, the test takers demonstrating more skill in this area within each range can be expected to be better students.

[Chart: Distribution of IR scores for test takers with GMAT Total scores of 600–640]

The implications at this juncture are clear. Schools are asking for and receiving IR score distributions for their programs, and we are conducting concurrent validity studies to explore the correlation between IR scores and class performance. For students, the message is to take Integrated Reasoning seriously: IR provides an excellent way to distinguish yourself and demonstrate your capabilities, and familiarity with the section needs to be part of your test preparation.

Lawrence M. Rudner, PhD, MBA, is vice president of Research and Development and chief psychometrician for the Graduate Management Admission Council. This blog post is reworked from a Demystifying the GMAT column that originally ran in Graduate Management News in January 2013.
