Frequently asked questions
What are the leading business schools saying?
Chief Brand & Sustainability Officer
XLRI Xavier School of Management
“For the first time, business management students across the world have joined hands to assess their business schools on how they perceive their positive impact in the community and society at large.
This marks a paradigm shift to foster a collaborative ecosystem and make the process of management education more meaningful and purpose-oriented.”
School of Business, Economics and Law at the University of Gothenburg
“We are proud to be among the top 30 in the Positive Impact Rating for business schools. It confirms our conscious path of development and spurs us to go further.
This new sophisticated rating mechanism brings with it an important modernization of the roles that business schools are expected to play in society.”
Steven de Haes
Antwerp Management School
“We applaud the evolution toward measuring the value of a business school with holistic criteria that span beyond a mere financial focus.
The Positive Impact Rating (PIR) underlines the importance of taking an integrated view of the ‘return’ of education, and we are honored to have received this recognition. It fuels our continued commitment to partner with customers and corporations on creating positive impact.”
Audencia Business School
“The Positive Impact Rating encourages business schools to make responsible management education a key priority, thus responding to the expectations of students, business and society.
Business schools have a key role to play in preparing responsible leaders who will invent and deploy new business models and strategies that contribute to the United Nations’ sustainable development goals.”
Julia Christensen Hughes, Founding Dean
Gordon S. Lang School of Business and Economics, University of Guelph
“We are delighted to be included in this innovative, global, student-led rating of sustainability-focused business schools. This recognition is an important affirmation of our vision.
The world needs courageous leaders committed to advancing business as a force for good. We have desperately needed a new approach to rankings and metrics; congratulations to the PIR team for disrupting traditional rankings and helping us pivot to a better future.”
Maastricht University, School of Business and Economics
“We strongly believe schools of business and economics, like ours, play an important role in developing future leaders: leaders who address the challenges our world is facing.
We are very proud to be acknowledged by the Positive Impact Rating for having a culture that supports the personal development of both students and staff. We hope that this recognition will encourage other schools to join us on this path.”
What is the PIR?
The Positive Impact Rating (PIR) is a radically new student-based rating that measures the positive impact of business schools. It aims to change the thrust of existing rankings: from pushing leading schools to be the best in the world to encouraging them to be the best for the world. It addresses the ongoing critique that existing rankings mainly support the economic and self-interested goals of already privileged actors, without reflecting the schools’ role as important social actors. In times of pressing global challenges and increasing societal conflicts, this cannot suffice. To remain positive contributors, business schools need to change in education and research, but also in their structures, cultures and public outreach. Rankings and ratings are seen as a key lever for change in the business school landscape. The ambition of the PIR is to trigger positive change by providing schools with insights into what the next generation thinks and aspires to.
What is the purpose of the PIR?
The purpose is to measure how business schools contribute to solving societal challenges by energizing the school and its culture, by educating responsible leaders, by providing relevant research results and offers for continuing education, by participating in the public debate and by being a role model institution.
What does positive impact mean?
The PIR is a rating inspired by the societal purpose and outcomes of business schools, in the spirit of their responsibility as custodians of society. Traditionally, business schools are seen to serve mainly students, by developing their management competences, and business organizations, by providing them with educated talent, insights from research and continuing education for their staff. They thereby support business and the economy. Providing a positive impact to society has traditionally not been seen as core to business schools, but demands for it have increased steadily in recent years. The PIR responds to these demands for a positive impact of business schools, as exemplified by the UN Sustainable Development Goals.
How is positive impact measured?
What is the object of evaluation?
The PIR looks at and evaluates the business school as a whole and thereby applies a holistic perspective. It does not focus on specific programs (e.g. the MBA program) or activities (e.g. campus operations) as many other ranking or rating systems do.
In which way is the PIR a rating “by students and for students”?
The PIR is based on an assessment by (undergraduate and graduate) students who evaluate their own school, a place which they know very well and which is close to their hearts and minds. Students are “a”, if not “the”, main stakeholder of business schools, so their evaluations are highly relevant for the school. The collection of data is organized through student associations at each school. These associations take responsibility for assessing the positive impact of their own schools and get access to the collected data through an online dashboard. The PIR thereby also serves as a tool for empowering students to use and communicate the data at their schools and beyond.
Why is the PIR “perception based” rather than “fact based”?
The PIR has been designed as perception based, using subjective assessments by students, rather than fact based. Why use perceptions? Perceptions provide insights into qualitative assessments of reality as perceived by relevant actors. Because students assess their own school, their perceptions are highly relevant both for the school and for current and prospective students. Perceptions define reality for the actors and guide their actions. Moreover, perceptions reach beyond the present and provide foresight into the expected future, which is difficult to achieve through the collection of facts. Facts typically do not take into account different societal and cultural conditions and needs. The PIR deliberately provides an alternative perspective to traditional rankings, which mostly rely on facts.
Why is the PIR structured as a rating and not as a ranking?
A rating categorizes schools into distinct groups of broadly similar performers, while a ranking positions business schools in a highly differentiated league table. Rankings are increasingly criticized for creating differences between schools that are often not practically meaningful. They also pit schools against each other in a field where competition is far less relevant than in fields like business. Moreover, ranking management has become an important new discipline for business schools, diverting attention and resources away from other, often more important tasks. Cooperative and collective activities should not be discouraged by rankings; they should be supported. The PIR reduces the potential for competitiveness by grouping the schools into 5 levels (“quintiles”) according to their overall scores. Within each level, schools are listed alphabetically, not by position, and only schools on the higher levels are named.
Why does the PIR classify schools on an absolute scale and not on a relative scale?
Most rankings define their scales in a relative way, by using the best performing school for the upper end of the scale and the poorest performing school for the lower end. Then all other schools are positioned between these two ends. This way the performance is measured relative to the other participating schools. When the field of participating schools changes the scale changes as well. And, more importantly, it measures the performance of the schools relative to the existing level of impact. The PIR, however, measures and classifies business schools on an absolute scale, which is independent of the schools participating in the rating. And it measures their performance against a required level of impact, as expressed by the expectations of their students. It thereby highlights the potential for improvement, even for leading schools.
What is the value proposition of the PIR?
The PIR allows students to find a business school that prepares them as global change makers in the 21st century and equips them with the required competences. It allows participating schools to use the survey results and their data as a tool for external benchmarking and internal development. It allows business and other organizations to evaluate the schools and their graduates based on their performance and ambitions to have a positive impact on society and the world. And it allows business and civil society actors to find business schools as like-minded partners for their own positive impact strategies and actions.
In which way is the PIR supporting change and development in the business school sector?
Who is behind the Positive Impact Rating?
The Positive Impact Rating was initiated in 2017 by a large group of academics and institutional leaders from the management education field (GRLI, PRME, HESI, GBSN) with the intention to support fundamental change in the business school sector with regard to the schools’ societal responsibility and impact. Its activities are endorsed and supported by WWF Switzerland (environment), OXFAM (society) and Global Compact Switzerland (business), and it is operated in close collaboration with student organizations: oikos International, Net Impact, AIESEC, SOS UK and Studenten voor Morgen. It is supported by partners Viva Idea (Costa Rica, financial and operational support) and Fehr Advice (Zurich, Switzerland, data management). It is operated by a core group of actors and the Swiss foundation MISSION POSSIBLE, which are part of the Positive Impact Rating Association. It has been inspired by the 50+20 vision.
How were the participating business schools selected?
We selected the top 50 business schools from the Financial Times Masters in Management (MiM) Ranking 2018 and the top 50 business schools from the Corporate Knights Green MBA Ranking 2018. In the spirit of openness and inclusiveness, we also included other schools that expressed an interest in being rated.
Of a total of 97 schools contacted, 51 schools agreed to participate in the survey. 6 schools actively prevented their students from conducting the survey, and 12 schools said they were not ready to participate or their students realized they did not have the capacity to conduct the survey at this time. At the remaining 28 schools, the local student organizations could not be located or reached.
How many and which business schools have participated in the rating?
51 business schools collected data, with 3,000 students completing the online survey. The schools come from 22 countries and 5 continents.
Which business schools ended up being rated in the PIR?
Of all participating business schools, 33 collected a sufficient number of responses (30 or more) to be rated. For data consistency, the number of student responses had to be reduced from 3,000 to 2,450. 30 schools are being featured: 9 schools are positioned on level 4 of the five-level rating and 21 schools on level 3. No school reached the highest rating level 5. You can find an overview of the top schools in the 2020 PIR rating here.
The 30 featured business schools come from 15 countries and 4 continents: 16 come from Europe, 10 from North America, 3 from Asia, and 1 from Central America.
How was the data collected?
The survey was run online between October and December 2019, with questions and explanations provided in English only. It was distributed to fellow bachelor and master students by local student organizations, contacted through the participating student associations, or by student organizations and engaged students located through local sustainability offices or professors. PhD students were excluded. These groups used different strategies and routes to reach the students.
In distributing the survey and inviting their fellow students to participate, the student organizations sent out a school-specific link, which allowed the students to directly access the survey tool. Although we instructed the student organizations to respect the sensitivity of this link and asked all respondents at the beginning of the survey to pledge to respond honestly and truthfully, we cannot guarantee that this link was not misused (e.g. a student filling out more than one questionnaire).
How were the business schools rated?
In answering the 20 questions distributed across the three areas and seven dimensions, the same rating scale was used for all questions, ranging from 1 (“I don’t agree”) to 10 (“I completely agree”). A 0 option (“I am not sure”) was also provided for every question, ensuring that students had the chance to opt out. The overall PIR score of a school was calculated by taking the mean of all responses to each question and then averaging these means to determine the scores of the 7 dimensions and the 3 areas. Where students chose the 0 option, special precautions had to be taken to ensure data consistency. This reduced the number of students included in the survey from 3,000 to 2,450.
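A minimal sketch of this aggregation in Python; the dimension names, question ids and responses are invented for illustration, and the sketch averages the dimension scores directly into the overall score (whereas the PIR also aggregates the dimensions into the three areas):

```python
from statistics import mean

def pir_score(responses, dimension_questions):
    """Compute a school's overall PIR score from student responses.

    responses: list of dicts mapping question id -> answer on the
               1-10 scale, where 0 means "I am not sure".
    dimension_questions: dict mapping dimension name -> question ids.
    """
    # Mean of all valid answers per question; 0 ("I am not sure")
    # answers are excluded, one simple way of handling the opt-outs.
    question_scores = {}
    for question_ids in dimension_questions.values():
        for q in question_ids:
            valid = [r[q] for r in responses if r.get(q, 0) != 0]
            if valid:
                question_scores[q] = mean(valid)

    # Dimension score = mean of that dimension's question scores
    dimension_scores = {
        dim: mean(question_scores[q] for q in qs if q in question_scores)
        for dim, qs in dimension_questions.items()
    }

    # Overall score = mean of the dimension scores
    return mean(dimension_scores.values())

# Invented example: two dimensions, three questions, two students
dims = {"Energizing": ["q1", "q2"], "Educating": ["q3"]}
students = [{"q1": 8, "q2": 0, "q3": 7}, {"q1": 6, "q2": 9, "q3": 0}]
print(pir_score(students, dims))  # q1 -> 7, q2 -> 9, q3 -> 7; overall 7.5
```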
How were the levels defined?
The overall PIR score of a business school was used to position the school on one of five levels (“quintiles”). The levels were defined with decreasing widths on the 10-point scale, to express the increasing challenge of reaching higher levels. The end point of level 1 was set at the lowest score achieved by a school. The characterizations of the different levels refer to the developmental stage of the business school.
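One way to sketch this classification in Python; the cut-off values below are invented for illustration (the actual PIR boundaries are not stated here) and only reproduce the stated property that levels become narrower toward the top:

```python
import bisect

# Hypothetical cut-offs on the 10-point scale. The widths of levels
# 2-5 (2.5, 1.7, 1.1, 0.7) shrink toward the top, reflecting the
# increasing challenge of reaching higher levels; level 1 runs from
# the lowest observed school score up to the first cut-off.
LEVEL_BOUNDARIES = (4.0, 6.5, 8.2, 9.3)

def pir_level(score):
    """Map an overall PIR score (1-10) to a level from 1 to 5."""
    # bisect_right counts how many cut-offs the score has passed
    return bisect.bisect_right(LEVEL_BOUNDARIES, score) + 1

print(pir_level(7.0))  # a score of 7 would land on level 3 here
```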
What are the results from the statistical analysis?
School scores are all very close to each other: the average score across all schools is 7 and the standard deviation between them is 1, which means there is very little difference between the schools’ answers. The correlations between the scores of the 3 areas and the 7 dimensions are also very high.
There is a significant effect of student age on rating score: the older the student, the higher the rating. There is a significant negative effect of time of study on rating score: the longer a student has studied, the more critical the rating. And there is some gender bias: men rated higher than women, meaning that women rated more critically. In terms of representation, female participation, while higher overall, was about equally higher across all regions. Bachelor and master students were also equally represented across the regions.
There is no significant cultural bias by region, meaning there is no significant relationship between a school’s region and its PIR score. There are no significant differences between the responses of national and international students, and no significant differences between bachelor and master student responses.
How valid are the data and the results?
A number of limitations concerning the reliability and validity of the data and results of this first round of the PIR have to be pointed out:
We cannot exclude selection bias in our sample. The sample is probably not representative of the business school as a whole; students with an affinity for issues of responsibility and sustainability are likely to be overrepresented. And since the local student organizations used different approaches to gather student responses, the distribution of respondents will vary as well.
Using 30 responses as a cut-off point to include a school in the rating was a pragmatic decision. This is admittedly at the lower end of our expectations and, naturally, we would have liked to reach a much higher number of responses. How valid our first PIR results are will only become clear in the coming years, with more ratings and larger numbers.
Also, some questions received a fairly high number of zeros, with students admitting that they do not know the answer, which points to a mismatch between some of the questions asked and the students answering them.
Based on these limitations, which we share transparently, we have to remain careful in our interpretation of the results. School results and their ratings may, and probably will, look different next year as we learn, improve our processes and increase the number of schools participating.