Frequently asked questions

Your PIR questions answered

PIR aims and objectives

What is the PIR?

The Positive Impact Rating (PIR) is the only student-based rating measuring the positive impact of business schools. It aims to shift the thrust of existing rankings: from pushing leading schools to be the best in the world toward being the best for the world.

What is the purpose of the PIR?

The purpose is to measure how business schools contribute to solving societal challenges: by energizing the school and its culture; by educating responsible leaders; by providing relevant research and continuing-education offerings; by participating in the public debate; and by being a role-model institution.

What is the value proposition of the PIR?

The PIR allows students to find a business school that prepares them to be global change makers in the 21st century and equips them with the required competencies. It allows participating schools to use the survey results and their data as a tool for external benchmarking and internal development. It allows business and other organizations to evaluate the schools and their graduates based on their performance and ambition to have a positive impact on society and the world. And it allows business and civil society actors to find business schools as like-minded partners for their own positive impact strategies and actions.

Why did we create the PIR?

The PIR addresses the ongoing critique that existing rankings mainly support the economic and self-interested goals of already privileged actors, without reflecting the schools' role as important social actors. In times of pressing global challenges and increasing societal conflicts, business schools must rethink and adapt their role and contribution to society. The two dominant global accreditation standards for business schools (EQUIS and AACSB) now demand the full integration of responsibility and sustainability into all core elements of their standards. To remain positive contributors, business schools need to adapt not only their offerings but also their structures and cultures. Rankings and ratings are seen as a key lever for change in the business school landscape. The ambition of the PIR is to trigger positive change by giving schools insight into what the next generation thinks and aspires to.

Positive impact in business schools

How is positive impact measured?

The PIR is based on a clear conceptual model of the Positive Impact of business schools as originally developed by the 50+20 vision. It looks at the whole school in all of its key areas and dimensions. The model distinguishes between 3 areas and 7 dimensions and is operationalized through 20 questions:

Area 1: Energizing - comprises the 2 dimensions Governance and Culture. It enables and energizes business schools to effectively pursue - and eventually create - positive impact.

Area 2: Educating - comprises the 3 dimensions Programs, Learning Methods, and Student Support. It refers to a core function of business school impact: preparing students to become responsible future leaders in business and society.

Area 3: Engaging - comprises the 2 dimensions Institution as a Role Model and Public Engagement. It refers to the need for business schools to earn the trust of students and society, and to engage as respected public citizens.

What are the survey changes in 2021?

The survey experts met to review the survey based on our First Edition results in 2020 as well as feedback from participating schools and students. Two changes in the area of "Educating" resulted from this. First, one question in the "Learning Methods" dimension was slightly rephrased to improve understanding. Second, the "Student Engagement" dimension was entirely reconsidered. Since the PIR survey assesses the performance of a business school, rather than the performance of its students, we renamed this dimension "Student Support", hence measuring the activities of the school rather than the engagement of its students. Three new questions now assess the school's ability to support and encourage students in their societal engagement activities. The expert team will meet again after the completion of the 2021 edition to review the survey and discuss areas of improvement.

A student-led initiative

In which way is the PIR a rating “by students and for students”?

The PIR is based on an assessment done by undergraduate and graduate students who assess their own school, a place which they know very well and which is close to their hearts and minds. Students are among the main stakeholders of business schools, if not the most important one, and their evaluations are highly relevant for the school. The collection of data is organized through student associations at each school. They take responsibility for assessing the positive impact of their own schools and get access to the data collected through an online dashboard. The PIR thereby also serves as a tool for empowering students to use and communicate the data at their schools and beyond.

Why is the PIR “perception based” rather than “fact based”?

The PIR has been designed as perception based, using subjective assessments by students, rather than fact based. Why do we use perceptions? Perceptions provide insights into qualitative assessments of reality as perceived by relevant actors. Because students assess their own school, their perceptions are highly relevant for the school and for current and future students. Perceptions define reality for the actors and guide their actions. Moreover, perceptions reach beyond the present and provide foresight into the expected future, which is difficult to achieve through the collection of facts. Facts typically do not take into account different societal and cultural conditions and needs. The PIR deliberately provides an alternative perspective to traditional rankings, which mostly rely on facts.

How do students rate their school?

Student associations are responsible for the coordination and communication of the PIR survey in their school. They engage with fellow students to anonymously complete the survey.

Each student association is provided with a unique PIR dashboard and link to their survey, which includes 20 questions related to the three areas and seven dimensions of the PIR. In each of the dimensions, students are asked to assess their school's current state to create a positive impact.

A further three open-ended questions ask students what their school should start, stop, and continue doing in support of its commitment to providing management education that results in a positive impact for the world.

Data collection

How is the data collected?

The 2021 survey was run online between December 2020 and February 2021, with questions and explanations provided in English only. Local student organizations distributed the survey to Bachelor and Master students. They were prepared and supported by the PIR student coordinator.

The local student organizations had access to their school-specific dashboard, which they could use to monitor the number of student responses. Their goal was to reach a minimum of 100 responses: 50 from Bachelor students and 50 from Master students.

How are the business schools rated?

In answering the 20 questions distributed across the three areas and seven dimensions, the same rating scale was used for all questions. It ranges from 1 ("I don't agree") to 10 ("I completely agree"). A 0 option ("I am not sure") was provided for every question as well, ensuring that students had the chance to opt out. The overall PIR scores of a school were calculated by using the means of all individual responses to a question, a dimension, or an area. In cases where a 0 option was chosen by a student, special precautions had to be taken to ensure data consistency.
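The aggregation described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the PIR's actual implementation: the FAQ only says "special precautions" are taken for 0 answers, so treating a 0 ("I am not sure") as a missing value that is excluded from the mean is an assumption made here for the sketch.

```python
from statistics import mean

def question_mean(responses):
    """Mean of the 1-10 ratings for one question.

    Assumption: 0 ("I am not sure") answers are treated as missing
    and excluded from the calculation.
    """
    valid = [r for r in responses if 1 <= r <= 10]
    return mean(valid) if valid else None

def school_score(answers_by_question):
    """Overall school score: mean of the per-question means."""
    means = [m for m in (question_mean(r) for r in answers_by_question.values())
             if m is not None]
    return round(mean(means), 2)

# Example: three (hypothetical) questions answered by five students each
answers = {
    "Q1": [8, 9, 0, 7, 10],   # the "not sure" answer is dropped
    "Q2": [6, 7, 8, 8, 9],
    "Q3": [9, 9, 10, 0, 8],
}
print(school_score(answers))  # prints 8.37
```

Dimension and area scores would be computed analogously, by averaging the means of the questions belonging to each dimension or area.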

How were the levels defined?

The overall PIR score of the business school was used to position the school on one of five levels (quintiles). The levels were defined with decreasing widths on the 10-point scale, to express the increasing challenge of reaching higher levels. The end point for level 1 was set at the lowest score achieved by a school. The characterizations of the different levels refer to the developmental stage of the business school.
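A level assignment along these lines can be sketched as a lookup against band boundaries that narrow toward the top of the scale. The cutoff values below are invented purely for illustration; the FAQ describes the shrinking-band principle but does not publish the actual boundaries.

```python
# Hypothetical level boundaries: bands get narrower toward the top of
# the 10-point scale, so higher levels are harder to reach. These
# numbers are illustrative only, not the PIR's published cutoffs.
LEVEL_CUTOFFS = [
    (9.2, 5),  # level 5: width 0.8
    (8.0, 4),  # level 4: width 1.2
    (6.5, 3),  # level 3: width 1.5
    (4.5, 2),  # level 2: width 2.0
]

def pir_level(score):
    """Map an overall PIR score to a level (1-5)."""
    for lower_bound, level in LEVEL_CUTOFFS:
        if score >= lower_bound:
            return level
    return 1  # everything below the level-2 band

print(pir_level(8.4))  # prints 4 (under these illustrative cutoffs)
```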

What are the results from the statistical analysis?

The statistical analysis shows significant associations between demographic characteristics of the student sample and the overall rating scores of their schools. Males rate their schools higher than females do. Master students rate them higher than Bachelor students. International students rate them higher than national students do. Older students rate them higher than younger students do. Ratings are highest among students who have spent only one year at business school and lowest among those who have spent three or more years.

What are the methodological limitations?

A limitation of the PIR survey lies in the high correlations between the survey questions in the seven dimensions, leading up to the three assessment areas. On the one hand, a high correlation confirms the solidity of the model and how tightly the questions cover the one thing we want to measure, namely the positive impact contribution of business schools. On the other hand, a high correlation between the PIR dimensions and areas suggests opportunities for removing redundancies among the questions. Our experts have reviewed the pros and cons and have adopted the position that the survey methodology was specifically designed to respond to the expectations of the expert panel that created the methodology and the multi-stakeholder panel that finally decided on its structure and elements. Its purpose is not only to assess the positive impact of business schools but also to provide them with practical guidance on how to report on their activities and what to do to improve their positive impact. Fewer questions leading to fewer dimensions might improve the stringency of the survey, but it would at the same time reduce the value of the results as a management tool for transforming business schools.

Beyond these limitations, we remain careful in our interpretations of the results. As we have seen after two editions of the PIR, school results and their ratings may, and will, look different every year as we continue to learn, improve our processes, and increase the number of participating schools.

From competition to collaboration

Why is the PIR structured as a rating and not as a ranking?

A rating categorizes schools into distinct but similar groups, while a ranking positions business schools in a highly differentiated league table. Rankings are increasingly criticized for creating differences between schools that are often not practically meaningful. They also pit schools against each other in a field where competition is less relevant than in business. Moreover, ranking management has become an important new discipline for business schools, diverting attention and resources away from other, often more important tasks. Cooperative and collective activities should be supported by rankings, not discouraged. The PIR reduces the potential for competitiveness by grouping the schools into 5 levels ("quintiles") according to their overall scores. In addition, the schools are listed alphabetically within these levels, not by position. And only schools on the higher levels are named.

Why does the PIR classify schools on an absolute scale and not on a relative scale?

Most rankings define their scales in a relative way, by using the best performing school for the upper end of the scale and the poorest performing school for the lower end. Then all other schools are positioned between these two ends. This way the performance is measured relative to the other participating schools. When the field of participating schools changes the scale changes as well. And, more importantly, it measures the performance of the schools relative to the existing level of impact. The PIR, however, measures and classifies business schools on an absolute scale, which is independent of the schools participating in the rating. And it measures their performance against a required level of impact, as expressed by the expectations of their students. It thereby highlights the potential for improvement, even for leading schools.

In which way is the PIR supporting change and development in the business school sector?

The PIR is a joint effort by academic actors and institutions together with prominent actors from civil society to support change and transformation in a change-resistant industry. By evaluating business schools on their positive impact and by highlighting progressive players and relevant innovations, the PIR supports a transformation of the business school sector towards purpose orientation. It is aligned with the Global Agenda of the UN Sustainable Development Goals and offers a basis for measuring the positive impact of a transformed management education for the world. Also, by providing students and school management with easy access to their data through a dashboard, student organizations and other actors are empowered to support the purpose orientation of their schools.

Participating in the PIR

What was required from the schools to participate in the PIR?

For the 2021 edition of the PIR, the school administrations were approached by the PIR office and asked to sign up for participation. They had to pay a participation fee of 1,880 Euro and ensure a committed student association for independent coordination of the data collection. They had to agree to follow the PIR principles and respect the integrity of the student voice.

The PIR is formally organized as an independent not-for-profit association under Swiss law. The fee is used exclusively to cover the costs of operating the PIR. Also, the PIR Association aims to be as inclusive as possible of schools from all countries, including emerging regions.

How many business schools participated in the rating?

We invited over 200 business schools to take part in the 2021 edition of the PIR. Students from 47 schools located on four continents and in 21 countries ended up participating in the survey. While the number of participating schools and countries remained comparable to the first edition, the number of student participants more than tripled: from 3000 to 9600 collected responses, or from 2450 to 8800 usable responses. While in the first edition the average number of participating students per school was 48, it increased to 187 in the second edition.

Where did the schools come from?

Of the participating schools, about half rejoined from the 1st edition, with the other half participating for the first time. Western Europe was represented with 16 schools; North America, Northern Europe, and Southern Europe each with 6 schools; Asia and Eastern Europe/Russia with 4 schools each; Central/South America with 2 schools; and Africa with 1 school.