Benchmarking has emerged as a buzzword during the last decade as a tool for quality assessment and improvement. It is a planned, systematic effort with clear objectives and processes to measure and compare performance against the best in class. As UNESCO states in its benchmarking study, the desire to learn from each other and to share aspects of good practice is almost as old as the university itself. Thus, improving performance through collaboration or comparison with other universities is nothing new in higher education. Recent benchmarking efforts, however, stress the formalization of such comparisons, which is not as easy as it seems. Higher Education Institutions (HEIs) globally face many challenges in formalizing and systematizing the benchmarking approach. Many HEIs simply imitate best practices without considering whether the playing field is level, which ultimately results in mismatch and brings chaos instead of improvement. This study provides a framework for formal benchmarking by using the Analytic Hierarchy Process (AHP) to adapt best practices for quality enhancement in Saudi Arabian business schools.
The motivation for the present study arises from the benchmarking efforts and the problems and challenges faced by the College of Business Administration Al-Kharj (henceforth CBAK), the researchers' own institution. The curriculum of CBAK, like that of many other colleges in Saudi Arabia, was designed with the top business schools in the USA in mind. This was certainly informal benchmarking, but it was done without any scientific basis, and many senior faculty lament that it has not rendered the desired results within the Saudi context. Moreover, the curriculum taught in US universities had been developed incrementally over several decades.
In addition, there is a great difference between the university entry-level requirements of Saudi and US students. US students have English as their mother tongue, while Saudi students learn English only at the undergraduate level. Only recently, at the behest of the National Commission for Assessment and Academic Accreditation (NCAAA), have efforts been made to upgrade the entry requirements in Saudi universities, with the introduction of a compulsory preparatory year, before formal enrolment in an undergraduate program, for students weak in English and/or mathematics.
Another important aspect is cultural difference: Saudi Arabia, being a Muslim country, has Islamic requirements that the curriculum must fulfil, while students in the USA have no such requirements.
Benchmarking in higher education: Benchmarking has emerged as one of the most efficient tools for management and quality improvement in various types of organizations. The literature on benchmarking has evolved over the years and grows richer year by year; among the reviews are Jackson et al. (1994), Zairi and Youssef (1995, 1996), Vig (1995), Czuchry et al. (1995), Dorsch and Yasin (1998) and Dattakumar and Jagadeesh (2003).
Any sector-specific study will find enough literature to frame its research problem, and the present study is no exception. In the field of education, the use of benchmarking, its importance, and its different models and approaches have been studied and applied by various authors in independent studies and funded projects. Major contributors to this stream include Tang and Zairi (1998), Yarrow and Prabhu (1999), McKinnon et al. (2000), Wan Endut et al. (2000), Weeks (2000), Prasnikar et al. (2005) and Stella and Woodhouse (2007). The list of such studies is very comprehensive. Most of them emphasize the use of benchmarking in Higher Education Institutions (HEIs) as a vital tool for total quality management and performance improvement.
AHP and benchmarking: The Analytic Hierarchy Process (AHP) is a multi-criteria decision-making process initially developed by Thomas L. Saaty in the late seventies. The technique has matured over time, is widely used by researchers and is frequently applied in benchmarking studies across various fields. Kabir et al. (2012) used a fuzzy analytic hierarchy process (FAHP) approach to support the online retail benchmarking process. Faisal et al. (2011) applied AHP to rank seventeen Total Quality Management (TQM) practices in the service industry. Joshi et al. (2011) integrated Delphi-AHP-TOPSIS techniques into a benchmarking framework for evaluating a company's cold chain performance. Kannan (2009) used AHP to assist the ocean container carrier industry in benchmarking service quality. Raharjo et al. (2007) combined AHP with Quality Function Deployment (QFD) analysis to integrate the dynamics of competitors' performance and of customer preference, along with their interaction, for benchmarking. Stella and Woodhouse (2007) used AHP to identify the best benchmarking partner for Value Management (VM) in China. Shee (2006) proposed a framework for Competence Set (CS) expansion using the analytic hierarchy process, presenting a case study of a small and medium-sized enterprise (SME). Chan et al. (2006) employed a double-AHP methodology to benchmark the logistics performance of the postal industry.
Wang et al. (1998), comparing two important techniques, AHP and the prioritization matrix, concluded that if time, cost and difficulty are the major concerns in a company's product improvement, the prioritization matrix method should be preferred; where accuracy is the major requirement, AHP is the better choice.
Although AHP is a well-developed tool, its applications are rarely noticed in HEIs, and it has a negligible presence in benchmarking within the Saudi higher education sector.
The study is an empirical work based on primary data collected by administering a questionnaire; survey responses were analyzed using the Statistical Package for the Social Sciences (SPSS). The questionnaire is available from the authors upon request. Secondary data were collected mainly from the public domain. The following paragraphs explain the AHP approach adopted in this study.
AHP approach: The Analytic Hierarchy Process (AHP) is a procedure suited for complex technological, economic and socio-political decision-making (Saaty, 1990). AHP was developed by Thomas L. Saaty in the early 1970s to help individuals and groups deal with multi-criteria decisions. By incorporating both subjective and objective data into a logical hierarchical framework, AHP provides decision-makers with an intuitive, common-sense approach to evaluating the importance of every element of a decision through a pair-wise comparison process (Saaty and Vargas, 1991).
The AHP method comprises four steps (Saaty, 2000). The first step involves decomposing the problem into attributes; each attribute is further decomposed into sub-attributes/alternatives down to the lowest level of the hierarchy. The second step weighs each pair of attributes and sub-attributes using the rating scale developed by Saaty (2000).
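The decomposition and weighing steps can be sketched in code. The following Python fragment (a minimal illustration with hypothetical judgments, not the study's data) builds a reciprocal pair-wise comparison matrix from upper-triangle judgments on Saaty's 1-9 scale:

```python
import numpy as np

def pairwise_matrix(judgments, n):
    """Build a reciprocal pair-wise comparison matrix from Saaty-scale
    judgments given for the upper triangle, i.e. (i, j) pairs with i < j."""
    A = np.eye(n)                 # an attribute compared with itself scores 1
    for (i, j), v in judgments.items():
        A[i, j] = v               # intensity of importance of i over j (1-9)
        A[j, i] = 1.0 / v         # the reverse comparison is the reciprocal
    return A

# Hypothetical judgments for three criteria: criterion 0 is moderately more
# important than criterion 1 (3) and strongly more important than criterion 2 (5).
A = pairwise_matrix({(0, 1): 3, (0, 2): 5, (1, 2): 3}, n=3)
print(A)
```

Only the upper triangle needs to be elicited from the judges; the lower triangle is implied by reciprocity.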
The third step, evaluating, involves calculating the weight of each attribute. From this step we obtain the overall priority for each alternative; the best choice is the alternative with the largest overall priority value.
The fourth step, selecting, measures how consistent the judgments have been relative to large samples of purely random judgments (Coyle, 2004). The consistency ratio (CR) is calculated as follows:
- Calculate the largest eigenvalue (λmax) of the pair-wise comparison matrix
- Compute the consistency index (μ) for each matrix of order n by the equation: μ = (λmax − n)/(n − 1)
- Finally, the CR of a pair-wise comparison matrix is the ratio of its consistency index, μ, to the corresponding Random Index (RI) value in Table 1, i.e., CR = μ/RI
RI is obtained from a large number of simulation runs and varies with the order of the matrix (Kannan, 2009). Table 1 shows the RI values for matrices of order 1-15, obtained by approximating random indices using 50,000 simulations (Saaty, 2009).
If the value of CR is equal to or less than 0.1, the evaluation within the matrix is acceptable. If CR is more than 0.1, the judgments within that matrix are inconsistent and the evaluation process should be reviewed, reconsidered and improved (Crowe et al., 1998).
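The eigenvector and consistency computations above can be sketched as follows (a Python illustration under the usual AHP definitions; the RI values are the commonly cited Saaty values for orders up to 10, consistent with Table 1):

```python
import numpy as np

# Commonly cited Random Index (RI) values for matrix orders 1-10 (Saaty);
# the paper's Table 1 extends these to order 15.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights_and_cr(A):
    """Return priority weights and consistency ratio for a pair-wise matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # The principal eigenvector gives the priority weights.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # index of the largest eigenvalue
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                      # normalise weights to sum to 1
    mu = (lam_max - n) / (n - 1)         # consistency index (mu in the text)
    cr = mu / RI[n] if RI[n] > 0 else 0.0
    return w, cr

# Hypothetical 3x3 judgment matrix on the Saaty 1-9 scale.
A = [[1, 3, 5],
     [1/3, 1, 3],
     [1/5, 1/3, 1]]
w, cr = ahp_weights_and_cr(A)
print(np.round(w, 3), round(cr, 3))  # CR below 0.1 -> judgments acceptable
```

For this hypothetical matrix the derived CR falls under the 0.1 threshold, so the judgments would be accepted.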
Saaty (2009) suggests that if CR is larger than desired, we need to do three things: (1) find the most inconsistent judgment in the matrix, (2) determine the range of values to which that judgment can be changed so that the inconsistency would be improved and (3) ask the judge to consider, if possible, changing the judgment to a reasonable value in that range. Many software packages are available to carry out these steps, e.g., Smart Choice, Super Decisions and even an AHP Excel template. These packages not only automate the four steps mentioned above but also accelerate the process and work with precision. For our analysis we used the Expert Choice software.
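The first of Saaty's three remedial steps, locating the most inconsistent judgment, can be sketched by checking how far each entry a_ij departs from the ratio w_i/w_j implied by the derived weights (a simple illustration with hypothetical data, not the exact algorithm used by Expert Choice):

```python
import numpy as np

def most_inconsistent_judgment(A, w):
    """Return the (i, j) entry of A that deviates most from the weight
    ratio w_i/w_j, i.e. the prime candidate for revision."""
    n = len(w)
    worst, worst_ij = 1.0, None          # None means A is perfectly consistent
    for i in range(n):
        for j in range(i + 1, n):        # A is reciprocal: upper triangle suffices
            eps = A[i, j] * w[j] / w[i]  # equals 1 for a perfectly consistent judgment
            dev = max(eps, 1.0 / eps)    # deviation factor, >= 1
            if dev > worst:
                worst, worst_ij = dev, (i, j)
    return worst_ij, worst

# Hypothetical matrix: a_14 = 2 conflicts with the other judgments, which
# imply weights roughly in the ratio 4:2:1:0.5 (so a_14 "should" be near 8).
A = np.array([[1,   2,   4,   2],
              [1/2, 1,   2,   4],
              [1/4, 1/2, 1,   2],
              [1/2, 1/4, 1/2, 1]])
w = np.prod(A, axis=1) ** (1 / len(A))  # geometric-mean weight approximation
w = w / w.sum()
print(most_inconsistent_judgment(A, w))  # flags entry (0, 3)
```

The judge would then be asked to reconsider the flagged comparison, after which the CR is recomputed.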
Benchmarking survey response analysis: Collecting primary data was the most tedious and time-consuming part of this project. Initially, we administered the survey online using Google Analytics but, owing to a poor response rate (less than five percent), were forced to distribute hard copies. The online survey targeted the opinions of policy makers of different business schools throughout the Kingdom; the survey link was emailed to the Deans and Vice-deans of business schools and reminders were followed up by telephone calls. Unfortunately, the response rate remained poor. At a later stage we decided to make personal visits to the nearby business schools in the Riyadh region.
The survey revealed some interesting results about benchmarking in the Kingdom. In what follows, we present the survey instrument, the demographics of the respondents and a statistical analysis of the collected data, followed by a discussion of the results and the conclusion.
Benchmarking survey and demographics of respondents: The survey instrument
was distributed to various business schools in the Riyadh region. The sample
and the characteristics of the sampled subjects are as follows.
Analysis of the sample: About 150 questionnaires were distributed to Deans, Vice-deans and heads of academic departments, in addition to faculty members, and follow-ups were made later. As a result, there were 52 respondents (a response rate of 34.67%) and 98 non-respondents. Each respondent answered only one questionnaire.
Analysis of the responses to the questionnaire: Among the 52 respondents to the questionnaire:

- Overall, 42.31% had experience of using benchmarking in their institutions
- About 53.85% had not used or participated in a benchmarking project at the institutional level
- There were 3.85% missing cases, i.e., respondents who preferred not to answer

[Fig. 1: Summary of institutions' experience with benchmarking]
[Fig. 2: Priority for adapting benchmarking as a tool]

Therefore, the affirmative response rate equals 44.01%, whereas the negative response rate represents 55.09% (Fig. 1).
The majority of respondents who said they did not use benchmarking assigned high priority to adapting benchmarking as a tool (Fig. 2).
Characteristics of the respondents: The 52 respondents are characterized in terms of their job title, administrative position, nationality, academic department and work experience (Fig. 3).
Analysis of data: The analysis of the data collected through the survey revealed the main purposes for benchmarking in the higher education environment, the process of benchmarking and the challenges associated with designing and implementing effective benchmarking regimes in institutions.
Table 2 shows the descriptive statistics for each of the ten identified purposes (McKinnon et al., 2000) for which benchmarking could be applied in the higher education environment.
In general, benchmarking has mostly been seen as building performance assessment into learning and teaching. Benchmarking was also seen as important for building organization-wide commitment to goals, research performance and strategic planning. General management improvement and strengthening service support links were viewed as only moderately important objectives for undertaking benchmarking. Areas such as keeping ahead of the competition, improvement of particular functional areas and staff development were rated low as uses for benchmarking. Interestingly, while the respondents agreed on the major points, some still advocated that replication can also work when benchmarking.
Process of benchmarking: To gauge respondents' perception of the benchmarking process, some questions were made specific to the process part of benchmarking. Respondents across job titles agreed that benchmarking is a continuous process. They also opined that benchmarking could be against other higher education institutions with similar characteristics, in the Kingdom or overseas, and the majority agreed that benchmarking can be against other business firms/organizations. As a matter of fact, benchmarking should be against self-determined goals based on the circumstances and life stage of the institution at the time.
Major challenges for benchmarking in business schools: The following are the major challenges raised by the respondents in reply to the open-ended question: "What are the challenges associated with designing and implementing effective benchmarking regimes in your institutions?" The majority of respondents were of the view that the most challenging task in benchmarking is the selection of partner institutions. Beyond this, the responses can be summed up as follows:
- Lack of manpower and resources
- Lack of time, earmarked resources and availability of information
- Gap between the benchmarked institutions and our students
- Designing and implementing effective benchmarking regimes from the faculty perspective, since faculty lack orientation in the philosophy of benchmarking
- Finding the right benchmarking institutions/partner and then adapting their way of handling issues
- Lack of proper direction and of availability and accessibility of various data
- Convincing the management/higher authorities about the gap areas for benchmarking
- Faculty and staff involvement
- Lack of conceptual clarity and ad hoc usage of benchmarking
- Trying to benchmark only on secondary data, without partnerships or collaborations to obtain primary data, which is confidential
- Transparency and integrity while sharing data/information

[Fig. 3: Characteristics of the respondents: (a) Administrative title, (b) Job title, (c) Nationality, (d) Department and (e) Work experience]
[Table 2: Descriptive statistics]
Thus, from the above discussion we can conclude that choosing the right partner is the biggest challenge perceived by the majority of respondents, the decision makers in benchmarking. Moreover, the comprehensive literature survey discussed earlier found no clear guideline on a scientific approach to the selection of a benchmarking partner. Therefore, an attempt is made in the subsequent section to develop a scientific approach to benchmarking partner selection.
ANALYTIC HIERARCHY PROCESS (AHP) MODEL
Setting up the AHP model in Expert Choice: Applying the first step of the AHP model discussed in the methodology section above, we define the objective as identifying the ideal benchmarking partner. The objective is then decomposed into seven criteria on which a benchmarking partner is assessed. The criteria are structured into a hierarchy representing the relationships between the identified factors, as shown in Fig. 4, along with the alternatives, i.e., the candidates for benchmarking.
Choosing the right criteria was a cumbersome task. We relied not only on the literature but also on the views of the major stakeholders, which were given due importance. To obtain the views of the major stakeholders, i.e., faculty members and the managers (Dean, Vice Deans and HODs), we organized a workshop in which almost all participated voluntarily. The first half of the workshop was devoted to educating the participants about benchmarking methodology in higher education and the second half to brainstorming criteria for selecting a benchmarking partner. The workshop produced the seven criteria for identifying the ideal benchmarking partner depicted in Table 3.
Next we needed to choose the sample of candidates for benchmarking. We populated the sample with guidance from the strategic plan of our university, i.e., Salman bin Abdul-Aziz University. Moreover, we used two criteria, (1) the Academic Ranking of World Universities and (2) accreditation, to shortlist the candidates. The final list had seven universities out of the ten as possible candidates for benchmarking, as shown in Table 4.
Determining weights from the pair-wise comparison matrix: The initial set of seven indicators identified in step 1 was evaluated by seven experts representing six academic departments, including the top management, all belonging to the authors' own college, i.e., CBAK. Before filling out the pair-wise comparisons, participants were given an orientation covering basic information about AHP and an extensive description of how to use the Saaty scale for pair-wise comparison. Participants were then asked for their judgments, which the researchers fed directly into Expert Choice. A sample pair-wise comparison matrix is shown in Fig. 5.
[Fig. 4: AHP benchmarking model in Expert Choice]
[Table 3: Benchmarking criteria with explanation]
[Table 4: Prospective benchmarking partners/alternatives in the AHP model]
[Fig. 5: Sample of the pair-wise comparison matrix]
AHP uses linear algebra and graph theory to calculate the relative importance weighting of the selection criteria. Once the pair-wise comparison is done, Expert Choice calculates the Consistency Ratio (CR) for the criteria. Saaty's rule of thumb is to accept only judgment matrices with CR < 0.1.
Initially our combined matrix had an inconsistency of 0.11, so, to meet the strict limit of 0.1, we revisited the 1st and 2nd judgments, i.e., those with the highest inconsistency, and finally arrived at CR = 0.09. Figure 6 depicts the derived priorities for the selection of the benchmarking partner.
The majority of the experts gave the highest priority to the criterion "Have superior performance in the areas to be benchmarked", though how to judge the performance of the prospective partners is again a matter of debate.
[Ratings of alternatives in the AHP model]
[Synthesized results for priorities with respect to the goal]
[Combined rating of alternatives]
This raises a series of questions that need a scientific approach to answer: Who is doing it the best? How do they do it? How can we adapt what they do to our institution? How can we be better than the best? To address these questions we build a second-stage AHP model that answers them by rating the alternatives (the candidates to be benchmarked). In other words, we synthesize by combining the ratings to find the ideal benchmarking partner and conduct a sensitivity analysis of the results.
Rating alternatives (candidates to be benchmarked): The combined ratings of alternatives are presented in Table 5 and Fig. 7.
On synthesis, CIM-KFUPM and BS-NUS emerge as the benchmarks, with combined priorities of 75.7 and 71.5%, respectively, followed by CSB-UA with 64.4% and CSB-IU with 61.9% when a threshold of 60% is fixed.
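The synthesis that produces such combined priorities is a weighted sum of each alternative's ratings under every criterion. A minimal sketch with purely hypothetical weights and ratings (not the study's actual figures):

```python
# Hypothetical criterion weights (summing to 1) and alternative ratings
# on a 0-1 scale; names and numbers are illustrative only.
criteria_weights = [0.30, 0.25, 0.20, 0.15, 0.10]
ratings = {
    "Candidate A": [0.9, 0.8, 0.7, 0.6, 0.9],
    "Candidate B": [0.6, 0.9, 0.8, 0.8, 0.5],
}

def synthesize(weights, ratings):
    """Combined priority of each alternative as the weighted sum of its ratings."""
    return {name: sum(w * r for w, r in zip(weights, rs))
            for name, rs in ratings.items()}

scores = synthesize(criteria_weights, ratings)
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1%}")
```

Alternatives whose combined priority exceeds a chosen threshold (60% in this study) are retained as candidate benchmarks.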
To see whether relative changes in the criteria weights cause any change in the ranks of the benchmarking partners, we performed a sensitivity analysis. After a series of sensitivity analyses, CIM-KFUPM and BS-NUS still emerged as winners, as there was no change in their ranking. Figures 8 and 9(a, b) show the performance and dynamic sensitivity graphs pre and post sensitivity, respectively.
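A simple form of this sensitivity check can be sketched as follows: one criterion's weight is changed, the remaining weights are rescaled proportionally and the alternatives are re-ranked (hypothetical data; Expert Choice's graphical sensitivity tools are far richer):

```python
def rerank(weights, ratings, idx, new_w):
    """Re-rank alternatives after setting criterion idx's weight to new_w
    and rescaling the remaining weights so the total stays 1."""
    old_rest = sum(w for i, w in enumerate(weights) if i != idx)
    adjusted = [new_w if i == idx else w * (1.0 - new_w) / old_rest
                for i, w in enumerate(weights)]
    scores = {name: sum(w * r for w, r in zip(adjusted, rs))
              for name, rs in ratings.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical weights and ratings for three candidates over three criteria.
weights = [0.5, 0.3, 0.2]
ratings = {"X": [0.9, 0.4, 0.5], "Y": [0.5, 0.9, 0.6], "Z": [0.4, 0.5, 0.9]}

print(rerank(weights, ratings, idx=0, new_w=0.5))   # baseline order
print(rerank(weights, ratings, idx=0, new_w=0.1))   # a rank reversal appears
```

If the top-ranked alternatives keep their positions across such perturbations, as CIM-KFUPM and BS-NUS did here, the selection is considered robust.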
Thus, the selection of ideal benchmarking partners is critical to the success of any benchmarking effort; the multitude of criteria makes the selection a difficult and complex task.
[Fig. 8: (a) Performance sensitivity (preliminary) and (b) Dynamic sensitivity]
[Fig. 9: (a) Performance sensitivity (post) and (b) Dynamic sensitivity]
An analytical model based on AHP was developed for the selection of ideal HEI benchmarking partners.
The current study conducted a benchmarking survey and used the Analytic Hierarchy Process (AHP) technique to present a benchmarking framework that supports decision makers/managers in adapting best practices for quality improvement at CBAK.
The benchmarking survey revealed that the majority of respondents had not used or participated in any benchmarking project, yet they considered adapting the benchmarking tool their topmost priority. The majority of respondents also held that the most challenging task is the selection of benchmarking partner institutions.
Using a scientific approach, viz. the AHP model, the current study identified CIM-KFUPM and BS-NUS as the ideal benchmarking partners under all circumstances, based on pre- and post-performance sensitivity and dynamic sensitivity, respectively.
With the proposed benchmarking framework, CBAK can readily understand its strengths and weaknesses compared with the seven colleges chosen for this study; it can identify good practices and benchmark against them to remedy its weaknesses.
Indeed, gathering information from these partners is not an easy task, although some information can be collected from the public domain without contacting them directly. We recommend that, in gathering benchmarking data, CBAK forge partnerships with the two ideal benchmarking partners, viz. CIM-KFUPM and BS-NUS, in an ethical and legal manner.
Furthermore, CBAK need not simply copy the best practices learnt from its partners; it can adapt them, go beyond that learning and use innovative means to create what is most relevant to its operational strategy. In this way it can instill a culture of continuous organizational learning, a process that provides continuous development and innovation on the way to becoming best in class.
A suggestion for future research is the use of the proposed framework to identify benchmarking partners for the Key Performance Indicators identified by NCAAA.
We are grateful to the Deanship of Scientific Research, Salman bin Abdulaziz University, KSA, for providing financial assistance to our project (Grant No: 1432/?/62).