
Journal of Applied Sciences

Year: 2008 | Volume: 8 | Issue: 24 | Page No.: 4603-4609
DOI: 10.3923/jas.2008.4603.4609
A Heuristic Methodology for Multi-Criteria Evaluation of Web-Based E-Learning Systems Based on User Satisfaction
I. Mahdavi, H. Fazlollahtabar, A. Heidarzade, N. Mahdavi-Amiri and Y.I. Rooshan

Abstract: Web-based E-Learning Systems (WELSs) have emerged as new means of skill training and knowledge acquisition, encouraging both academia and industry to invest resources in the adoption of these systems. Traditionally, most pre- and post-adoption tasks related to evaluation are carried out from the viewpoint of technology. Since users have been widely recognized as a key group of stakeholders influencing the adoption of information systems, their attitudes toward these systems are considered pivotal. Therefore, based on the theory of multi-criteria decision making and the research results concerning user satisfaction in the fields of human-computer interaction and information systems, a heuristic multi-criteria methodology using the learner satisfaction perspective is proposed to support evaluation-based activities occurring at the pre- and post-adoption phases of the WELS life cycle. In addition, using the methodology, we empirically investigate learners' perceptions of the relative importance of the decision criteria as beneficial tools for evaluation in management.


How to cite this article
I. Mahdavi, H. Fazlollahtabar, A. Heidarzade, N. Mahdavi-Amiri and Y.I. Rooshan, 2008. A Heuristic Methodology for Multi-Criteria Evaluation of Web-Based E-Learning Systems Based on User Satisfaction. Journal of Applied Sciences, 8: 4603-4609.

Keywords: learner satisfaction, decision making and E-learning

INTRODUCTION

The capability and flexibility of Web-based E-Learning Systems (WELSs), having been demonstrated in both training and education, have resulted in their adoption by academia as well as industry. Since the commercial application package (or commercial off-the-shelf) strategy of system development is so widespread, the proliferation of WELS applications has caused confusion for potential adopters, who must make selective decisions from among candidate products or solutions.

At the same time, organizations with adopted systems are faced with issues arising from the post-adoption phase. Thus, a prior evaluation is required to address these questions and, of course, a sound methodology is the key to an effective evaluation.

Conventional approaches for evaluating an Information System (IS) have leaned towards the standpoints of technical personnel. In contrast, a WELS places particular stress on aspects such as the content and the ways in which it is presented, demonstrating it to be a highly user-oriented system. Since users are widely recognized as the key stakeholders in any IS or IS service (Jiang et al., 2001), their attitudes toward the system are pivotal and should be taken into account. This is evidenced by the fact that user satisfaction is often seen as a key antecedent in predicting the success of a particular IS, or in anticipating a user's reuse behavior (Lin and Wang, 2006; Lin et al., 2005). Hence, in this study we apply the constructs of user satisfaction to evaluate a WELS. In the context of a WELS, however, there is a special group of users, the learners (that is, the e-learners), who hold a unique view regarding satisfaction (Wang, 2003). This means that traditional IS measures for assessing user satisfaction, and measures for assessing learner satisfaction in the context of classroom teaching, are not suitable for web-based e-learning.

The present purpose is twofold. First, we propose a step-based, heuristic multi-criteria methodology from the perspective of e-learner satisfaction in order to support important evaluation-related tasks (e.g., selection from candidate products or solutions and maintenance), which are carried out at the pre- or post-adoption phase of the WELS life cycle. This includes defining the constituent steps and recommending tools and techniques that can be used in each step. Second, by following the proposed methodology, we investigate how e-learners perceive the relative importance of the decision criteria and their sub-criteria in order to construct a preference structure, to serve as a key input in the decision-making process.

THEORETICAL BACKGROUND

Web-based e-learning system: E-learning refers to the use of electronic devices for learning, including the delivery of content via electronic media such as the Internet/Intranet/Extranet, audio or video tape, satellite broadcast, interactive TV, CD-ROM and so on. This type of learning moves the traditional instruction paradigm to a learning paradigm (Jönsson, 2005), thereby relinquishing much control over planning and selection to the learners. In addition, it offers the following advantages to learners: cost-effectiveness, timely content and access flexibility (Hong et al., 2003).

E-learning applications may appear with different forms of designations such as web-based learning, virtual classrooms and digital collaboration.

The present study is focused on web-based e-learning, which uses the Internet (or Intranet/Extranet) and web technologies. This type of e-learning places a greater emphasis on enabling or facilitating the role technology plays in data search and transmission, interactivity and personalization. Concerning the design and construction of a WELS, the trend is towards incorporating two different technologies into an integrated environment: webpage-based computer-assisted instruction, which is basically tutorial, drill and practice, and learning networks (Hiltz and Turoff, 2002), which include extensive learner-learner and instructor-learner communications and interactions. Learners in this environment can thus access remote resources, as well as interact with instructors and other learners to satisfy their requirements.

From user satisfaction to e-learner satisfaction: Satisfaction is the pleasure or contentment that a person feels when she/he does something or gets something that she/he wanted or needed to do or get. When used in research, satisfaction is usually conceptualized as the aggregate of a person's feelings or attitudes towards the many factors that affect a certain situation. In the field of human-computer interaction, user satisfaction is usually visualized as the expression of affections gained from an interaction (Mahmood et al., 2000). This means that user satisfaction is the subjective sum of interactive experiences influenced by many affective components in the interaction (Lindgaard and Dudek, 2003). In the field of IS, the concept of user satisfaction is usually used to represent the degree to which users believe the IS they are using conforms to their requirements.

In the past, many scholars have attempted to measure user satisfaction. The results of their efforts reveal that user satisfaction is a complex construct and its substance varies with the nature of the experience or case.

In the field of human-computer interaction, user satisfaction is traditionally measured in terms of visual appeal, productivity and usability (Hassenzahl et al., 2001; Lindgaard and Dudek, 2003). Since the early 1980s, many scholars in the field of IS began to conduct systematic studies to develop a comprehensive and reasonable set of factors to measure user satisfaction.

The prevalence of web-based e-learning applications stimulates the development of e-learner satisfaction scales directly adapted from teaching quality scales in the field of educational psychology, or from user satisfaction scales in the fields of human-computer interaction and IS. However, applying the achievements of any single field is deemed insufficient, because it can omit critical aspects of learner satisfaction with a WELS. Based on scales of students' evaluation of teaching effectiveness and of user satisfaction, Wang (2003) conducted an exploratory study directed at e-learners. The results of his study showed that a total of 17 items applicable to measuring e-learner satisfaction could be classified into the following dimensions: content, personalization, learning community and learner interface.

Multi-criteria decision making: Multi-Criteria Decision Making (MCDM), which deals mainly with problems of evaluation or selection, is a rapidly developing area in operational research and management science. A complete MCDM process involves the following basic elements: criterion set, preference structure, alternative set and performance values. While the final decision will be made based on the performance of the alternatives, a well-defined criterion set and preference structure are key influential factors and should be prepared in advance. In order to obtain the criterion set and preference structure, a hierarchical analysis must be carried out. Such an analysis helps decision makers to preliminarily derive an objective hierarchy structure demonstrating the relationships between the goal and the decision criteria. The goal of the hierarchy may be a direction perceived as better by the deciding organization. On the other hand, the criteria represent the standards for judging, which should be complete, operational, decomposable, non-redundant and minimal in size. Based on this hierarchy structure, decision makers can set about deriving the relative importance of the criteria and then assessing the alternatives against each criterion. By integrating the assessments of the alternatives with the relative importance of the criteria, an organization can select the alternative which best meets the requirements for accomplishing its goal.

The Analytical Hierarchy Process (AHP) is used to address complex decision-making problems. Fundamentally, AHP works by developing priorities for goals in order to value different alternatives. This multi-criteria method has become very popular among operational researchers and decision scientists.

Basically, AHP fits our purposes better because it has methodological tools for (1) structuring the decision problem, (2) weighting criteria/goals and alternatives and (3) analyzing judgment consistency. On the negative side, it requires a larger number of inputs than other discrete multi-criteria methods. Nevertheless, these inputs can be reduced by optimizing the hierarchy.

As mentioned before, the AHP method has tools for consistency analysis. The most widely used tool is the Consistency Ratio (CR), which tests the consistency of every decision matrix A. A totally consistent matrix A has a CR equal to 0; nevertheless, a CR of less than 0.1 is acceptable. In the case of group decision making, the most widely used tool for aggregating the expert judgments is the geometric mean over the numeric entries of the paired comparisons aij (Saaty and Vargas, 2001). Sometimes there is a large number of alternatives to be assessed; in these cases, absolute measurement can be applied to rank the alternatives.
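As an illustrative sketch (the matrices below are made-up examples, not data from this study), the geometric-mean aggregation of group judgments can be computed as follows; the element-wise geometric mean of reciprocal pairwise-comparison matrices is again reciprocal:

```python
import numpy as np

# Element-wise geometric mean of several experts' pairwise-comparison
# matrices, the aggregation rule cited above (Saaty and Vargas, 2001).
def aggregate_group(matrices):
    stacked = np.stack(matrices)          # shape: (experts, n, n)
    return np.exp(np.log(stacked).mean(axis=0))

# Three hypothetical 2 x 2 expert judgments on a single criterion pair
A1 = np.array([[1.0, 3.0], [1/3, 1.0]])
A2 = np.array([[1.0, 5.0], [1/5, 1.0]])
A3 = np.array([[1.0, 3.0], [1/3, 1.0]])
G = aggregate_group([A1, A2, A3])
print(G.round(3))                         # G[0, 1] = (3*5*3)**(1/3)
```

The aggregate preserves the reciprocal property G[i, j]·G[j, i] = 1, so it can be fed directly into the usual AHP machinery.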

Here, we first apply the AHP method, considered one of the main techniques for Multi-Attribute Decision Making (MADM) problems. It can be used to evaluate an alternative from a set of alternatives characterized in terms of their attributes. It is based on a simple intuitive concept, yet it enables a systematic and consistent aggregation of attributes.

Entropy is a main technique in physics, sociology and information theory, indicating the uncertainty in the expected content of the information. In other words, entropy is a criterion expressing the amount of uncertainty in a discrete probability distribution (Pi). A short description of this uncertainty measure follows.

Initially, define E as:

E = -k Σi Pi ln(Pi), i = 1,...,m

where, k is a positive constant chosen to guarantee 0 ≤ E ≤ 1 (for m outcomes, k = 1/ln(m)). E is calculated from the probability distribution Pi; when the Pi are all equal, i.e., Pi = 1/m for all i, E attains its maximum value:

Emax = -k Σi (1/m) ln(1/m) = k ln(m) = 1

TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) is a classical MCDM method. Its basic principle is that the chosen alternative should have the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution. TOPSIS introduces two reference points, but it does not consider the relative importance of the distances from these points.

THE METHODOLOGY

Figure 1 shows a system life cycle under a commercial application package implementation strategy. The main characteristic of this strategy, compared with an in-house development one, is that the organization communicates its system requirements, in the form of either a request for proposal or a request for quotation, to candidate WELS vendors.

Fig. 1: A system life cycle

Afterwards, the vendors submit their products or solutions as alternatives for evaluation. The proposed methodology aims at supporting the tasks taking place in the decision analysis phase or in the operation and maintenance phase. The explanations, including the tasks and the tools or techniques applicable in each step, are as follows.

The start-up step defines the problems and the goal. The problems may be the ones mentioned earlier, as experienced by an organization in the decision analysis or operation and maintenance phase when the commercial application package implementation strategy is used. To solve them, evaluation is necessary; therefore, the goal is defined as the evaluation of WELS alternatives. After defining the problems and the goal, the next step involves the development of the hierarchy structure. In this step, a hierarchical analysis based on e-learner satisfaction is carried out. Literature review, systematic analysis, empirical investigation, brainstorming and interpretive structural modeling are feasible methods.

The hierarchy structure used in this study for evaluating WELS alternatives was adapted from Wang's (2003) empirical work, because what we refer to as learner satisfaction in the WELS context is conceptually close to his e-learner satisfaction. However, the considerable number of Wang's (2003) measurement items would violate the principle of minimal size in the criterion set and complicate the MCDM process, which inspired us to examine those items further. As a result, a total of four items were eliminated and the remainder were transformed into decision criteria. As shown in Fig. 2, we ended up with four criteria, comprising a total of 13 sub-criteria, in the hierarchy structure.

Based on the hierarchy structure, the third step, deriving the preference structure, explores learners' perceptions of the relative importance of the criteria and their sub-criteria. This may help answer what it is that users regard highly in terms of learner satisfaction in the context of a WELS. We apply a heuristic method proposed as a combination of AHP (Analytical Hierarchy Process), Entropy and TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution), which we call the AET method for short. The methodology of the proposed model is shown in Fig. 3.

AET algorithm
Notations and definitions:

n = No. of criteria
m = No. of vendors
p = Index for vendors, p = 1,...,m

Fig. 2: A hierarchy structure for evaluating WELS

Fig. 3: The methodology of the proposed model

j = Index for sub-criteria, j = 1,...,13
q = Index for criteria, q = 1,...,n = 4
W'p,j = The weight of the pth vendor with respect to the jth sub-criterion
Wj,q = The weight of the jth sub-criterion with respect to the qth criterion
Rp,q = The weight of the pth vendor with respect to the qth criterion
wq = The weight of the qth criterion

The proposed algorithm, based on the AHP, Entropy and TOPSIS approaches (the AET algorithm), is summarized in the following steps.

Algorithm AET: A combined AHP, Entropy and TOPSIS Approach.

Step 1: Define the decision problem and the goal.

Step 2: Structure the hierarchy from the top through the intermediate to the lowest level.

Step 3: Construct the vendor-criteria matrix by the AHP, as detailed in steps 4 to 8 below.

Step 4: Construct matrices of pair-wise comparisons for each of the lower levels with one matrix for each element in the level immediately above by using a relative scale measurement. The decision maker has the option of expressing his or her intensity of preference on a nine-point scale. If two criteria are of equal importance, a value of 1 is given in the comparison, while a 9 indicates an absolute importance of one criterion over the other. Table 1 shows the measurement scale.

Step 5: Compute the principal eigenvector to obtain the relative weights of the criteria; the sum is taken over all weighted eigenvector entries corresponding to those in the next lower level of the hierarchy.

Pair-wise comparison data can be analyzed using the eigenvalue technique: from the pair-wise comparisons, the parameters are estimated, and the eigenvector corresponding to the largest eigenvalue of matrix A constitutes the estimate of the relative importance of the attributes.

Step 6: Construct the consistency matrix and perform the consequence weights analysis, as follows:

If matrix A is consistent (that is, aij = aik akj for all i, j, k = 1, 2,...,n), then A contains no errors (the weights are already known) and we have:

aij = wi/wj, i, j = 1, 2,...,n, so that Aw = nw

where, w = (w1,...,wn) is the vector of weights.
Table 1: The criteria preferences with their numerical values

Table 2: The vendor-sub criteria matrix

Table 3: The sub criteria-criteria matrix

Table 4: The vendor-criteria matrix

If the pair-wise comparisons include no inconsistencies, then λmax = n. The more consistent the comparisons are, the closer the computed value of λmax is to n. A Consistency Index (CI), which measures the inconsistencies of the pair-wise comparisons, is set to be:

CI = (λmax - n)/(n - 1)

and a Consistency Ratio (CR) is set to be:

CR = CI/RI

where, n is the number of columns in A and RI is the random index, being the average of the CI obtained from a large number of randomly generated matrices.

RI depends on the order of the matrix and a CR value of 10% or less is considered acceptable.
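Steps 5 and 6 can be sketched numerically as follows (an illustrative example, not the study's data; RI = 0.90 is Saaty's published random index for a 4 x 4 matrix):

```python
import numpy as np

def ahp_weights(A, ri=0.90):
    """Priority weights and consistency ratio of a pairwise matrix A."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # index of lambda_max
    lam_max = eigvals[k].real
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                        # normalized weight vector
    ci = (lam_max - n) / (n - 1)           # Consistency Index
    return w, ci / ri                      # weights and Consistency Ratio

# A made-up 4 x 4 comparison matrix over the four criteria
A = np.array([[1.0, 2.0, 4.0, 3.0],
              [1/2, 1.0, 2.0, 2.0],
              [1/4, 1/2, 1.0, 1.0],
              [1/3, 1/2, 1.0, 1.0]])
w, cr = ahp_weights(A)
print(w.round(3), round(cr, 4))            # CR is well below 0.1 here
```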

Step 7: Configure the vendor-sub-criteria matrix and the sub-criteria-criteria matrix as shown in Tables 2 and 3. These weights are determined by the decision makers.

Step 8: The vendor-criteria matrix is formed as shown in Table 4, by:

Rp,q = Σj W'p,j Wj,q

where, the sum is taken over the sub-criteria j belonging to the qth criterion.

Step 9: Calculate the weights of criteria using the Entropy method.

Step 10: Use the normalized decision matrix gained by the AHP, as shown in Table 4, for the Rp,q and calculate Eq:

Eq = -k Σp Rp,q ln(Rp,q), q = 1,...,n

so that, with k = 1/ln(m), we have 0 ≤ Eq ≤ 1.

Step 11: Set the uncertainty in decision making, or deviation degree (dq), for the qth criterion as follows:

dq = 1 - Eq, q = 1,...,n

Step 12: Compute the weights (wq) for the criteria as follows:

wq = dq/Σq dq, q = 1,...,n
Step 13: Obtain the weights of vendors using TOPSIS. Based on the vendor-criteria matrix (Table 4) Rp,q and a diagonal matrix containing the weights of the criteria from step 12, a new matrix V is configured as follows:

V = RW

where, W = diag(w1, w2, w3, w4). Note that the qth column of V is the weighted qth column of R, i.e., Vq = wq Rq, 1 ≤ q ≤ 4.

Step 14: Determine the Positive Ideal Solution (PIS) and the Negative Ideal Solution (NIS) by:

A+ = {v1+,...,v4+} = {(maxp vp,q | q ∈ J), (minp vp,q | q ∈ J')}
A- = {v1-,...,v4-} = {(minp vp,q | q ∈ J), (maxp vp,q | q ∈ J')}

where, J is associated with the benefit criteria, J' is associated with the cost criteria, vq+ is the best solution of the vendors with respect to the qth criterion and vq- is the worst solution of the vendors with respect to the qth criterion.

Step 15: Compute the separation measures, using the Euclidean distance. The separation of each vendor from the positive ideal solution is given by:

Sp+ = [Σq (vp,q - vq+)²]^(1/2), p = 1,...,m

Similarly, compute the separation from the negative ideal solution by:

Sp- = [Σq (vp,q - vq-)²]^(1/2), p = 1,...,m

Step 16: Compute the relative closeness to the ideal solution. The relative closeness of the pth vendor with respect to the PIS is defined by:

Cp = Sp-/(Sp+ + Sp-), p = 1,...,m

Note that since Sp+ ≥ 0 and Sp- ≥ 0, then clearly 0 ≤ Cp ≤ 1.

Step 17: Rank the preference order. Using this index, the alternatives are ranked in decreasing order of Cp; the vendor with the largest Cp is the most preferred.
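Steps 13 to 17 can be sketched as follows, assuming for simplicity that all four criteria are benefit criteria (the matrix R and the weights w are made-up examples):

```python
import numpy as np

def topsis(R, w):
    """Relative closeness C_p of each vendor to the ideal solution."""
    V = R * w                                        # step 13: V = RW
    v_pos = V.max(axis=0)                            # step 14: PIS (benefit criteria)
    v_neg = V.min(axis=0)                            #          NIS
    s_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))  # step 15: Euclidean separations
    s_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))
    return s_neg / (s_pos + s_neg)                   # step 16: closeness C_p

R = np.array([[0.5, 0.3, 0.2, 0.6],      # hypothetical vendor-criteria matrix
              [0.3, 0.4, 0.5, 0.3],
              [0.2, 0.3, 0.3, 0.1]])
w = np.array([0.4, 0.2, 0.1, 0.3])       # hypothetical criterion weights
C = topsis(R, w)
ranking = np.argsort(-C)                 # step 17: decreasing order of C_p
print(C.round(3), ranking)
```

For cost criteria, the max/min in step 14 would simply be swapped for the corresponding columns.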

After the preference structure is obtained, the center of MCDM activities shifts to the evaluation of the WELS alternatives. For the decision analysis phase, organizations can gather the alternatives and then evaluate the alternatives against the criteria. In this evaluation, two methods, rating-based and ranking-based, are recommended.

The rating-based method involves assessing a particular alternative by rating it under each criterion or sub-criterion. The overall performance of this alternative can be acquired by summing up its weighted performances under each criterion or sub-criterion and the deciding organization can thus select from alternatives according to the overall performance of each alternative.
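A minimal sketch of the rating-based method (the weights and ratings below are made-up examples, not study data):

```python
# Overall performance of one alternative as the weighted sum of its
# ratings under each criterion (hypothetical ratings on a 1-5 scale).
weights = [0.35, 0.25, 0.25, 0.15]     # preference structure (sums to 1)
ratings = [4, 3, 5, 2]                 # ratings of the alternative
overall = sum(w * r for w, r in zip(weights, ratings))
print(round(overall, 2))               # 3.7
```

The organization would compute this score for each alternative and select the one with the highest overall performance.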

The ranking-based method involves ranking the alternatives by their key attributes. Traditional ranking procedures usually used in the social sciences include methods such as rank order, paired comparisons, constant stimuli and successive categories. Here, the method of paired comparisons is recommended because it can be integrated with the preference structure (weights of criteria and sub-criteria) to facilitate the overall assessment of the alternatives.

Finally, in the operation and maintenance phase, the existing system is assessed against the criteria or sub-criteria. Under these circumstances, only the rating-based method is applicable. Under a particular criterion or sub-criterion, a performance value below the pre-defined threshold indicates the need for improvement or enhancement. If there are many such indications and organizational resources are so limited that only partial maintenance efforts are possible, then the areas with the greatest weighted distances from the perfection level have priority.
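The maintenance prioritization described above can be sketched as follows (the dimension names follow Fig. 2; the weights, scores and the perfection level of 5 are made-up examples):

```python
# Rank maintenance areas by their weighted distance from the perfection level.
PERFECT = 5.0
weights = {"content": 0.35, "personalization": 0.25,
           "learning community": 0.25, "learner interface": 0.15}
scores = {"content": 4.2, "personalization": 2.8,
          "learning community": 3.1, "learner interface": 2.5}
gaps = {area: weights[area] * (PERFECT - scores[area]) for area in weights}
priority = sorted(gaps, key=gaps.get, reverse=True)
print(priority)                         # largest weighted gap first
```

Note that a heavily weighted criterion with a small shortfall ("content" here) can rank below a lighter criterion with a large shortfall, which is the intended behavior of the weighted-distance rule.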

CONCLUSIONS

The use of the e-learner-satisfaction perspective and of a large-sample, learner-based AET contributes to adapting the conventional MCDM paradigm to problems that are highly user-oriented. Our methodology supplies management in both education and industry with not only a less complex but also a more appropriate and flexible approach to effectively analyze their currently deployed Web-based E-Learning Systems (WELSs). It can also support their selection of appropriate WELS products, solutions or modules by assessing the alternatives available for the adoption of such technological innovations. At the same time, it allows the technical personnel of WELS vendors (e.g., the analysts or designers) to gain a better understanding of learners' preferences towards system features before a WELS is implemented, pinpointing any necessary improvements or enhancements. This allows for achieving a higher level of e-learner satisfaction while elevating the acceptance level of the system and its continued use.

ACKNOWLEDGMENTS

The first and third authors thank Mazandaran University of Science and Technology, the second author thanks the Mazandaran Telecommunication Research Center, the fourth author thanks the Research Center of Sharif University of Technology and the fifth author thanks Mazandaran University for supporting this research with grant No. 85635.

REFERENCES

  • Hassenzahl, M., A. Beau and M. Burmester, 2001. Engineering joy. IEEE Software, 18: 70-76.

  • Hiltz, S.R. and M. Turoff, 2002. What makes learning networks effective? Commun. ACM, 45: 56-59.

  • Hong, K.S., K.W. Lai and D. Holton, 2003. Students' satisfaction and perceived learning with a web-based course. Educ. Technol. Soc., 6: 116-124.

  • Jiang, J.J., G. Klein, J. Roan and J.T.M. Lin, 2001. IS service performance: Self-perceptions and user perceptions. Inform. Manage., 38: 499-506.

  • Jönsson, B.A., 2005. A case study of successful e-learning: A web-based distance course in medical physics held for school teachers of the upper secondary level. Med. Eng. Phys., 27: 571-581.

  • Lin, C.S., S. Wu and R.J. Tsai, 2005. Integrating perceived playfulness into expectation-confirmation model for web portal context. Inform. Manage., 42: 683-693.

  • Lin, H.H. and Y.S. Wang, 2006. An examination of the determinants of customer loyalty in mobile commerce contexts. Inform. Manage., 43: 271-282.

  • Lindgaard, G. and C. Dudek, 2003. What is this evasive beast we call user satisfaction? Interact. Comput., 15: 429-452.

  • Mahmood, M.A., J.M. Burn, L.A. Gemoets and C. Jacquez, 2000. Variables affecting information technology end-user satisfaction: A meta-analysis of the empirical literature. Int. J. Human Comput. Stud., 52: 751-771.

  • Saaty, T.L. and L.G. Vargas, 2001. Models, Methods, Concepts and Applications of the Analytic Hierarchy Process. 1st Edn., Kluwer Academic Publishers, Boston, MA, ISBN 0792372670.

  • Wang, Y.S., 2003. Assessment of learner satisfaction with asynchronous electronic learning systems. Inform. Manage., 41: 75-86.
