
Information Technology Journal

Year: 2016 | Volume: 15 | Issue: 1 | Page No.: 26-30
DOI: 10.3923/itj.2016.26.30
Using a Discount Usability Engineering Approach to Assess Public Web-based Systems in Saudi Arabia
Eman Ahmed Al-taisan, Ghadah Salman Alduhailan and Majed Aadi Alshamari

Abstract: Public web-based systems are gradually replacing conventional paper-based public services, which can significantly reduce costs and save time for both the government and the public. This study reviews the current literature on the discount usability approach and assesses the usability of public web-based systems in Saudi Arabia based on the discount usability engineering approach. The results showed that usability testing is a more powerful method than heuristic evaluation and cognitive walkthrough, as it identified more usability problems than the other methods. The heuristic evaluation did not perform as well as expected and failed to reveal a number of problems linked to usability principles. Usability testing also revealed more problems related to usability principles such as operation visibility, synthesizability and recoverability.


How to cite this article
Eman Ahmed Al-taisan, Ghadah Salman Alduhailan and Majed Aadi Alshamari, 2016. Using a Discount Usability Engineering Approach to Assess Public Web-based Systems in Saudi Arabia. Information Technology Journal, 15: 26-30.

Keywords: Discount usability engineering method, heuristic evaluation, usability testing, web-based evaluation, conventional paper-based service, cost, operation visibility and recoverability

INTRODUCTION

People expect technology to be helpful and these expectations are rising considerably with the emergence of new technologies and services. Both the public and private sectors are therefore attempting not only to computerize their processes but also to automate them and make them available online. Currently, the e-services of civil affairs in Saudi Arabia are among the most important services for its citizens and residents. Because a number of these e-services are mandatory, users must rely on them for various important transactions. The targeted citizens and residents differ in culture, level of education, gender and computer skills, yet they all have to use the same system. The system should therefore offer a high level of usability so that users can achieve maximum performance and efficiency. The main objective of this study is to assess the usability of public web-based systems in Saudi Arabia based on the discount usability engineering method.

The discount usability engineering approach comprises three usability evaluation methods: heuristic evaluation, cognitive walkthrough and usability testing (Nielsen, 1994). Heuristic evaluation requires three to five experts to inspect a targeted system against a set of defined guidelines. It is claimed to be fast and cheap, but a number of studies have seriously questioned its effectiveness (Hertzum and Jacobsen, 2001; Nielsen et al., 1998), citing differences in the experts' backgrounds, overly abstract heuristics and a lack of customization, since the heuristics need to be domain-specific rather than general. Lewis et al. (1990) reported that cognitive walkthroughs also require experts to inspect the system using a set of tasks; this method is likewise claimed to be quick and cheap. Usability testing requires representative users to perform a number of tasks representing the main business processes of the targeted system (Lewis, 2006a; Rubin and Chisnell, 2006). It has been criticized for being costly and time-consuming, although it usually identifies a high number of usability problems because of the involvement of different users.

Although only a limited number of studies have assessed public website usability in Saudi Arabia, most of them emphasize the importance of adopting and assessing usability principles. This observation is supported by the large number of emerging web-based systems that target a wide range of users. The e-government services in Saudi Arabia received UN global rankings of 70 and 58 for 2008 and 2010, respectively, despite the fact that 8 out of 21 ministries were not fully operational owing to incomplete online services (Al-Nuaim, 2011). Alasem (2013) reported that the usability of the Saudi Digital Library (SDL) was not satisfactory in its presentation, layout and online services, based on a usability survey among its targeted users; this again underlines the importance of conducting usability studies of public websites in Saudi Arabia. In another study, Saudi university websites showed an acceptable level of usability, although private university websites scored 5% lower than public ones (Alotaibi, 2013). Although the author adopted only one usability evaluation method, namely heuristic evaluation, the results are in line with current studies of public website usability in Saudi Arabia. The importance of conducting usability studies on public websites in Saudi Arabia is therefore clear. This research reviews the current literature on discount usability engineering, in particular for Saudi Arabian websites, and assesses the usability of a public web-based system based on the discount usability engineering approach.

MATERIALS AND METHODS

The discount usability engineering approach was carried out by applying three evaluation techniques: heuristic evaluation, usability testing and cognitive walkthrough. All three methods were adopted in this study in order to obtain a comprehensive evaluation. The details of each method are given below.

Heuristic evaluation: This method used the list of heuristics proposed by Nielsen (2005). Two usability engineers developed and validated detailed guidelines in order to avoid the known limitation of heuristics being too brief. The problems identified by the evaluators were ranked on the severity scale proposed by Nielsen (1995):

•  0 = I don’t agree that this is a usability problem at all
•  1 = Cosmetic problem only: need not be fixed unless extra time is available on the project
•  2 = Minor usability problem: fixing this should be given low priority
•  3 = Major usability problem: important to fix, so should be given high priority
•  4 = Usability catastrophe: imperative to fix this before the product can be released
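As an illustration, tallying evaluator findings against this severity scale can be sketched as follows; the ratings, category names and helper function are hypothetical, not part of the study's instruments:

```python
from collections import Counter

# Nielsen's (1995) severity scale as listed above (short labels are illustrative)
SEVERITY = {
    0: "Not a usability problem",
    1: "Cosmetic problem only",
    2: "Minor usability problem",
    3: "Major usability problem",
    4: "Usability catastrophe",
}

def tally_by_severity(ratings):
    """Count how many identified problems fall into each severity category."""
    counts = Counter(ratings)
    return {SEVERITY[level]: counts.get(level, 0) for level in SEVERITY}

# Hypothetical evaluator ratings for eight identified problems
example = tally_by_severity([1, 2, 2, 3, 3, 3, 4, 0])
```

Such a tally makes it easy to compare how many major and catastrophic problems each evaluation method surfaces, which is how the methods are compared later in this study.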

Cognitive walkthrough: Two evaluators applied this method to six user tasks, selected after interviewing representative users in order to choose realistic scenarios. A cognitive walkthrough allows one or more evaluators to assess early design mock-ups quickly and does not require a complete prototype or user participation (Rieman et al., 1995). The following inputs were prepared to conduct the cognitive walkthrough evaluation:

•  A general description of who the users will be and what relevant knowledge they possess
•  A specific description of one or more representative tasks to be performed with the system
•  A list of the correct actions required to complete each of these tasks with the interface being evaluated
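Given these inputs, the walkthrough itself is a systematic pass over every action of every task. A minimal sketch of this bookkeeping is shown below; the four questions are the commonly cited streamlined walkthrough questions (after Wharton et al.), and the task and action names are hypothetical:

```python
# The streamlined cognitive walkthrough questions asked for each action
QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the action with the intended effect?",
    "If the correct action is performed, will the user see that progress is being made?",
]

def walkthrough(task, actions):
    """Yield one (task, action, question) check per step; for each check the
    evaluators record either a success story or a failure story."""
    for action in actions:
        for question in QUESTIONS:
            yield (task, action, question)

# Hypothetical task from a public e-services site
checks = list(walkthrough("Renew ID card", ["open service page", "fill form", "submit"]))
# 3 actions x 4 questions = 12 checks for this task
```

Each recorded failure story corresponds to a candidate usability problem, which is then ranked on the severity scale described above.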

Usability testing: Five participants were recruited to perform a set of four tasks. This number of participants follows Nielsen (2006) and Lewis (2006b), who recommended five users as sufficient, provided that selection is based on the targeted system, in order to eliminate significant risks and avoid validity concerns. Four performance metrics were measured while the users performed the tasks: task time, success rate, error rate and user satisfaction. The task performances were recorded using Snagit, which helps to identify each user's actions for usability purposes (Snagit, 2014). In addition, an observation form was developed to help the tester document the participants' comments. The test environment was kept as natural as possible, taking into account Internet speed, browser type, operating system and other factors important to the validity of the study.
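A minimal sketch of how the four performance metrics could be computed from per-participant session logs is given below; the data, field names and helper function are hypothetical (the study's actual instruments were Snagit recordings and observation forms):

```python
from statistics import mean

# Hypothetical logs for one task across five participants: completion flag,
# task time in seconds, error count and a 1-5 satisfaction rating
sessions = [
    {"success": True,  "time_s": 95,  "errors": 1, "satisfaction": 4},
    {"success": True,  "time_s": 120, "errors": 0, "satisfaction": 5},
    {"success": False, "time_s": 240, "errors": 3, "satisfaction": 2},
    {"success": True,  "time_s": 110, "errors": 1, "satisfaction": 4},
    {"success": True,  "time_s": 130, "errors": 2, "satisfaction": 3},
]

def summarize(sessions):
    """Compute the four metrics used in the study for one task."""
    return {
        "success_rate": sum(s["success"] for s in sessions) / len(sessions),
        "mean_time_s": mean(s["time_s"] for s in sessions),
        "mean_errors": mean(s["errors"] for s in sessions),
        "mean_satisfaction": mean(s["satisfaction"] for s in sessions),
    }

metrics = summarize(sessions)
```

Aggregating per task in this way allows the metrics to be compared across tasks and, with more data, tested for statistically significant differences.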

Targeted website: The researchers ensured that the selected website supported the research objectives: it offered a number of functionalities, targeted a wide range of users, represented online public services in Saudi Arabia and would remain unchanged during the study period.

Study procedure: Prior to conducting this study, all research instruments, such as the developed heuristic guidelines, the cognitive walkthrough procedure and the usability testing tasks and forms, were tested. It was also verified that the users had been recruited appropriately, that the experts were organized and that the test environment was ready for the participants.

Statistical analysis of data: Statistical techniques such as ANOVA and regression were applied for data analysis according to the procedures given in SAS (2010). The level of significance was set at 5%.
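For reference, the F statistic underlying a one-way ANOVA of this kind can be computed as follows. This is a pure-Python sketch with hypothetical data; the study itself used SAS procedures:

```python
from statistics import mean

def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for k groups.

    F = (SSB / (k - 1)) / (SSW / (N - k)), where SSB is the between-group
    and SSW the within-group sum of squares. The null hypothesis of equal
    group means is rejected when F exceeds the tabulated critical value at
    the chosen significance level (5% in this study).
    """
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n_total - k))

# Hypothetical task times (seconds) under three conditions
f_stat = one_way_anova_f([[95, 120, 110], [130, 150, 140], [200, 210, 190]])
```

A large F value, as in this example, indicates that the variation between condition means is large relative to the variation within conditions.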

RESULTS AND DISCUSSION

Qualitative assessment and quantitative measurement were applied to analyze the results in order to have a comprehensive evaluation. These results were expected to offer a better understanding of the current status of public web-based system usability.

Table 1 compares the three methods with respect to cost, resources needed, status of the targeted system, time required and general observations made while applying them.

Performance of the three usability evaluation methods: In total, 21 usability problems were identified by the three usability evaluation methods, as shown in Table 2. Usability testing identified 13 usability problems, whereas heuristic evaluation and cognitive walkthrough identified 6 and 8, respectively. All three methods performed almost the same in discovering catastrophic problems, while usability testing and cognitive walkthrough found more major usability problems than heuristic evaluation. Minor problems were revealed mostly by usability testing, owing to the involvement of users who behave differently while using the system. These findings agree with Lewis (2006a) and Rubin and Chisnell (2006), who noted that usability testing, in which representative users perform tasks representing the main business processes of the targeted system, identifies a large number of usability problems because of the variety of users involved.

Relationship between usability methods and usability principles: Further analysis was conducted to relate each usability evaluation method to the types of usability problems it revealed. Usability testing mostly revealed problems related to operation visibility, synthesizability and recoverability; indeed, it was able to reveal problems under all of the usability principles, as presented in Table 3. The cognitive walkthrough highlighted more problems related to operation visibility and guessability. Heuristic evaluation performed better in identifying problems related to consistency, but was unable to identify problems related to task adequacy, synthesizability, familiarity and guessability.

Table 1: Comparison of discount usability engineering methods

Table 2: Number of usability problems found by all the usability evaluation methods

Table 3: Relationship between usability methods and usability principles

This implies the importance of breaking down the heuristics into more detailed guidelines and developing domain-specific rules or criteria. The findings indicate that conducting usability testing seems essential for assessing usability, as this method discovered almost double the number of usability problems found by the other methods. The results also showed that either heuristic evaluation or cognitive walkthrough may be sufficient when time and/or money are short. In a similar study, Alotaibi (2013) concluded that Saudi university websites showed an acceptable level of usability, although it was lower for private university websites. Table 3 shows the relationship between the performance of each of the three usability evaluation methods and the usability principles.

Despite the limited number of users who participated in this research, the results showed an acceptable user satisfaction rate (three out of five participants gave an above-average satisfaction rating). In addition, this study found that the emerging public web-based system was successful with respect to task completion and adoption of usability principles, even though it had been launched only recently and targets a wide range of users. The results suggest that the public web-based system in Saudi Arabia maintains an adequate level of usability practice. In contrast, Alasem (2013) found that the usability of the Saudi Digital Library (SDL) was not satisfactory in its presentation, layout and online services, based on a usability survey among its targeted users.

CONCLUSION, LIMITATIONS AND FUTURE WORK

This study demonstrated the importance of combining a set of usability methods to obtain comprehensive results, as each usability evaluation method has its own strengths and limitations. The results showed that this public web-based system has an acceptable level of usability, although it was launched only recently. However, the assessment covered only the services currently available; as more services are launched, further assessments and evaluations will be needed to ensure that usability principles are adopted. This study also found that the heuristic evaluation method needs further development when applied, such as adding more guidelines and breaking the heuristics down further. Overall, the study results are in line with the currently available literature.

Adopting usability principles is critical for effective interaction. A number of studies have claimed that employing several usability evaluation methods to assess a single system is a costly decision; improving the effectiveness of a single method can therefore be attractive to usability engineers and the business sector. This study shares the limitations of similar studies. The small number of participants is one limitation, as several statistical analyses cannot be conducted on such a small sample. Testing more than one system could also offer fruitful and interesting results. Culture and its impact on usability were not within the scope of this research; a further study could consider that different cultures may need customizable usability guidelines.

REFERENCES

  • Alasem, A.N., 2013. Evaluating the usability of Saudi Digital Library's interface (SDL). Proceedings of the World Congress on Engineering and Computer Science Volume 1, October 23-25, 2013, San Francisco, USA., pp: 178-181.


  • Allen, J., E. Drewski, A. Engelhardt and J. Kim, 2007. Project 3: Usability testing vs. heuristic evaluation. October 4, 2007, pp: 1-6. http://jenniferleeallen.com/portfolio7_docs/OneStartClassifieds_HeuristicEvaluation.pdf.


  • Al-Nuaim, H.A., 2011. An evaluation framework for Saudi e-government. J E-Government Stud. Best Practices, Vol. 2011.


  • Alotaibi, M.B., 2013. Assessing the usability of university websites in Saudi Arabia: A heuristic evaluation approach. Proceedings of the 10th International Conference on Information Technology: New Generations. April 15-17, 2013, Las Vegas, NV., USA., pp: 138-142.


  • Alroobaea, R.S., A.H. Al-Badi and P.J. Mayhew, 2013. A framework for generating a domain specific inspection evaluation method: A comparative study on social networking websites. Proceedings of the Science and Information Conference, October 7-9, 2013, London, UK., pp: 757-767.


  • Anandhan, A., S. Dhandapani, H. Reza and K. Namasivayam, 2006. Web usability testing-CARE methodology. Proceedings of the 3rd International Conference on Information Technology: New Generations, April 10-12, 2006, Las Vegas, NV., pp: 495-500.


  • Baker, K., 1997. Heuristic evaluation. Computer Science 681: Research Methodologies, March 1997. http://grouplab.cpsc.ucalgary.ca/saul/681/1997/kevin/home.html.


  • Hertzum, M. and N.E. Jacobsen, 2001. The evaluator effect: A chilling fact about usability evaluation methods. Int. J. Hum.-Comput. Interact., 13: 421-443.


  • Jeffries, R. and H. Desurvire, 1992. Usability testing vs. heuristic evaluation: Was there a contest? ACM SIGCHI Bull., 24: 39-41.


  • Law, L.C. and E.T. Hvannberg, 2002. Complementarity and convergence of heuristic evaluation and usability test: A case study of universal brokerage platform. Proceedings of the 2nd Nordic Conference on Human-Computer Interaction, October 19-23, 2002, Aarhus, Denmark, pp: 71-80.


  • Lewis, C., P.G. Polson, C. Wharton and J. Rieman, 1990. Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, April 1-5, 1990, Seattle, WA., USA., pp: 235-242.


  • Lewis, J.R., 2006. Usability Testing. In: Handbook of Human Factors and Ergonomics, Salvendy, G. (Ed.). 3rd Edn., John Wiley and Sons, New York, ISBN-13: 978-0471449171, pp: 1275-1316


  • Lewis, J.R., 2006. Sample sizes for usability tests: Mostly math, not magic. Interactions, 13: 29-33.


  • Nielsen, J., 1994. Heuristic Evaluation. In: Usability Inspection Methods, Nielsen, J. and R.J. Mack (Eds.). John Wiley and Sons, New York, ISBN: 0-471-01877-5, pp: 25-61


  • Nielsen, J., 2005. Topic: Heuristic Evaluation. http://www.useit.com/papers/heuristic/.


  • Nielsen, J., 1995. How to conduct a heuristic evaluation. Nielsen Norman Group, USA., January 1, 1995. http://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/.


  • Nielsen, J., 2006. Quantitative studies: How many users to test. http://www.useit.com/alertbox/quantitative_testing.html.


  • Nielsen, J., M. Hertzum and B. John, 1998. The evaluator effect in usability studies: Problem detection and severity judgements. Proceedings of the 42nd Annual Meeting Human Factors and Ergonomics Society, October 5-9, 1998, Chicago, IL., USA., pp: 1336-1340.


  • Rieman, J., M. Franzke and D. Redmiles, 1995. Usability evaluation with the cognitive walkthrough. Proceedings of the Conference Companion on Human Factors in Computing, May 7-11, 1995, Denver, Colorado, pp: 387-388.


  • Rubin, J. and D. Chisnell, 2006. Handbook of Usability Testing: How to Plan, Design and Conduct Effective Tests. 2nd Edn., Wiley Publishing, Inc., New York, USA


  • SAS., 2010. Base SAS 9.2 Procedures Guide: Statistical Procedures. 3rd Edn., SAS Institute Inc., Cary, NC., USA., pp: 17-34


  • Snagit, 2014. Snagit is a tool for deeply understanding customer experiences. http://www.techsmith.com/snagit.html.
