
Information Technology Journal

Year: 2011 | Volume: 10 | Issue: 5 | Page No.: 1031-1037
DOI: 10.3923/itj.2011.1031.1037
CuQuP: A Hybrid Approach for Selecting Suitable Information Systems Development Methodology
Maryati Mohd. Yusof, Zarina Shukur and Azlan Long Abdullah

Abstract: The development of information systems projects can be facilitated by an Information Systems Development (ISD) methodology that addresses the different system development stages. This study introduces a hybrid approach to selecting a system development methodology, known as CuQuP (Complexity, Uncertainty, Quality and Phase), that is based on four main factors: complexity level, uncertainty level, quality criteria and methodology phase. This approach was used to select methodologies for a Malaysian army information management system. The case study was carried out through observations, interviews and document analysis. The main findings show that the factors considered in the proposed approach are applicable and comprehensive in the selection of an ISD methodology.


How to cite this article
Maryati Mohd. Yusof, Zarina Shukur and Azlan Long Abdullah, 2011. CuQuP: A Hybrid Approach for Selecting Suitable Information Systems Development Methodology. Information Technology Journal, 10: 1031-1037.

Keywords: procedure comparison framework, software development technique, Information based software construction and approach selection

INTRODUCTION

The importance of an ISD methodology lies in its role in assisting system developers to manage, control and monitor the system development process in an effective and efficient manner (Kiely and Fitzgerald, 2005). However, the application of ISD methodologies has not significantly improved the ISD process and system quality. This problem is attributable to the inappropriate use of ISD methodologies. The selection of an ISD methodology is not an easy task and the presence of various methodologies has made the selection process more difficult (Carroll, 2003; Collignon et al., 2009; Iivari et al., 2001).

The selection of an ISD methodology depends largely on the developer’s experience and knowledge of a particular methodology. Thus, a developer tends to choose a methodology with which he is familiar (Hughes, 1998). An ISD methodology is also selected by comparing its strength and appropriateness with the ISD environment. This selection basis makes it difficult to justify the rationale for selecting an ISD methodology. In addition, an ISD methodology is often selected based on the problem situation, including the organizational setting as well as the objective and the uncertainty and complexity level of a project (Avison and Fitzgerald, 2006; Little, 2005; Yaghini et al., 2009). This approach seems to be more practical, but the scope of the ISD methodology used is limited and is inclined towards the methodology selected by the researcher. Further, the process of selecting an ISD methodology based on this problem situation is not clear. We found that there is no clear framework or approach to ISD methodology selection that can assist organizations in selecting ISD methodologies in a systematic and holistic manner. Hence, we proposed a hybrid approach to ISD methodology selection to address this need.

In this study, an approach to selecting an ISD methodology is identified and then applied to the Malaysian Army Information Management System (MAIMS) as a case study. Due to the requirements of the system, the level of complexity and uncertainty, the scope of the methodology phases and the quality criteria of the new system are highly prioritized in the methodology selection approach. This approach, which we call CuQuP, was developed using a weighted scale generated from the aforementioned factors. A prototype for selecting a methodology has also been developed based on the proposed approach.

APPROACHES TO THE SELECTION OF METHODOLOGIES

A number of approaches to the selection of methodologies have been introduced, including the Contingency approach by Burns and Dennis (1985), the Big-M approach by Cockburn (1999, 2000), the weight-based approach by Cropley et al. (2003) and the multi-faceted approach by Yaghini et al. (2009).

The Contingency approach by Burns and Dennis (1985): This approach is based on the uncertainty and complexity of a project. Three elements have been identified to determine the uncertainty of a project: the degree of structure, the users' level of understanding of their role and the skill of the system developers. Elements that determine the complexity of a project include the project size, the number of users, the amount of new information generated and the complexity of the new information generated. Based on these elements, a suitable methodology is selected using a two-dimensional contingency grid, as shown in Fig. 1.

The Big-M approach by Cockburn (1999, 2000): This approach is based on four factors: project size, the criticality level of a system, cost and communication among users and developers. The approach, known as the Methodology Grid, is formulated in terms of the number of people involved × criticality level × project priority. A detailed explanation can be found in Cockburn (2000).

Weight-based approach by Cropley et al. (2003): In this study, Cropley et al. (2003) considered two aspects when selecting a methodology: the phases in the methodology and the quality criteria. Methodology candidates are drawn from formal methodologies, including information engineering, Structured Systems Analysis and Design Methods (SSADM), the structured approach, Soft Systems Methods (SSM), multi-view methods and dynamic systems development methods. In this approach, each phase in the system development is given a weight to reflect its level of priority and importance to the respective system. Each phase in the methodology contributes to particular quality criteria; for example, the phases that contribute to the acceptability criterion are the analysis, logical design and evaluation phases. Table 1 shows the relationships between the quality criteria and their respective phases. The frequency with which each methodology phase contributes to the quality criteria in Table 1 is recorded and normalized in Table 2. In general, a single methodology does not cover all phases of system development. Therefore, Avison and Fitzgerald (2006) assign weights (values from 0 to 3) to the phases on which the methodologies focus; Table 3 shows these weights. In this approach, each methodology candidate is given a score based on the importance of a phase, the quality criteria considered and the scope of the methodology. Based on the resulting score, Cropley et al. (2003) then select the combination of methodologies (not more than three) to be used.

Fig. 1: Two dimensional contingency (Source: Burns and Dennis (1985))

Table 1: Generic quality criteria (Source: Cropley et al. (2003))
*Phase key: S: Strategy, A: Analysis, LD: Logical design, PD: Physical design, P: Programming, T: Testing, I: Implementation, E: Evaluation, M: Maintenance

Table 2: Phases versus frequency contributed by quality criteria (Source: Cropley et al. (2003))
Quality criteria frequencies are normalized as follows: 0-5 = 1, 6-10 = 2, 11-15 = 3 and 16-20 = 4 (normalized frequency)

The Multi-faceted approach by Yaghini et al. (2009): This approach classifies methodology candidates according to hard and soft systems approaches, system paradigms, objects, domains, applications, major modeling techniques, system development phases, background and participants.

Table 3: Weighting scale based on type of methodologies

THE PROPOSED APPROACH: CUQUP

The approach that we propose in this study is built from a combination of work done by Burns and Dennis (1985), Cropley et al. (2003) and Avison and Fitzgerald (2006) as well as other findings from the literature. We identify a number of attributes that influence system development complexity and uncertainty (Fig. 2). We propose an approach to ISD methodology selection that considers these attributes and we employ a scoring method to provide a more rational, systematic and flexible approach to methodology selection.

We produce a weighting scheme based on three main elements: the complexity and uncertainty levels, the system quality criteria and the scope of the methodology phases. From these elements, we assign a score to each methodology; in general, the methodology with the highest score is the most suitable methodology for the system. We call this approach CuQuP (C: Complexity, U: Uncertainty, Qu: Quality, P: Phase).

Measuring the level of complexity: We define the complexity level as the degree of difficulty of controlling and monitoring the development process of a system. Nine attributes, based on the work of Little (2005), are chosen to determine the level of system complexity: project organization size, capacity and competency of the project participants, location of participants, critical level of the system, system process and integration, user requirements, number of users, evolution, and data requirements and processes. In brief, each attribute has five categories and each category carries a different weight (i.e., 1, 3, 5, 7 or 10), so the maximum total weight is 90. If the obtained weight is less than 45, the complexity level is considered low; otherwise, it is considered high. A minimal sketch of this step is given below.
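The following sketch assumes that Little's five category weights (1, 3, 5, 7 and 10) apply uniformly to all nine attributes and that each attribute is rated by selecting one category; the ratings in the usage example are hypothetical, not taken from the case study.

```python
# Sketch of the complexity scoring step. The category weights and the
# 45-of-90 threshold follow the text; the attribute ratings are made up.
CATEGORY_WEIGHTS = [1, 3, 5, 7, 10]  # weights for categories 1..5

COMPLEXITY_ATTRIBUTES = [
    "project organization size", "capacity and competency of participants",
    "location of participants", "critical level of system",
    "system process and integration", "user requirements",
    "number of users", "evolution", "data requirements and processes",
]

def weighted_level(ratings: dict, attributes: list) -> tuple:
    """Sum the category weights over the attributes and call the level
    high when the total reaches half of the maximum obtainable weight."""
    total = sum(CATEGORY_WEIGHTS[ratings[a] - 1] for a in attributes)
    return total, ("high" if total >= 10 * len(attributes) / 2 else "low")

# Hypothetical ratings: category 3 (weight 5) for every attribute.
ratings = {a: 3 for a in COMPLEXITY_ATTRIBUTES}
print(weighted_level(ratings, COMPLEXITY_ATTRIBUTES))  # (45, 'high')
```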

Measuring the level of uncertainty: Uncertainty in this study is defined as an unfamiliar problem with complicated and unexpected requirements. Six attributes are involved in determining this level: problem structure, users' level of understanding, project team experience, project duration, project scope and technology.

Fig. 2: Attributes that affect the complexity and uncertainty of system development

As with the complexity level, we use the weighting scale of Little (2005) for uncertainty. The maximum total weight is 60. If the obtained weight is less than half of this total (i.e., 30), the uncertainty level is low; otherwise, it is high.
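The uncertainty level is computed in the same way as the complexity level in the earlier sketch; the standalone snippet below uses the six attributes listed above with hypothetical ratings.

```python
# Uncertainty scoring: six attributes, maximum weight 60, threshold 30,
# using the same 1/3/5/7/10 category weights as for complexity.
CATEGORY_WEIGHTS = [1, 3, 5, 7, 10]
UNCERTAINTY_ATTRIBUTES = [
    "problem structure", "users' level of understanding",
    "project team experience", "project duration",
    "project scope", "technology",
]

# Hypothetical ratings: category 4 (weight 7) for every attribute.
ratings = {a: 4 for a in UNCERTAINTY_ATTRIBUTES}
total = sum(CATEGORY_WEIGHTS[r - 1] for r in ratings.values())
print(total, "high" if total >= 30 else "low")  # 42 high
```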

Weighting scale based on complexity and uncertainty factors: We illustrate the relationship between the complexity and uncertainty levels and their impact on a methodology, as described by Satzinger et al. (2002), in Fig. 3.

We convert and adapt Fig. 3 to the form of a weighting scale, as illustrated in Table 4.

Once the complexity and uncertainty levels (high or low) are known from the previous two steps, the weight of each phase can be determined using Table 4.
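The lookup itself can be sketched as below. Only the weights for the high-complexity, high-uncertainty combination over the first five phases are grounded in the case study reported later (Ci = 3, 2, 4, 3 and 1); the remaining rows are placeholders, not the published Table 4.

```python
# Hedged sketch of the Table 4 lookup; only the (high, high) row comes
# from the MAIMS case study, the other rows are illustrative placeholders.
PHASES = ["strategy", "feasibility", "analysis",
          "logical design", "physical design"]

TABLE_4 = {  # (complexity, uncertainty) -> weight per phase, 4..1 scale
    ("high", "high"): [3, 2, 4, 3, 1],  # from the MAIMS case study
    ("high", "low"):  [2, 2, 3, 3, 3],  # placeholder
    ("low", "high"):  [3, 3, 3, 2, 1],  # placeholder
    ("low", "low"):   [1, 2, 2, 2, 2],  # placeholder
}

def phase_weights(complexity: str, uncertainty: str) -> list:
    return TABLE_4[(complexity, uncertainty)]

print(phase_weights("high", "high"))  # [3, 2, 4, 3, 1]
```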

Weighting scale based on the quality criteria concerned: System quality criteria refer to the contribution of each methodology phase in producing a good quality system (Cropley et al., 2003). As an example, important phases that contribute to compatibility are strategy, feasibility, analysis, logical and physical design and evaluation. This study employs system quality criteria that are based on those of Cropley et al. (2003) and Avison and Fitzgerald (2006), listed in Table 1.

Fig. 3: The focus of methodology phase based on complexity and uncertainty

Table 4: Weighting scale based on complexity and uncertainty level
4: Completely important, 3: Important, 2: Fairly important, 1: Not important

Table 5: Weighting scale based on quality criteria

These quality criteria are important to all systems, but certain quality criteria should be emphasized according to the project context. For example, the MAIMS project focuses mainly on the following quality criteria: compatibility, flexibility, scalability and usability (MAIMS, 2004). In this approach, the frequency of the methodology phases that contribute to the concerned quality criteria is doubled and added to the original frequency proposed by Cropley et al. (2003) to indicate their importance. To illustrate this approach, assume that compatibility, flexibility, scalability and usability are the four concerned quality criteria for the developed system. Based on Table 1, three of the four criteria (all except usability) are contributed to by the methodology phase PD (physical design). Therefore, we double the PD frequency to 6 (3 × 2) and add it to the original frequency in Table 1 (i.e., 17), for a total of 23, as shown in Table 5. This process is applied to the other methodology phases that contribute to the concerned quality criteria.
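A small sketch of this adjustment is shown below; it assumes the normalization bins under Table 2 (0-5 = 1, 6-10 = 2, ...) simply continue in steps of five beyond 20, which is consistent with the adjusted PD frequency of 23 mapping to a weight of 5 in the case study.

```python
import math

def adjusted_frequency(original: int, concerned_hits: int) -> int:
    """Double the contributions from the concerned quality criteria and
    add them to the original frequency, e.g., 17 + 2 * 3 = 23 for PD."""
    return original + 2 * concerned_hits

def normalize(freq: int) -> int:
    """Map a frequency onto the 5-wide bins under Table 2 (assumed to
    extend past 20 on the same pattern)."""
    return max(1, math.ceil(freq / 5))

pd = adjusted_frequency(17, 3)  # the PD example from the text
print(pd, normalize(pd))        # 23 5
```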

Methodology candidates: To test our framework, we need to select suitable methodology candidates. Hence, we use common methodologies from the work of Yahya et al. (2002) and Avison and Fitzgerald (2006). Yahya et al. (2002) found that 12 different methodologies are used by most organizations in Malaysia, while Avison and Fitzgerald (2006) list 13 methodologies based on philosophy, model, technique and tool, scope, output, practice and product. From those methodologies, we identified eight common methodologies that are described in both works: JSD, YSM (Yourdon), SSADM, IE, OOA, ETHICS, SSM and RAD.

Weighting scale based on the type of methodology: The scope of the methodology phase in this work is based on Avison and Fitzgerald (2006), while the weighting scheme is based on Cropley et al. (2003). Therefore, we list the methodologies that appear in both works in Table 6.

Table 6: Weighting Scale based on selected types of methodologies

The methodology score calculation: The score for each methodology can be calculated once the methodology phases that are important to the respective system have been clearly identified. The score can be used as a basis for selecting suitable methodologies. The following formula is used to calculate the score of a methodology of type S:

Score(S) = Σi=1..n Ci × Qi × Si

where i is the ith phase, running from phase 1 (strategy) to phase n (maintenance), Ci is the weight for the ith phase based on the complexity and uncertainty factors (refer to Table 4), Qi is the weight for the ith phase based on the quality criteria (refer to Table 5) and Si is the weight for the ith phase based on the methodology of type S (refer to Table 6).
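In code, the formula is simply a sum of per-phase products; the minimal sketch below assumes the three weight vectors are supplied in phase order.

```python
def methodology_score(c: list, q: list, s: list) -> int:
    """Score(S) = sum over phases i of Ci * Qi * Si."""
    return sum(ci * qi * si for ci, qi, si in zip(c, q, s))
```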

THE CASE STUDY

We validated the proposed approach using the Malaysian Army Information Management System (MAIMS) case study. The case study was conducted through observation, interviews and document analysis at the Information Technology Center of the Malaysian Army between 2008 and 2009. During these activities, the focus was on the complexity and uncertainty levels, the system quality criteria and the scope of the methodology phases. From this information, the score for each methodology was determined. The development strategy of a system is in-house if it is a small-scale system; in contrast, a large-scale system is outsourced. The system planning focused more on the process aspect than on the infrastructure. The estimated number of system users was 150. The organization had hardly any experienced or competent staff for system development, which partly explains why it preferred the outsourcing strategy. The project team consisted of both experienced and new technical staff, as well as representative users from the same location. Normally, the appointed contractor served as a more experienced and highly competent team member. The system used a large amount of data and involved a large number of processes; therefore, the integration of modules was highly critical. The problem structure of the system could be clearly defined because the user requirements were easy to identify. The scope of system development was very broad and flexible. From the analysis, we found that most projects in the system development used an outsourcing strategy. Therefore, the methodology phases of focus were planning (strategy and feasibility), analysis and design (logical and physical). Thus, the respective values of i were 1, 2, 3, 4 and 5, representing the row numbers of Table 4.

Based on the data obtained from the case study, the weight for the complexity level of the system was 52 out of 90; therefore, the system was considered highly complex. The weight for the uncertainty level was 35 out of 60, which was also considered to reflect high uncertainty. From Table 4, the values of Ci were 3, 2, 4, 3 and 1, with respect to the methodology phases. From the internal report of the organization, the quality criteria taken into consideration in developing the system were compatibility, flexibility, scalability and usability. Therefore, from Table 5, the values of Qi were 3, 4, 5, 5 and 5, with respect to the methodology phases. Lastly, to obtain the score for each methodology candidate, the Si values from Table 6 were used. For example, from Table 6, the Si values for SSADM were 2, 3, 3, 3 and 3, with respect to the methodology phases. Table 7 shows the score calculation for SSADM.
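Applying the scoring formula above to the published MAIMS weights reproduces the SSADM total reported in Table 7.

```python
C = [3, 2, 4, 3, 1]  # complexity/uncertainty weights (Table 4)
Q = [3, 4, 5, 5, 5]  # quality-criteria weights (Table 5)
S = [2, 3, 3, 3, 3]  # SSADM phase weights (Table 6)

score = sum(c * q * s for c, q, s in zip(C, Q, S))
print(score)  # 162, matching Table 7
```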

In the case of MAIMS, the detailed scores for each phase and each type of methodology are shown in Table 8.

Table 7: Calculation example for SSADM

Table 8: Result based on MAIMS case study

Based on the results in Table 8, the highest score, 163, was obtained by IE. However, two other candidates, SSADM and RAD, obtained scores of 162, very close to that of IE. Therefore, all three candidates should be considered by the organization. Among these three, the organization's familiarity with a methodology, as well as the expertise, tools and support available for using it, can inform the final decision on which candidate to use. Apart from the above criteria, if the organization plans to use multiple methodologies, the score for each phase can guide the selection process. For example, IE and SSM obtained the highest score (27) for the strategy phase, while YSM, SSADM and RAD obtained the highest score (24) for the feasibility phase. Therefore, the organization can use a combination of SSM for the strategy phase and SSADM for the feasibility phase. A sketch of this per-phase selection follows.
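The per-phase selection amounts to picking the highest-scoring candidate in each row of Table 8. In the sketch below, only the scores quoted above (plus SSADM's strategy score of 18, which follows from 3 × 3 × 2 in Table 7) are grounded in the text; the full Table 8 would supply the remaining entries.

```python
# Per-phase selection when combining methodologies: pick the
# highest-scoring candidate for each phase of the score table.
phase_scores = {
    "strategy":    {"IE": 27, "SSM": 27, "SSADM": 18},
    "feasibility": {"YSM": 24, "SSADM": 24, "RAD": 24},
}

for phase, scores in phase_scores.items():
    best = max(scores, key=scores.get)
    print(f"{phase}: {best} ({scores[best]})")
```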

DISCUSSION AND CONCLUSION

Our proposed hybrid approach to system development methodology selection, CuQuP, is based on system methodology, project and human attributes combined with uncertainty and complexity levels and a scoring method (Burns and Dennis, 1985; Cropley et al., 2003; Avison and Fitzgerald, 2006). We combine these previous approaches with a number of adaptations, including measures of the complexity and uncertainty levels, their respective scoring scales and a scoring scale for the quality criteria. We argue that this approach is capable of assisting system developers in selecting a system methodology easily and systematically, as well as in evaluating the suitability of a certain methodology for a particular system development project. CuQuP has a number of advantages over several previous approaches. The Contingency approach (Burns and Dennis, 1985) is easier to understand; however, it does not specify how complexity and uncertainty can be measured. In addition, the approach only includes two methodology candidates, namely the Waterfall model and prototyping. The Big-M approach (Cockburn, 1999, 2000) is difficult to implement, as the methodology must be adapted frequently, particularly in terms of the size and density of a methodology; in this respect, the user needs to possess extensive knowledge of and experience in ISD methodology. Generic categories are proposed for the methodology size (lightweight, medium weight and heavyweight) without assigning specific methodology candidates to these categories. The weight-based approach is suitable for projects with a clear problem structure or high complexity and certainty levels, and it enables a more careful and detailed selection of methodology. The system quality criteria featured in this approach are comprehensive. However, the approach does not provide a selection method for projects with unclear problem structures and low complexity and certainty levels. The multi-faceted approach proposed by Yaghini et al. (2009) includes a wide range of selection categories. However, some of the selection categories do not reflect the assigned attributes and are confusing. In addition, the framework is rigid in that it only recommends six methodology candidates for a fixed combination of selection attributes.

In this study, we have described our CuQuP approach to selecting a methodology for information systems development. Building on the findings of previous research, we proposed a weighting scheme based on three main elements, namely, the complexity and uncertainty levels, the system quality criteria and the scope of the methodology phases. A score is assigned to each methodology based on these elements; in general, the methodology with the highest score is the most suitable for the system. The approach was used to select methodologies for the development of a number of Malaysian Army operation information systems. The results show that the factors considered in the proposed approach are applicable and comprehensive in the selection of a system development methodology. By using this approach, project team members can convince their top management to use the selected methodology through a scientific method. Although the CuQuP approach is applicable and comprehensive, we found that the data gathering activities required for this approach were time-consuming. Therefore, our next task is to find ways to obtain the respective information in a shorter time. The approach can also be validated in other research contexts for further refinement.

REFERENCES

  • Avison, D. and G. Fitzgerald, 2006. Information Systems Development: Methodologies, Techniques and Tools. 4th Edn., McGraw Hill, London


  • Burns, R.N. and A.R. Dennis, 1985. Selecting the appropriate application development methodology. ACM Sigmis Database, 17: 19-23.


  • Carroll, J., 2003. The process of ISD methodology selection and use: A case study. Proceedings of the European Conference on Information Systems, (ECIS'03), Association for Information Systems, University of Melbourne, pp: 1-12.


  • Cockburn, A., 1999. Methodology per project. Humans and Technology Technical Report HaT TR 1999. http://alistair.cockburn.us/Methodology+per+project


  • Cockburn, A., 2000. Selecting a project's methodology. IEEE Software, 17: 64-71.


  • Collignon, S., S.C. Cook and N.J. Davidson, 2009. Towards the development of a methodology to develop information systems in a research and development environment. Int. J. Intel. Defense Support Syst., 2: 222-245.


  • Cropley, D., Y. Yi and S. Cook, 2003. On identifying a methodology for land C2 architecture development. Proceedings of the Land Warfare Conference, Oct. 28-30, Adelaide, Australia, pp: 401-409.


  • Hughes, J., 1998. Selection and evaluation of information systems methodologies: The gap between theory and practice. IEEE Proc., 145: 100-104.


  • Iivari, J., R. Hirscheim and H.K. Klein, 2001. A dynamic framework for classifying information systems development methodologies and approaches. J. Manage. Inform. Syst., 17: 179-218.


  • Kiely, G. and B. Fitzgerald, 2005. An investigation of the use of methods within information systems development projects. Electron. J. Inform. Syst. Dev. Countries, 22: 1-13.


  • Little, T., 2005. Context-adaptive agility: Managing complexity and uncertainty. IEEE Software, 22: 28-35.


  • Satzinger, J.W., R.B. Jackson and S.D. Burd, 2002. Systems Analysis and Design in a Changing World. 2nd Edn., Springer, Massachusetts


  • Yaghini, M., A. Bourouni and R.H. Amiri, 2009. A framework for selection of information systems development methodologies. Comput. Inform. Sci., 2: 1-9.


  • Yahya, Y., M.M. Yusof, M. Yusof and N. Omar, 2002. The use of information system development methodology in Malaysia. Int. J. Inform. Technol., 2: 15-34.


  • MAIMS, 2004. Malaysian Army Information Management System (MAIMS) Report, 2004. Ministry of Defense, Malaysia.
