
Information Technology Journal

Year: 2012 | Volume: 11 | Issue: 1 | Page No.: 49-57
DOI: 10.3923/itj.2012.49.57
Towards Quantitative Assessment Model for Software Process Improvement in Small Organization
Jingfan Tang, Ming Jiang and Qin Zhu

Abstract: Software Process Assessment (SPA) was conducted prior to Software Process Improvement (SPI) activities to identify the process areas to be improved. Process assessment methods such as ISO/IEC 15504-2 and SCAMPI were available to all enterprises, but they were difficult for most small organizations to apply because of their complexity and the consequent large investment of time and resources. To enable small organizations to conduct SPA activities effectively with little effort, this study presented a quantitative assessment model. It provided an efficient approach for evaluating the quality of process execution in projects and thereby improving the organization’s maturity. Project activities and deliverables were identified as organizational standard assessment items which could be tailored for each specific project. Weights were assigned to the tailored assessment items and the corresponding review activities according to importance and priority. Through an effective quantitative approach to milestone-driven assessment, a compliance score could be obtained from the characterization of the results of document inspection and interview sessions. Initial practice results showed that the presented lightweight assessment model was indeed suitable for small organizations to support process assessment and improvement activities.


How to cite this article
Jingfan Tang, Ming Jiang and Qin Zhu, 2012. Towards Quantitative Assessment Model for Software Process Improvement in Small Organization. Information Technology Journal, 11: 49-57.

Keywords: small organization, software process assessment, software process improvement, quantitative assessment and compliance score

INTRODUCTION

In order to assure the quality of the software product, software development organizations needed to ensure that the tailored and customized software processes were strictly executed in software projects (Capretz, 2004; Fadila and Said, 2006; Akbar et al., 2011). The quality of a software product was largely determined by the quality of the software processes adopted to develop and maintain it (Changchien et al., 2002; Suula et al., 2009; Kheirkhah et al., 2009; Kaur and Singh, 2010). Software Process Improvement (SPI) was conducted continuously in software organizations to meet the requirements of business goals. In order to expose improvement opportunities, Software Process Assessment (SPA) was conducted against a process assessment model to identify the weaknesses of the processes executed in software projects (Nasir et al., 2008; Makinen and Varkoi, 2008; Khokhar et al., 2010). It could also be regarded as a catalyst for the SPI initiative (McCaffery and Coleman, 2009).

Standard process models such as ISO/IEC 15504 and CMMI were referenced by software development organizations to determine process capability or organizational maturity. However, they required a large investment of time and resources and posed a real challenge for small organizations conducting SPA and SPI activities (Von Wangenheim et al., 2006).

Sunetnanta et al. (2009) presented a model to reduce the cost of CMMI implementation in offshore-outsourced software development projects. They continuously collected and analyzed project data from project repositories for ongoing quantitative CMMI assessment, reducing the need for site visits while still providing evidence of process quality and improvement.

Many of these models did not consider the special characteristics of projects in small organizations. First, small organizations needed more detailed guidelines and practices to understand the overall company situation and the improvement opportunities before deploying an SPI initiative in this type of organization (Pino et al., 2010). Second, it was not easy to incorporate the improvement opportunities uncovered by an assessment into the practice of their processes. Furthermore, it was also difficult for small organizations to execute improvement activities that demanded a large amount of effort from the organization (Hurtado et al., 2008; Pino et al., 2010).

COMPETISOFT (Oktaba et al., 2007) aimed to help small companies set up and implement process improvement practices, providing a methodological framework for guiding SPI activities, a process reference model and a process evaluation model. It also pointed out that one success factor for SPI initiatives in small organizations was for the improvement effort to be guided by specific procedures and a combination of different approaches.

PmCOMPETISOFT (Pino et al., 2009) offered a set of well-defined process entities (such as purpose, objectives, roles, activity diagram, activities, work products and tool support) for the whole process improvement life cycle. This process comprised five activities: (a) initiating the cycle, (b) diagnosing the process, (c) formulating improvements, (d) executing improvements and (e) revising the cycle.

A number of researchers provided conceptual guidelines for using quantitative techniques to understand software processes for improvement (Radice, 1998; Florac and Carleton, 1999; Shen et al., 2011; Fazal-e-Amin et al., 2011). The existing implementations frequently focused on the benefits of quantitative results (Florac and Carleton, 2000; Jalote et al., 2000; Jacob and Pillai, 2003), but there was a lack of satisfactory guidelines for implementing quantitative techniques in small organizations (CMU/SEI, 2002; Sargut and Demirors, 2006; Colla and Montagna, 2008; Habra et al., 2008). Tarhan and Demirors (2006) introduced an assessment model to guide quantitative analyses and applied it in a small organization. Quantitative techniques were frequently used together with organizational maturity or process capability frameworks by some process improvement models, which usually demanded an investment of time, effort and money over several years (Ayca and Demirors, 2008). Such an investment was difficult for small organizations to afford and made it challenging to understand and manage processes based on quantitative data.

In order to support small organizations in conducting software process assessment and improvement activities effectively with little effort, this study aimed to present a lightweight assessment model to guide the execution of process assessment through quantitative techniques.

MODEL OVERVIEW

This study presented a quantitative assessment model for software process improvement in small organizations based on tailored assessment items, a quantitative assessment approach and an effective assessment process.

AI (Assessment Items) meant the items which would be assessed against the assessment rules to determine the capability of the process executed in the inspected project
AA (Assessment Approaches) meant the quantitative approaches used in the assessment, including the scoring system, indicator document inspection and interview session
AP (Assessment Processes) meant the process of the assessment including tailoring, planning, execution and result analysis

Figure 1 showed the architecture overview of the model.

Fig. 1: Model overview

Through analysis of the small organization, the related activities and documents were identified as assessment items, which could be tailored according to the specific requirements of the projects. In order to assure the quality of assessment items such as the requirement specification, design specification and source code, the review activities for the assessment items were also considered in this model. Weights for both the assessment items and the review activities were assigned according to importance and priority and were used in the quantitative assessment approach, along with the assessment rules, for the document inspection and interview sessions. To increase the efficiency of assessment, a project-milestone-driven assessment plan was drafted to guide the whole set of assessment activities. Based on the quantitative assessment of the assessment items and review activities at each milestone of the project, the final compliance score could be obtained and used to identify the strengths and weaknesses of process execution in the projects for software process improvement in small organizations.

TAILORED ASSESSMENT ITEMS

During the Software Development Life Cycle (SDLC), two kinds of activities were carried out to achieve the project objectives: engineering activities and management activities. Regardless of the SDLC methodology adopted, this model focused on the deliverables produced by engineering activities during the SDLC, such as the requirement specification, design specification, source code and test cases.

In order to meet the quality, cost and schedule goals of the project, attention also had to be paid to management activities such as project planning, project monitoring and control, risk management, issue management and change management, which were executed throughout the whole SDLC.

The assessment items therefore reflected the execution of the engineering and management activities against the standard processes. Since different kinds of projects used different development methodologies, the assessment items could be tailored according to specific requirements. Meanwhile, a standard tailoring guide was provided for the tailoring process to reflect the best practices in the organization and standard templates for the assessment items were provided to assure consistent quality among different projects. To tailor out a mandatory item, the project had to provide a strong reason for review and approval before execution.

Table 1: Assessment items checklist
Items marked with * are mandatory items

The organizational standard assessment items checklist was shown in Table 1.
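As an illustration only (not part of the published model), the mandatory-item tailoring rule could be enforced programmatically. The data structures, item names and waiver concept in this Python sketch are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    name: str
    mandatory: bool          # items marked with * in Table 1

@dataclass
class Waiver:
    item_name: str
    reason: str
    approved: bool           # reviewed and approved before execution

def tailor(checklist, removed_names, waivers):
    """Return the tailored item set; removing a mandatory item requires
    an approved waiver with a strong reason."""
    approved = {w.item_name for w in waivers if w.approved}
    for name in removed_names:
        item = next(i for i in checklist if i.name == name)
        if item.mandatory and name not in approved:
            raise ValueError(f"Mandatory item '{name}' needs an approved waiver")
    return [i for i in checklist if i.name not in removed_names]

# Example: dropping a hypothetical UI design specification from a project
checklist = [ChecklistItem("Requirement specification", True),
             ChecklistItem("UI design specification", False)]
tailored = tailor(checklist, {"UI design specification"}, [])
```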

QUANTITATIVE ASSESSMENT APPROACH

Definitions
Definition 1: Tailored assessment items with prioritized weight: Define i = {i1, i2, i3, …, in} as the set of tailored assessment items obtained after the tailoring procedure, which would be assessed against the assessment rules to determine the capability of the process executed in the inspected project.

Define p as the priority type of an assessment item. A priority type could be critical, high, medium or low, represented by ij.p ∈ {critical, high, medium, low}, which described the importance of the assessment item for process execution in the project.

Define iw as the weight of the assessment item for quantitative assessment, which corresponded to ij.p and was represented by ij.iw ∈ {8, 4, 2, 1}: if ij.p = critical, then ij.iw = 8; likewise, high corresponded to 4, medium to 2 and low to 1.

Definition 2: Review type with corresponding weight: Define r as the review type of the assessment item. A review type could be external, internal or none, represented by ij.r ∈ {external, internal, none}, which described the review requirement for each assessment item. If ij.r = external, the assessment item ij needed to be reviewed by external experts. If ij.r = internal, the assessment item ij only needed to be reviewed by internal experts. If ij.r = none, there was no review requirement for the assessment item ij.

Define rw as the weight of the review activity, which was decided by iw and r. If ij.r = internal, then ij.rw = ij.iw x 0.5. If ij.r = external, then ij.rw = ij.iw. If ij.r = none, then ij.rw = 0.

Definition 3: Target score: Define ts as the target score of the assessment result for the tailored assessment item, represented by ij.ts = ij.iw + ij.rw.

Define total (ts) as the total target score of the assessment results on the tailored assessment items:

Total (ts) = Σ ij.ts, for j = 1 to n

Definition 4: Actual score: Define as_i as the actual score of the evaluation of the assessment item, represented by ij.as_i.

Define as_r as the actual score of the evaluation of the review activity of the assessment item, represented by ij.as_r.

Define ‘as’ as the total actual score of the assessment result for the tailored assessment item, represented by ij.as = ij.as_i + ij.as_r. It would be obtained through execution of the quantitative assessment described in the following sections.

Define total (as) as the total actual score of the assessment results on the tailored assessment items:

Total (as) = Σ ij.as, for j = 1 to n

Definition 5: Compliance score: Define cs as the compliance score of the assessment result for the tailored assessment item against the target score ij.ts, represented by ij.cs = ij.as x 100/ij.ts.

Define total (cs) as the total compliance score of the assessment results on the tailored assessment items:

Total (cs) = Total (as) x 100/Total (ts)
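For readers who prefer code to notation, Definitions 1-5 can be summarized as a short Python sketch. The function names and data layout are illustrative assumptions; the weights and formulas follow the definitions above.

```python
# Weight and score formulas from Definitions 1-5 (illustrative sketch).

PRIORITY_WEIGHT = {"critical": 8, "high": 4, "medium": 2, "low": 1}   # i_j.iw

def item_weight(priority):
    """Definition 1: map the priority type p to the item weight iw."""
    return PRIORITY_WEIGHT[priority]

def review_weight(iw, review_type):
    """Definition 2: external review carries the full item weight,
    internal review half of it, and no review contributes nothing."""
    return {"external": iw, "internal": 0.5 * iw, "none": 0}[review_type]

def target_score(iw, rw):
    """Definition 3: ts = iw + rw."""
    return iw + rw

def compliance_score(actual, target):
    """Definition 5: cs = as x 100 / ts."""
    return actual * 100.0 / target

def totals(items):
    """Totals over all tailored items; each item is a dict with
    'as' (actual score) and 'ts' (target score)."""
    total_ts = sum(i["ts"] for i in items)
    total_as = sum(i["as"] for i in items)
    return total_as, total_ts, total_as * 100.0 / total_ts   # total(cs)
```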

Indicator document inspection: During the assessment, the indicator documents for the assessment items would be inspected to see whether they conformed to the defined standard. The standard was simplified into two questions:

Whether the indicator document existed
Whether the indicator document followed the template strictly

The quality of the content of the indicator document itself was assured through quality control activities, which were out of the scope of this study.

There were four kinds of results for the inspection:

Full Compliance (FC): The indicator document existed and followed the template standard strictly
Partial Compliance (PC): The indicator document existed but did not follow the template standard strictly
Non-Compliance (NC): The indicator document did not exist
Not Applicable (NA): The related activity that would have produced the indicator document did not take place

The inspection result for each assessment item needed to be recorded.
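A minimal sketch of this inspection logic follows; the function name and boolean inputs are assumptions, while the FC/PC/NC/NA outcomes follow the rules above.

```python
def inspect_document(exists, follows_template, activity_happened=True):
    """Classify an indicator document using the two inspection questions."""
    if not activity_happened:
        return "NA"   # the producing activity did not take place
    if not exists:
        return "NC"   # document missing
    return "FC" if follows_template else "PC"
```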

Interview session: Borrowing ideas from SCAMPI, an interview session could also be conducted for some assessment items to corroborate, if needed, what had been found in the indicator document inspection. Compared with SCAMPI, the question list for the interview was simplified and could be tailored according to specific requirements.

There were three kinds of results for interview:

Pass: The answers to the interview questions conformed to the requirements of the standard procedures
Fail: The answers to the interview questions did not conform to the requirements of the standard procedures
Not applicable (NA): The default value for assessment items that required no interview

In order to assure the quality of the interview session, the predefined question list was prepared to cover the assessed activities and indicator documents.

Table 2 showed examples of the interview question list for engineering-related activities, covering the initial, requirements, design, construction, system test, UAT, release and closure stages.

Table 3 showed examples of the interview question list for management-related activities, including risk management, monitoring and control, configuration management and change control.

Assessment characterization and scoring: Finally, characterization was performed according to the inspection result and interview result and was used to generate the final actual score.

Table 2: An example of question list for interview session- engineering

Table 3: An example of question list for interview session- management

There were four kinds of characterization results for the assessment item:

Fully Implemented (FI, 100%): (FC and NA) or (FC and Pass); the actual score (as_i) of the assessment item reached 100% of the item weight (iw)
Largely Implemented (LI, 75%): PC and Pass; the actual score (as_i) of the assessment item reached 75% of the item weight (iw)
Partially Implemented (PI, 35%): (PC and Fail) or (NC and Pass); the actual score (as_i) of the assessment item reached 35% of the item weight (iw)
Not Implemented (NI, 0%): NC and Fail; the actual score (as_i) of the assessment item equaled 0

The scoring ratio for the different kinds of characterization results could be configured according to the actual requirements of the organization.
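These rules can be captured as a small, configurable lookup table. The following Python sketch is illustrative only; combinations not listed in the text are deliberately omitted and the ratios can be changed as noted above.

```python
# Characterization of an assessment item from its inspection result
# (FC/PC/NC/NA) and interview result (Pass/Fail/NA), per the rules above.
# Ratios are configurable; unlisted combinations are left out of this sketch.
CHARACTERIZATION = {
    ("FC", "NA"):   ("FI", 1.00),
    ("FC", "Pass"): ("FI", 1.00),
    ("PC", "Pass"): ("LI", 0.75),
    ("PC", "Fail"): ("PI", 0.35),
    ("NC", "Pass"): ("PI", 0.35),
    ("NC", "Fail"): ("NI", 0.00),
}

def item_actual_score(iw, inspection, interview):
    """Return (label, as_i), where as_i = characterization ratio x item weight iw."""
    label, ratio = CHARACTERIZATION[(inspection, interview)]
    return label, ratio * iw
```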

For the review activities of the assessment item, there were two kinds of characterization results:

Fully Implemented (FI, 100%): There was evidence of review activities; the actual score (as_r) of the review activities on the assessment item reached 100% of the review weight (rw)
Not Implemented (NI, 0%): There was no evidence of review activities; the actual score (as_r) of the review activities on the assessment item equaled 0

The evidence for review activities included the review meeting minutes, review log and emails, etc.

With the characterization rules, the score for each assessment item (as_i) and review activity (as_r) could be generated automatically.

For example, assume ij.p = high and ij.r = external, so ij.iw = 4, ij.rw = 4 and ij.ts = ij.iw + ij.rw = 8. If the inspection result was PC and the interview result was Pass, the characterization result for the item was LI according to the rule of PC and Pass, so ij.as_i = ij.iw x 75% = 3. If the result of the review activities was FI, ij.as_r = ij.rw x 100% = 4 and ij.as = ij.as_i + ij.as_r = 7. So, ij.cs = ij.as x 100/ij.ts = 87.5.
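Using the weights and ratios above, the worked example can be reproduced directly. This is an illustrative Python sketch; the variable names simply mirror the notation.

```python
# Reproducing the worked example: priority = high, review = external,
# inspection result = PC, interview result = Pass, review evidence found (FI).
iw = 4                    # item weight for priority "high"
rw = iw                   # external review carries the full item weight
ts = iw + rw              # target score = 8

as_i = 0.75 * iw          # PC + Pass -> Largely Implemented (75%) -> 3
as_r = 1.00 * rw          # review evidence present -> FI (100%)   -> 4
as_total = as_i + as_r    # actual score = 7

cs = as_total * 100 / ts  # compliance score
print(cs)                 # 87.5
```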

After getting the score for each assessment item and review activity, the final actual score (total (as)) and compliance score (total (cs)) could be calculated for the assessment according to the defined quantitative approach.

ASSESSMENT PROCESS

Assessment planning: This model adopted milestone-driven assessment to obtain a timely understanding of the status of process execution. When the project was initiated, the milestones and the related deliverables were identified according to the specific requirements of the project. The assessment items were then confirmed by the project manager through the tailoring process.

An assessment plan was drafted from the collected information on the project milestones, including the assessment team, assessment time, assessment items and related question list. If the project schedule changed, the number of changes was recorded to evaluate the capability of project estimation. The latest version of the project schedule and milestones was used to update the assessment plan for each project.

Assessment execution: When a milestone was completed, the project uploaded the related documents into the specified directories in the project data asset library; for example, Microsoft SharePoint could be used as the document library for project data storage. After receiving notification from the project, the assessment team could initiate the assessment according to the assessment plan. They could inspect the indicator documents online after obtaining the related authorization and could request additional documents from the project if needed during document inspection. The inspection result was recorded for final characterization and scoring.

After the document inspection, the interview session could be performed according to the assessment plan. The project manager and key project staff were invited to attend the interview session, which was carefully scheduled to minimize the influence on the regular execution of the project. The prepared question list was used for the interview session and the synthesized result was recorded for final characterization and scoring.

Based on the results of document inspection and the interview session, the final characterization and scoring could be performed according to the rules defined in the section on assessment characterization and scoring, and the final actual score and compliance score could be calculated for each milestone of the project.

Table 4 showed the effort distribution for the execution of the assessment activities, which had been practiced in more than 10 software development projects in one small organization. From these practices and the initial results, it was observed that the lightweight assessment approach in this model was indeed suitable for small organizations to support process assessment and improvement activities.

Assessment result analysis and reporting: Since the assessment would be executed according to the requirements of project milestones, the compliance score could be obtained for each milestone to understand the status of process execution. Figure 2 showed an example of milestone compliance score distribution.

In this example, there were seven milestones in this project: initial, requirement, design, construction, system test, release and closure.

Fig. 2: An example of milestone compliance score distribution

Fig. 3: An example of compliance score distribution for assessment items

The compliance scores were 59.83, 74.38, 76.51, 76.70, 53.97, 69.62 and 72.36, respectively. From the compliance score of each milestone, it could be observed that the activities in the requirement, design and construction milestones followed the standard process better than those in the initial and system test milestones.

After obtaining the compliance score for each project, comparisons among projects could be made to identify differences in the quality of process execution. The organization’s process capability could also be identified for continuous improvement with feedback from the projects.

For example, the average compliance score could be obtained for each assessment item from the assessment result of each project. It helped to understand the strengths and weaknesses of process execution in the projects for software process improvement in the organization.
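One possible way to compute this per-item average across projects is sketched below; the data layout, project names and scores are assumptions for illustration, not data from the study.

```python
from collections import defaultdict

# results: project name -> {assessment item -> compliance score};
# the names and numbers are illustrative only.
results = {
    "project_a": {"project plan": 55.0, "requirement specification": 80.0},
    "project_b": {"project plan": 60.0, "requirement specification": 90.0},
}

sums, counts = defaultdict(float), defaultdict(int)
for scores in results.values():
    for item, cs in scores.items():
        sums[item] += cs
        counts[item] += 1

average_cs = {item: sums[item] / counts[item] for item in sums}
# e.g. {'project plan': 57.5, 'requirement specification': 85.0}
```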

Table 4: Effort distribution for assessment execution
ATM: Assessment team member, PM: Project manager

Figure 3 showed an example of compliance score distribution for each assessment item.

From the compliance score distribution in Fig. 3, it could be found that the weaknesses of process execution lay in the following activities: planning (i.e., project plan, system test plan and configuration management plan), design (i.e., UI design, functional design and database design) and release (i.e., release notes and release application form).

CONCLUSION

This study described a quantitative assessment model that could be customized to help small organizations perform effective assessment at reasonable cost. The standard assessment items could be defined according to the specific requirements of the small organization, covering both the engineering and management activities in the whole SDLC to assure the quality of the deliverables. Weights were assigned to the tailored assessment items and corresponding review activities to establish the target score of the assessment. Through lightweight document inspection and interview sessions governed by the quantitative assessment rules, a final compliance score could be obtained for each milestone of the project, which could be used to identify potential opportunities for software process improvement in the small organization.

REFERENCES

  • Capretz, L.F., 2004. A software process model for component-based development. Inform. Technol. J., 3: 176-183.


  • Fadila, A. and G. Said, 2006. A new textual description for software process modeling. Inform. Technol. J., 5: 1146-1148.


  • Akbar, R., M.F. Hassan, A. Abdullah, S. Safdar and M.A. Qureshi, 2011. Directions and advancements in global software development: a summarized review of GSD and agile methods. Res. J. Inform. Technol., 3: 69-80.


  • Khokhar, N.M., A. Mansoor, M.N. Khokhar, S.U. Rehman and A. Rauf, 2010. MECA: Software process improvement for small organizations. Proceeding of the International Conference on Information and Emerging Technologies, (ICIET'10), Karachi, pp: 1-6.


  • Changchien, S.W., J.J. Shen and T.Y. Lin, 2002. A preliminary correctness evaluation model of object-oriented software based on UML. J. Applied Sci., 2: 356-365.


  • Suula, M., T. Makinen and T. Varkoi, 2009. An approach to characterize a software process. Proceedings of the Portland International Conference on Management of Engineering and Technology, Aug. 2-6, Portland. pp: 1103-1109.


  • Kheirkhah, E., A. Deraman and Z.S. Tabatabaie, 2009. Screen-Based prototyping: A conceptual framework. Inform. Technol. J., 8: 558-564.


  • Kaur, K. and H. Singh, 2010. Candidate process models for component based software development. J. Software Eng., 4: 16-29.


  • Von Wangenheim, C.G., A. Anacleto and C.F. Salviano, 2006. Helping small companies assess software processes. IEEE Software, 23: 91-98.


  • Nasir, M.H.N.M., R. Ahmad and N.H. Hassan, 2008. An empirical study of barriers in the implementation of software process improvement project in Malaysia. J. Applied Sci., 8: 4362-4368.


  • Makinen, T. and T. Varkoi, 2008. Assessment driven process modeling for software process improvement. Proceedings of the Portland International Conference on Management of Engineering and Technology, July 27-31, Cape Town, pp: 1570-1575.


  • McCaffery, F. and G. Coleman, 2009. Lightweight SPI assessments: What is the real cost. Software Process. Improv. Pract., 14: 271-278.


  • Sunetnanta, T., N.O. Nobprapai and O. Gotel, 2009. Quantitative CMMI assessment for offshoring through the analysis of project management repositories. LNBIP., 35: 32-44.


  • Florac, A.W. and A.D. Carleton, 1999. Measuring the Software Process: Statistical Process Control for Software Process Improvement. Pearson Press Canada, pp: 250.


  • Shen, W.H., N.L. Hsueh and P.H. Chu, 2011. Measurement-based software process modeling. J. Software Eng., 5: 20-37.


  • Fazal-e-Amin, A.K. Mahmood and A. Oxley, 2011. A review of software component reusability assessment approaches. Res. J. Inform. Technol., 3: 1-11.


  • Florac, A.W. and A.D. Carleton, 2000. Statistical process control: Analyzing a space shuttle onboard software process. IEEE Software, 17: 97-106.


  • Jacob, L. and S.K. Pillai, 2003. Statistical process control to improve coding and code review. IEEE Software, 20: 50-55.


  • Jalote, P., K. Dinesh, S. Raghavan, M.R. Bhashyam and M. Ramakrishnan, 2000. Quantitative quality management through defect prediction and statistical process control. Proceedings of the 2nd World Quality Congress for Software, September 2000, Japan, pp: 48-55.


  • CMU/SEI, 2002. Process maturity profile of the software community 2001 year end update. https://acc.dau.mil/adl/en-US/31102/file/5587/23_2002mar.pdf


  • Sargut, K.U. and O. Demirors, 2006. Utilization of statistical process control (SPC) in emergent software organizations: Pitfalls and suggestions. Software Qual. J., 14: 135-157.


  • Colla, P.E. and J.M. Montagna, 2008. Framework to evaluate software process improvement in small organizations. Lecture Notes Comput. Sci., 5007: 36-50.


  • Habra, N., S. Alexandre, J.M. Desharnais, C.Y. Laporte and A. Renault, 2008. Initiating software process improvement in very small enterprises: Experience with a light assessment tool. Inf. Software Technol., 50: 763-771.


  • Tarhan, A. and O. Demirors, 2006. Investigating suitability of software process and metrics for statistical process control. Software Process. Improv., 4257: 88-99.


  • Ayca, T. and O. Demirors, 2008. Assessment of software process and metrics to support quantitative understanding. Software Process Prod. Measure., 4895: 102-113.


  • Pino, F.J., O. Pedreira, F. Garcia, M.R. Luaces and M. Piattini, 2010. Using scrum to guide the execution of software process improvement in small organizations. J. Syst. Software, 83: 1662-1677.


  • Oktaba, H., F. Garcia, M. Piattini, F. Ruiz, F.J. Pino and C. Alquicira, 2007. Software process improvement: The competisoft project. Computer, 40: 21-28.


  • Pino, F., J. Hurtado, J. Vidal, F. Garcia and M. Piattini, 2009. A process for driving process improvement in VSEs. Proceedings of the International Conference on Software Process, (ICSP'09), Vancouver, Canada, pp: 342-353.


• Hurtado, J., F. Pino, J. Vidal, C. Pardo and L. Fernandez, 2008. Agile SPI: Software Process Agile Improvement - A Colombian Approach to Software Process Improvement in Small Software Organizations. In: Software Process Improvement for Small and Medium Enterprises: Techniques and Case Studies, Oktaba, H. and M. Piattini (Eds.). Idea Group Inc., USA, pp: 177-192.


• Radice, R., 1998. Statistical process control for software projects. Proceedings of the 10th Software Engineering Process Group Conference, March 1998, Chicago, Illinois, USA.
