The Asian Science Citation Index is committed to providing authoritative, trusted and significant information through coverage of the most important and influential journals, to meet the needs of the global scientific community.
Articles by Hamidah Ibrahim
Total Records (9) for Hamidah Ibrahim
  Payam Hassany Shariat Panahy, Fatimah Sidi, Lilly Suriani Affendey, Marzanah A. Jabar, Hamidah Ibrahim and Aida Mustapha
  Improving data quality is a basic step for all companies and organizations, as it increases the opportunity to deliver high-quality services. The aim of this study was to validate and adapt an instrument for the four major data quality dimensions across different information systems. The four quality dimensions used in this study were accuracy, completeness, consistency and timeliness. A questionnaire was developed, validated and used to collect data from users of different information systems; it was administered to 50 respondents using different information systems. Inferential statistics and descriptive analysis were employed to measure and validate the factors contributing to the quality improvement process. The study was compared with related parts of previous studies and showed that the instrument is valid for measuring quality dimensions and the improvement process. Content validity, reliability and factor analysis were applied to 24 items to compute the results. The results showed that the instrument is reliable and valid, and suggest that it can serve as a foundation for organization managers to apply data quality in designing improvement processes.
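As an aside on the reliability analysis mentioned above, the following is a minimal Python sketch of computing Cronbach's alpha over questionnaire responses; the response matrix is hypothetical and this is not the study's actual analysis.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)      # variance of each item
    total_variance = responses.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items
sample = [[4, 5, 4, 4],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 3, 2, 3],
          [4, 4, 5, 4],
          [3, 4, 3, 3]]
print(round(cronbach_alpha(sample), 3))  # alpha close to 1 indicates internal consistency
```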
  Payam Hassany Shariat Panahy, Fatimah Sidi, Lilly Suriani Affendey, Marzanah A. Jabar, Hamidah Ibrahim and Aida Mustapha
  Over the past two decades, awareness of data quality dimensions and their relationships has posed new challenges to database providers. Although information systems are continuously improved to address their data problems, their success increasingly depends on the methodology used. This paper presents a methodology for measuring, analyzing and evaluating data quality dimensions using both subjective and objective measurement. Applying empirical methods and data mining techniques are the steps of this methodology for improving database quality in information systems. The applied rules and methods can be used to visualize and analyze attribute identification in databases, providing a powerful and efficient way to detect and reduce inconsistencies in the data. The methodology can also be applied to compute other measurable quality dimensions and can help information system providers make well-informed decisions when creating databases.
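To illustrate the objective side of such measurement only, here is a small, hedged Python sketch that scores completeness (share of non-missing cells) and a simple consistency rule over tabular records; the column names and the rule are assumptions, not the paper's method.

```python
# Minimal sketch of objective data-quality scoring (illustrative only).
records = [
    {"name": "Ali",  "age": 34,   "country": "MY", "birth_year": 1990},
    {"name": None,   "age": 27,   "country": "MY", "birth_year": 1997},
    {"name": "Sara", "age": None, "country": "ID", "birth_year": 1980},
]

def completeness(rows):
    """Ratio of non-missing cells to total cells."""
    cells = [v for row in rows for v in row.values()]
    return sum(v is not None for v in cells) / len(cells)

def consistency(rows, current_year=2024):
    """Share of rows whose age agrees with birth_year (hypothetical rule)."""
    checkable = [r for r in rows if r["age"] is not None and r["birth_year"] is not None]
    ok = sum(abs((current_year - r["birth_year"]) - r["age"]) <= 1 for r in checkable)
    return ok / len(checkable) if checkable else 1.0

print(f"completeness = {completeness(records):.2f}")
print(f"consistency  = {consistency(records):.2f}")
```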
  Mohammed Radi, Ali Mamat, M. Mat Deris, Hamidah Ibrahim and Subramaniam Shamala
  Replication is a well-known technique for improving reliability and performance in a Data Grid, and keeping content consistent across all distributed replicas is an important problem. Replica consistency protocols based on the classical radial propagation method suffer from high overhead at the master replica site, while the line method suffers from long delays. In a Data Grid not all replicas can be treated in the same way, since some are in greater demand than others; by updating the most-demanded replicas first, a greater number of clients can access the updated content in a shorter period of time. In this study, a scalable replica consistency protocol based on an asynchronous aggressive update propagation technique is developed to maintain replica consistency in the Data Grid, aiming at delay reduction and load balancing so that replicas with high access weights are updated faster than the others. The simulation results show that the proposed protocol is capable of delivering updates to high-access replicas quickly while reducing the total update propagation response time and achieving load balancing.
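The core idea of prioritizing high-demand replicas can be sketched as follows; the replica names, weights and the simple ordered push are hypothetical and do not reproduce the authors' protocol.

```python
import heapq

# Hypothetical replicas with access weights (higher = more client demand).
replicas = {"site_A": 120, "site_B": 45, "site_C": 300, "site_D": 10}

def propagation_order(weights):
    """Order replicas so the most-demanded ones receive the update first."""
    heap = [(-w, name) for name, w in weights.items()]
    heapq.heapify(heap)
    while heap:
        neg_w, name = heapq.heappop(heap)
        yield name, -neg_w

def push_update(update_id, weights):
    for name, w in propagation_order(weights):
        # In a real grid this would be an asynchronous send; here we just log it.
        print(f"update {update_id} -> {name} (weight {w})")

push_update("u42", replicas)
# site_C, site_A, site_B, then site_D: heavily accessed replicas are refreshed first.
```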
  Mahmoud Shaker, Hamidah Ibrahim, Aida Mustapha and Lili Nurliyana Abdullah
 

Problem statement: Nowadays, many users rely on web search engines to find and gather information, and they face an increasing amount of diverse HTML information sources. Correlating, integrating and presenting related information to users therefore becomes important. When a user queries a search engine such as Yahoo or Google for specific information, the results include not only information about the availability of the desired information, but also other pages on which the desired information is merely mentioned, and the number of returned pages is enormous. The performance capabilities, the overlap among results for the same queries and the limitations of web search engines are therefore an important and large area of research. Extracting information from web pages also becomes very important, because the massive and growing number of diverse HTML information sources available on the Internet and the variety of web pages make information extraction from the web a challenging problem.
Approach: This study proposed an approach for extracting information from HTML web pages that is able to extract relevant information from different web pages based on standard classifications.
Results: The proposed approach was evaluated through experiments on a number of web pages from different domains and achieved an increase in precision and F-measure as well as a decrease in recall.
Conclusion: The experiments demonstrated that our approach extracts the attributes, the sub-attributes that describe them, and the values of those sub-attributes from various web pages. The proposed approach was also able to extract attributes that appear under different names in some of the web pages.
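As a hedged illustration of HTML attribute-value extraction in general (not the paper's classification-based extractor), the following Python sketch pulls label/value pairs from table cells; the page structure is assumed.

```python
from html.parser import HTMLParser

# Illustrative extractor for <td>label</td><td>value</td> rows; mapping differing
# attribute names onto standard classifications, as in the paper, is not shown.
class TableExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.cells, self.in_td = [], False

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False

    def handle_data(self, data):
        if self.in_td and data.strip():
            self.cells.append(data.strip())

page = ("<table><tr><td>Processor</td><td>2.4 GHz</td></tr>"
        "<tr><td>Memory</td><td>8 GB</td></tr></table>")
p = TableExtractor()
p.feed(page)
pairs = dict(zip(p.cells[0::2], p.cells[1::2]))
print(pairs)  # {'Processor': '2.4 GHz', 'Memory': '8 GB'}
```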

  Monir Abdullah, Mohamed Othman, Hamidah Ibrahim and Shamala Subramaniam
  Problem statement: In many data grid applications, data can be decomposed into multiple independent sub-datasets and distributed for parallel execution and analysis. Approach: This property has been successfully exploited using Divisible Load Theory (DLT), which has proven to be a powerful tool for modeling divisible load problems in data-intensive grids. Results: Several scheduling models have been studied, but no optimal solution has been reached due to the heterogeneity of grids. This study proposed a new optimal load allocation based on the DLT model; recursive numerical closed-form solutions are derived to find the optimal workload assigned to each processing node. Conclusion/Recommendations: Experimental results showed that the proposed model obtains a better (almost optimal) solution than other models in terms of makespan.
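In classic DLT with negligible communication cost, the makespan is minimized when every node finishes at the same time, so each node's fraction is proportional to the inverse of its per-unit processing time. The sketch below illustrates only that simplified case; the node speeds are hypothetical, and the paper's recursive closed-form solution, which also accounts for communication, is not reproduced.

```python
# Divisible Load Theory sketch: allocate load so every node finishes at the same
# time. With no communication cost, time_i = alpha_i * total * w_i must be equal
# for all i, so alpha_i is proportional to 1 / w_i (w_i = time per unit of load).
def dlt_fractions(unit_times):
    inv = [1.0 / w for w in unit_times]
    s = sum(inv)
    return [x / s for x in inv]

unit_times = [2.0, 4.0, 8.0]             # hypothetical per-unit processing times
alphas = dlt_fractions(unit_times)
total_load = 1000                         # arbitrary divisible workload
loads = [a * total_load for a in alphas]
finish = [l * w for l, w in zip(loads, unit_times)]
print(loads)   # faster nodes get larger chunks
print(finish)  # all finish times equal -> minimal makespan
```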
  Waheed Yasin and Hamidah Ibrahim
  Problem statement: Traditional IP networks have many limitations, such as routing tables, which can be complex and time-consuming to process. These limitations affect network performance in some triple play services (i.e., voice, video and data), which are characterized as time-sensitive applications. Thus, Multi Protocol Label Switching (MPLS) technology has been proposed to speed up traffic flow in the network using labels. Approach: In this study, an experiment using the Network Simulator NS-2 was performed to evaluate the impact of MPLS technology on triple play services, based on the average throughput of the network, the total number of packets received at destination nodes and the packet loss rate, and this was compared to the performance of traditional IP networks. Results: The results showed that MPLS performs better since it utilizes all the available paths to the destinations. Conclusion: MPLS allows Internet Service Providers (ISPs) to provide better triple play services to end-users.
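Purely as an illustration of the reported metrics (average throughput, packets received, packet loss rate), the following sketch computes them from a simplified, hypothetical trace; it does not use the actual NS-2 trace format.

```python
# Illustrative metric computation from a simplified trace; the event tuples
# (event, time in seconds, packet size in bytes) are hypothetical.
trace = [
    ("send", 0.10, 1500), ("recv", 0.15, 1500),
    ("send", 0.20, 1500), ("drop", 0.22, 1500),
    ("send", 0.30, 1500), ("recv", 0.34, 1500),
]

sent     = sum(1 for e, _, _ in trace if e == "send")
received = [(t, size) for e, t, size in trace if e == "recv"]
dropped  = sum(1 for e, _, _ in trace if e == "drop")

duration = max(t for _, t, _ in trace) - min(t for _, t, _ in trace)
throughput_bps = sum(size * 8 for _, size in received) / duration
loss_rate = dropped / sent

print(f"packets received: {len(received)}")
print(f"average throughput: {throughput_bps / 1e3:.1f} kbps")
print(f"packet loss rate: {loss_rate:.2%}")
```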
  Meghdad Mirabi, Hamidah Ibrahim, Ali Mamat, Nur Izura Udzir and Leila Fathi
  Problem statement: To facilitate XML query processing, labeling schemes are used to determine the structural relationships between XML nodes. However, labeling schemes have to relabel existing nodes or recalculate label values when a new node is inserted into the XML document during the update process. The EXEL labeling scheme avoids relabeling existing nodes during updates and can compute the structural relationships between nodes effectively. However, in the case of skewed insertions, where nodes are always inserted at a fixed place, the label size of the EXEL scheme grows very fast. Approach: This study discussed how to control the growth of label size in the EXEL scheme. In addition, EXEL does not consider the process of deleting labels, so we also study how to reuse deleted labels for future insertions. Results: We proposed an algorithm which is able to control the label size increment. Conclusion: It requires less storage to store the inserted binary bit strings and thus can improve query performance.
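The idea behind such bit-string labels can be sketched as follows: between any two labels that end in '1', a new label can be generated without relabeling the neighbours, but repeated insertion at a fixed place makes the labels grow. The insertion rule shown is a common order-preserving scheme assumed for illustration, not necessarily the exact EXEL algorithm or its deleted-label reuse.

```python
# Order-preserving binary bit-string labels that end in "1": a new label can
# always be generated strictly between two existing ones without relabeling.
def label_between(left, right):
    """Return a bit string strictly between left and right (both end in '1')."""
    if len(left) >= len(right):
        return left + "1"               # e.g. between "01" and "1"   -> "011"
    return right[:-1] + "01"            # e.g. between "01" and "011" -> "0101"

print(label_between("01", "1"))         # '011'  ('01' < '011' < '1')

# Skewed insertions at a fixed position make each inserted label longer than
# the last, which is the label-size blow-up the paper sets out to control.
labels = ["01", "1"]
for _ in range(3):
    labels.insert(1, label_between(labels[0], labels[1]))
print(labels)  # ['01', '01001', '0101', '011', '1'] -- lexicographic order kept
```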
  Siti Noorasmah Hashim, Rusli Abdullah and Hamidah Ibrahim
  This study proposes a comprehensive Collaborative Knowledge Management System Strategic Planning (CKMS2P) model to assist Knowledge Management System (KMS) developers and implementers in managing and executing the development and implementation of a CKMS systematically and strategically in the organization. The model was developed based on a Systematic Literature Review (SLR), and a preliminary survey was conducted to gauge perceptions of the proposed model in support of the research objective. The data were analyzed using the Rasch Measurement Model. The results show that CKMS2P may serve CKMS developers and implementers as a guideline for developing and implementing a CKMS in their organization comprehensively and strategically.
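For reference only, the dichotomous Rasch model expresses the probability of endorsing an item as a logistic function of the gap between person ability and item difficulty; the sketch below uses hypothetical values and does not reproduce the study's analysis.

```python
import math

# Dichotomous Rasch model: probability that a respondent with ability theta
# endorses an item of difficulty beta (illustrative values only).
def rasch_probability(theta, beta):
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

print(round(rasch_probability(theta=1.0, beta=0.0), 3))   # 0.731
print(round(rasch_probability(theta=0.0, beta=1.0), 3))   # 0.269
```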
  Sharmila Mat Yusof, Fatimah Sidi, Hamidah Ibrahim and Lilly Suriani Affendey
  A Data Warehouse (DW) provides an excellent approach for transforming Online Transaction Processing (OLTP) data into useful and reliable information to support an organization's decision making. As such, it can serve as a basis for data analysis techniques such as multidimensional analysis and data mining. However, the DW design process is known to be a complex task that requires a systematic and structured approach to guarantee its success. Various methodologies have therefore been proposed to carry out the design process, which can be classified into requirement-driven, data-driven and hybrid approaches. In this study, a literature survey was conducted using a set of pertinent keywords related to DW conceptual design. The objective of the survey is to present the state of the art of DW conceptual design methodologies in narrative and summarized form. The main contribution is to provide an understanding of the trends, issues and solutions proposed to date in DW conceptual design and, along the way, to identify the significant contributions that form an important basis for DW conceptual design.
 
 
 