Journal of Applied Sciences

Year: 2013 | Volume: 13 | Issue: 1 | Page No.: 133-139
DOI: 10.3923/jas.2013.133.139
Divide and Conquer Approach in Reducing ANN Training Time for Small and Large Data
Mumtazimah Mohamad, Md Yazid Mohd Saman and Muhammad Suzuri Hitam

Abstract: Artificial Neural Networks (ANNs) are able to simplify recognition tasks and have steadily improved in both accuracy and efficiency. The classical ANN, as a universal approximator, has proven to be a versatile and flexible method compared with modern, high-end algorithms. However, several issues must be addressed when constructing an ANN for handling large-scale data, particularly the low accuracy such networks often achieve. Parallelism is a practical way to solve such large-workload problems, but a comprehensive understanding is needed to build scalable neural networks that achieve an optimal training time on a large network. This study proposes several strategies for distributing data across several network processor structures to reduce the time required for recognition tasks without compromising accuracy. Initial results indicate that the proposed strategies improve the speedup of large-scale neural networks while maintaining accuracy.

How to cite this article
Mumtazimah Mohamad, Md Yazid Mohd Saman and Muhammad Suzuri Hitam, 2013. Divide and Conquer Approach in Reducing ANN Training Time for Small and Large Data. Journal of Applied Sciences, 13: 133-139.

© Science Alert. All Rights Reserved