The Asian Science Citation Index is committed to providing authoritative, trusted and significant information through coverage of the most important and influential journals, to meet the needs of the global scientific community.
ASCI Database
308-Lasani Town,
Sargodha Road,
Faisalabad, Pakistan
Fax: +92-41-8815544
 
Articles by S.P. Rajagopalan
Total Records (18) for S.P. Rajagopalan
  B. Venkata Raju and S.P. Rajagopalan
  This research is concerned with the adoption of a collective stance in which the human species is viewed as a single organism recursively partitioned in space and time into sub-organisms that are similar to the whole. These parts include societies, organizations, groups, individuals, roles and neurological functions. The concept of expertise arises because the organism adapts as a whole through adaptation of its interacting parts. The mechanism is one of positive feedback, in which parts of the organism allocate resources for action to other parts on the basis of those latter parts' past performance of similar activities. The knowledge-level phenomena of expertise, such as meaning and its representation in language and overt knowledge, arise as byproducts of the communication, coordination and modeling processes associated with the basic exchange-theoretic behavioral model. The model is linked to existing analyses of human action and knowledge in biology, psychology, sociology and philosophy.
  Vidyaathulasiraman and S.P. Rajagopalan
  This study proposes a technique for using biometric user authentication in a simple ECMA authentication model proposed for a heterogeneous environment in an Online Database Transaction Processing (OLTP) system. An elaborated view of the ECMA authentication model is discussed, and the significance of using biometric authentication is brought out. An algorithm is also proposed in this study, which is expected to assure the security of high-level resources where there is not much physical security. We conclude that the proposed technique would pose a challenge to hackers.
  L. Josephine Mary and S.P. Rajagopalan
  Multi-party signing and encapsulation of messages (transactions/documents) must support multiple asynchronous signers and multiple recipients. Mechanisms and protocols are therefore needed to create and distribute protected transactions/documents between participants, with flexible verification of such transactions/documents. The main objective of our new model of a multi-party transaction protocol is to support the secure creation, distribution and verification of transactions, with different legal assurance levels of signatures, in multi-party (sequential and parallel) processing between multiple signers and verifiers. It may therefore be used to protect various electronic transactions or documents exchanged between multiple participants.
  L.J. Mary and S.P. Rajagopalan
  Transaction protocols and network applications are rapidly changing from a simple client-server model towards multi-party, multi-purpose, multi-device models. The main objective of the Multi-Party Security System (Multi-PaSS) model is to provide security services of three kinds: a security infrastructure supporting secure transactions between several parties in an open global environment, with registration, certification, distributed remote authentication and trusted security administration; secure multi-party protocols for protecting transactions in transfer and storage, authenticity delegation, authorization forwarding, etc.; and security extensions of various network applications suitable for multi-party transactions, providing user authentication, cryptography, certification functions, message protection, etc. The structure of Multi-PaSS may be shown in the form of three layers. The bottom layer, called the security platform, contains cryptographic modules and security tokens, such as ATM cards. The middle layer comprises a collection of security proxies (Multi-PaSS proxies), which can securely communicate with each other, performing security operations in a multi-party environment. The top layer, the security infrastructure, comprises multiple cross-certified Public Key Infrastructures (multi-PKI domains) and other supporting servers. All participants of a transaction must communicate through Multi-PaSS proxy servers, which handle authentication and all transactions between them. This authentication is performed using sequential and parallel concepts. Since MPT protocol messages contain corresponding timestamps, verifiers can verify both creation time and verification time. Multiple signatures together with timestamps provide authenticity, integrity and non-repudiation security services for multi-party transactions.
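  The layering of timestamped signatures described above can be sketched in miniature. The following Python sketch is an illustration, not the Multi-PaSS implementation: HMACs over shared keys stand in for PKI signatures, and the proxy infrastructure is omitted. Each signer covers the document, all prior signatures and a timestamp, so a verifier can confirm both order and integrity:

    import hashlib, hmac, time

    def sign(key: bytes, payload: bytes) -> bytes:
        # An HMAC stands in for a PKI signature in this sketch.
        return hmac.new(key, payload, hashlib.sha256).digest()

    def sequential_sign(document: bytes, signers: list) -> list:
        # Each signer covers the document, every prior signature and a timestamp.
        chain = []
        for name, key in signers:
            ts = str(time.time())
            payload = document + repr(chain).encode() + ts.encode()
            chain.append((name, ts, sign(key, payload)))
        return chain

    def verify_chain(document: bytes, chain: list, keys: dict) -> bool:
        seen = []
        for name, ts, sig in chain:
            payload = document + repr(seen).encode() + ts.encode()
            if not hmac.compare_digest(sig, sign(keys[name], payload)):
                return False
            seen.append((name, ts, sig))
        return True

    doc = b"transfer 100 units from A to B"
    signers = [("alice", b"key-a"), ("bob", b"key-b")]
    chain = sequential_sign(doc, signers)
    print(verify_chain(doc, chain, dict(signers)))               # True
    print(verify_chain(b"tampered doc", chain, dict(signers)))   # False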
  K. Satyanarayana and S.P. Rajagopalan
  In this study, we discuss the process of developing a Recommender System for Educational Institutions (RSEI). We consider the application domain of educational institutions and examine the algorithms and architectures pertaining to recommender systems. We discuss the dependencies and present a methodology for developing RSEI, which can be applied at very early stages of RSEI development. We consider the economic factors that affect the design of RSEI based on the cost and availability of information. We also discuss the common approaches available for the development of recommender systems and focus them on RSEI.
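  As an illustration of the kind of algorithm such a system might build on, here is a minimal user-based collaborative filtering sketch in Python. The rating data and the course domain are invented, and cosine similarity is a generic choice rather than the methodology of the study:

    import math

    # Hypothetical student -> {course: rating} data, not from the study.
    ratings = {
        "s1": {"algebra": 5, "physics": 3, "chemistry": 4},
        "s2": {"algebra": 4, "physics": 5},
        "s3": {"physics": 4, "chemistry": 5, "biology": 4},
    }

    def cosine(u: dict, v: dict) -> float:
        num = sum(u[i] * v[i] for i in set(u) & set(v))
        den = (math.sqrt(sum(x * x for x in u.values()))
               * math.sqrt(sum(x * x for x in v.values())))
        return num / den if den else 0.0

    def recommend(target: str, k: int = 2) -> list:
        # Score items the target has not rated by similarity-weighted ratings.
        scores = {}
        for other, r in ratings.items():
            if other == target:
                continue
            sim = cosine(ratings[target], r)
            for item, val in r.items():
                if item not in ratings[target]:
                    scores[item] = scores.get(item, 0.0) + sim * val
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend("s2"))   # courses rated highly by similar students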
  M. Sundara Rajan and S.P. Rajagopalan
  We propose a mixed language query disambiguation approach using co-occurrence information from monolingual data only. A mixed language query consists of words in a primary language and a secondary language. Our method translates the query into monolingual queries in either language. Two novel features for disambiguation, namely contextual word voting and 1-best contextual word, are introduced and compared to a baseline feature, the nearest neighbor. Average query translation accuracy for the two features improved considerably compared to the baseline accuracy.
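  The contextual word voting feature can be illustrated with a toy co-occurrence table. The candidate translations and the counts below are invented; the sketch only shows the voting mechanism, where each context word votes for the candidate it co-occurs with most often in monolingual data:

    # Toy monolingual co-occurrence counts in the primary language;
    # the candidates and counts are invented for illustration.
    cooccur = {
        ("bank", "river"): 20, ("bank", "money"): 30, ("bank", "water"): 2,
        ("shore", "river"): 25, ("shore", "money"): 0, ("shore", "water"): 15,
    }

    def vote(candidates, context_words):
        # Each context word votes for the candidate translation it
        # co-occurs with most often; the majority wins.
        votes = {c: 0 for c in candidates}
        for w in context_words:
            best = max(candidates, key=lambda c: cooccur.get((c, w), 0))
            votes[best] += 1
        return max(votes, key=votes.get)

    # Disambiguate a secondary-language word whose candidate
    # translations are "bank" and "shore", given the query context.
    print(vote(["bank", "shore"], ["river", "water", "money"]))   # shore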
  K. Shyamala and S.P. Rajagopalan
  Since the introduction of association rules, many algorithms have been developed to perform the computationally very intensive task of association rule mining. In recent years, research has tended to concentrate on developing algorithms for specialized tasks, for example, mining optimized rules or incrementally updating rule sets. The classic problem of association rules deals with the efficient generation of association rules with respect to minimum support and minimum confidence, but the performance problem concerning this task is still not adequately solved. In this study, a theoretical model of an algorithm is presented which generates the set of essential rules directly, without generating the entire set of association rules. A set of pruning rules is formed and applied in the design of the algorithm for generating the essential set of rules. The set of essential rules is the set of predictive class association rules. The efficiency of the proposed algorithm is analyzed theoretically. The algorithm avoids redundant computation, and the time required for generating the essential set of rules is substantially reduced.
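  As a generic illustration of rule pruning (not the authors' algorithm), the sketch below keeps a rule X -> Y only if no more general rule with the same consequent already reaches the same or higher confidence; the transactions and the min_conf threshold are invented:

    from itertools import combinations

    # Toy transactions; the redundancy test below is a generic
    # illustration of pruning, not the algorithm of the study.
    transactions = [
        {"milk", "bread"}, {"milk", "bread", "butter"},
        {"bread", "butter"}, {"milk", "butter"}, {"milk", "bread"},
    ]

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    def confidence(ante, cons):
        return support(ante | cons) / support(ante)

    def essential_rules(items, cons, min_conf=0.5):
        # Keep X -> cons only if no proper subset of X already reaches
        # the same or higher confidence (i.e., prune redundant rules).
        kept = []
        for r in range(1, len(items) + 1):
            for ante in map(frozenset, combinations(items, r)):
                if cons <= ante or support(ante) == 0:
                    continue
                conf = confidence(ante, cons)
                if conf < min_conf:
                    continue
                if any(support(frozenset(s)) > 0
                       and confidence(frozenset(s), cons) >= conf
                       for k in range(1, r)
                       for s in combinations(ante, k)):
                    continue
                kept.append((set(ante), set(cons), round(conf, 2)))
        return kept

    print(essential_rules({"milk", "bread", "butter"}, frozenset({"butter"})))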
  K. Shyamala and S.P. Rajagopalan
  Compared to traditional analytical studies, which are often hindsight-oriented and aggregate, data mining is forward-looking and oriented to individual students. This study presents work on data mining for predicting student dropout. It applies the decision tree technique, together with clustering analysis, to choose the best prediction. The list of students predicted as likely to drop out of college is then turned over to teachers and management for direct or indirect intervention.
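  A minimal sketch of dropout prediction with a decision tree, assuming scikit-learn; the features (attendance percentage, grade average, fee arrears) and the tiny training set are invented for illustration:

    # Requires scikit-learn; the features and training set are invented.
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = [
        [95, 8.1, 0], [60, 4.5, 1], [88, 7.2, 0],
        [55, 5.0, 1], [70, 6.0, 0], [40, 3.8, 1],
    ]
    y = [0, 1, 0, 1, 0, 1]   # 1 = dropped out

    clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(clf, feature_names=["attendance", "gpa", "arrears"]))

    # Flag an at-risk student for direct or indirect intervention.
    print(clf.predict([[58, 4.9, 1]]))   # expected: [1]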
  R. Radha and S.P. Rajagopalan
  A data mining technique usually generates a large number of patterns and rules. However, most of these patterns are not interesting from a user's point of view. Beneficial and interesting rules should be selected from among the generated rules; this selection process is what we may call a second level of data mining. To prevent the user from being overwhelmed with rules, techniques are needed to analyze and rank them based on their degree of interestingness. There are two aspects of rule interestingness: objective and subjective. In this study, we concentrate on both the subjective and objective measures of interestingness. The generic problem of finding the interesting ones among the generated rules is addressed and a new mathematical measure of interestingness is explained. We use fuzzy linguistic terms for the attributes, so that the semantics of the rules are improved by introducing imprecise terms in both the antecedent and the consequent, as such terms are the most commonly used in human conversation and reasoning. The terms are modeled by means of fuzzy sets defined over the appropriate domains, and the mining task is performed on the fuzzy data. These fuzzy association rules are more informative than rules relating precise values.
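  The fuzzification step can be sketched with triangular membership functions. The 'income' attribute and its term definitions below are invented; the sketch shows how a crisp value maps to degrees of membership in linguistic terms:

    def triangular(a, b, c):
        # Triangular fuzzy set with peak at b over the interval [a, c].
        def mu(x):
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
        return mu

    # Hypothetical linguistic terms for an 'income' attribute (thousands).
    income = {
        "low": triangular(0, 15, 30),
        "medium": triangular(20, 40, 60),
        "high": triangular(50, 80, 120),
    }

    # Fuzzify a crisp value: degrees to which income = 35 is low/medium/high.
    print({term: round(mu(35), 2) for term, mu in income.items()})
    # {'low': 0.0, 'medium': 0.75, 'high': 0.0}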
  Vidyaathulasiraman and S.P. Rajagopalan
  This study provides a detailed review of the generation of the minimal cutsets and minimal pathsets between any single pair of terminals, and for multi-terminal pairs, of an s-t network. A comparative study is made between the first technique, which generates minimal cutsets by the node removal method over the given s-t network, and the second technique, which generates minimal cutsets from minimal pathsets by applying inter-conversion with minimization over the given s-t network. The study concludes that the second technique is comparatively more advantageous because, in the calculation of network reliability, both the minimal pathsets and the minimal cutsets play a vital role. The first technique generates only minimal cutsets, so the minimal pathsets must be generated separately; this is not so with the second technique, which uses inter-conversion and minimization to convert minimal pathsets to minimal cutsets for a given s-t network. The second technique generates the minimal pathsets of the given s-t network based on a decomposition of the network; decomposition is performed so that, if two or more sub-networks are homogeneous, the minimal pathsets need not be calculated for each sub-network separately. Thus the time required to generate the minimal pathsets, and from them the minimal cutsets, is reduced, making the second technique more efficient for large and complex networks.
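  The inter-conversion with minimization step can be sketched directly: pick one edge from every minimal pathset, then discard any candidate cut that properly contains another. The bridge network below is a standard textbook toy, not one from the study:

    from itertools import product

    # Minimal pathsets of the classic bridge s-t network (edges a..e).
    pathsets = [{"a", "c"}, {"b", "d"}, {"a", "e", "d"}, {"b", "e", "c"}]

    def paths_to_cuts(pathsets):
        # Inter-conversion: choose one edge from every pathset, then
        # minimise by discarding any cut that contains a smaller cut.
        candidates = {frozenset(pick) for pick in product(*pathsets)}
        return [set(c) for c in candidates
                if not any(o < c for o in candidates)]

    for cut in sorted(paths_to_cuts(pathsets), key=sorted):
        print(cut)   # {a,b}, {a,d,e}, {b,c,e}, {c,d}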
  S. Jayalakshmi and S.P. Rajagopalan
  In a gene expression data matrix a bicluster is a submatrix of genes and conditions that exhibits a high correlation of expression activity across both rows and columns. The problem of locating the most significant bicluster has been shown to be NP-complete. Heuristic approaches such as Cheng and Church’s greedy node deletion algorithm have been previously employed. It is to be expected that stochastic search techniques such as evolutionary algorithms or simulated annealing might improve upon such greedy techniques. In this study we show that an approach based on modified simulated annealing is well suited to this problem and we present a comparative evaluation of simulated annealing and node deletion. We show that modified simulated annealing discovers more significant biclusters.
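  A compact sketch of simulated annealing on Cheng and Church's mean squared residue, assuming a toy matrix with a planted coherent bicluster; the move set (flipping one row or column in or out) and the geometric cooling schedule are simplifying choices, not necessarily the modified variant of the study:

    import math, random

    random.seed(1)
    n = 8
    # Toy expression matrix with a planted coherent 4x4 bicluster.
    M = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    for i in range(4):
        for j in range(4):
            M[i][j] = i + j + random.gauss(0, 0.05)

    def msr(rows, cols):
        # Cheng-Church mean squared residue of the submatrix.
        aJ = {i: sum(M[i][j] for j in cols) / len(cols) for i in rows}
        aI = {j: sum(M[i][j] for i in rows) / len(rows) for j in cols}
        aIJ = sum(aJ.values()) / len(rows)
        return sum((M[i][j] - aJ[i] - aI[j] + aIJ) ** 2
                   for i in rows for j in cols) / (len(rows) * len(cols))

    rows, cols = set(range(n)), set(range(n))
    cost, T = msr(rows, cols), 1.0
    while T > 1e-3:
        side = random.choice([rows, cols])
        trial = side ^ {random.randrange(n)}   # flip one row/col in or out
        if len(trial) >= 2:
            new = msr(trial if side is rows else rows,
                      trial if side is cols else cols)
            # Metropolis rule: accept improvements, sometimes worse moves.
            if new < cost or random.random() < math.exp((cost - new) / T):
                side.clear(); side.update(trial); cost = new
        T *= 0.995                             # geometric cooling

    print(sorted(rows), sorted(cols), round(cost, 4))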
  V.N. Rajavarman and S.P. Rajagopalan
  Genetic algorithms are among the approaches commonly used in data mining. In this study, we apply a genetic algorithm approach to classification problems. Binary coding is adopted, in which an individual in a population consists of a fixed number of rules that stand for a solution candidate. The evaluation function considers four important factors: error rate, entropy measure, rule consistency and hole ratio. Adaptive asymmetric mutation is applied through self-adaptation of the mutation inversion probabilities for 1-to-0 and 0-to-1 flips. The generated rules are not disjoint but can overlap. The final conclusion for prediction is based on the voting of rules, and the classifier gives all rules equal weight in their votes. Using three databases, we compared our approach with several other traditional data mining techniques, including decision trees, neural networks and naive Bayes learning. The results show that our approach outperformed the others on both prediction accuracy and standard deviation.
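  One plausible reading of adaptive asymmetric mutation is sketched below: separate 1-to-0 and 0-to-1 flip probabilities, adapted so that the expected bit density of a good individual is preserved. The adaptation rule and the base rate are assumptions made for illustration:

    import random

    random.seed(0)

    def asymmetric_mutate(bits, p10, p01):
        # Flip 1 -> 0 with probability p10 and 0 -> 1 with probability p01.
        return [b ^ (random.random() < (p10 if b else p01)) for b in bits]

    def adapt_rates(best, base=0.05):
        # Choose rates so the expected bit density of a good individual
        # is preserved: density * p10 == (1 - density) * p01.
        d = sum(best) / len(best)
        return 2 * base * (1 - d), 2 * base * d   # p10, p01

    best = [1, 1, 1, 0, 1, 1, 0, 1]     # a fit rule-set chromosome
    p10, p01 = adapt_rates(best)
    print(p10, p01)                     # 0.025 0.075
    print(asymmetric_mutate(best, p10, p01))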
  A. Iyem Perumal and S.P. Rajagopalan
  Simulated Annealing (SA) is a powerful stochastic search method applicable to a wide range of problems. It can produce very high quality solutions for hard combinatorial optimization problems. SA can be generalized to fit the non-convex cost functions arising in a variety of problems, a generalization known as Boltzmann Annealing (BA). Simulated Quenching (SQ) and Fast Annealing (FA) are described in order to highlight the importance of Adaptive Simulated Annealing (ASA). ASA is a global optimization algorithm based on an associated proof that the parameter space can be sampled much more efficiently than by the previous simulated annealing algorithms.
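  For concreteness, the cooling schedules usually associated with these variants can be compared numerically. The constants below are illustrative; the schedule forms (logarithmic for BA, reciprocal for FA, stretched-exponential for ASA) follow the ASA literature:

    import math

    T0, D, c = 10.0, 4, 1.0   # initial temperature, dimensions, control

    def boltzmann(k):          # BA: T(k) = T0 / ln k
        return T0 / math.log(k + 2)

    def fast_cauchy(k):        # FA: T(k) = T0 / k
        return T0 / (k + 1)

    def asa(k):                # ASA: T(k) = T0 * exp(-c * k**(1/D))
        return T0 * math.exp(-c * k ** (1.0 / D))

    for k in (1, 10, 100, 10000):
        print(k, round(boltzmann(k), 3), round(fast_cauchy(k), 4),
              round(asa(k), 4))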
  K. Satyanarayana and S.P. Rajagopalan
  Recommender systems are increasingly needed in order to exploit the huge amount of information available on the Internet. A recommender system makes automatic predictions with regard to the interest expressed by the user. Shop owners do not give the same importance to all products, since these products do not possess the same value: any shop owner will prefer products that give him/her higher profits and carry high product value. In this study, we discuss the fuzzy classification of product values, the use of linguistic variables and a method of associating fuzzy classification with a recommender system.
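  A short sketch of how a fuzzy product-value classification might bias recommendations: a linguistic term 'high value' over profit margin reweights predicted user interest. The margins, memberships and weighting scheme are invented for illustration:

    def high_value(margin):
        # Membership of the linguistic term 'high value' over profit margin.
        return min(max((margin - 0.1) / 0.3, 0.0), 1.0)

    # product: (predicted user interest, profit margin); invented values.
    products = {"pen": (0.9, 0.05), "watch": (0.6, 0.35), "ring": (0.5, 0.45)}

    scored = {p: interest * (0.5 + 0.5 * high_value(m))
              for p, (interest, m) in products.items()}
    print(max(scored, key=scored.get))   # 'watch': balances interest and value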
  M. Sundara Rajan and S.P. Rajagopalan
  Medical NLP systems, generally designed to analyze medical texts for decision support or indexing purposes, have to deal with ambiguities in language. Information Retrieval (IR) in the medical domain often ignores word senses in document relevance calculations, largely because word sense disambiguation is not an easy task. However, in IR some of the strictest requirements of the traditional approach to detailed disambiguation can be relaxed. This study discusses a successfully implemented approach to using word senses in IR tasks, which can be combined with various IR methods for successful use in the medical domain.
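  A relaxed, coarse-grained disambiguation can be as simple as gloss overlap in the style of the Lesk algorithm; this generic method and the invented senses below are an illustration, not the study's system:

    # Coarse disambiguation by gloss overlap (Lesk-style); the senses
    # and glosses are invented for illustration.
    senses = {
        "cold(temperature)": {"low", "temperature", "chill", "weather"},
        "cold(illness)": {"viral", "infection", "cough", "nasal", "symptom"},
    }

    def disambiguate(context_words):
        # Pick the sense whose gloss shares the most words with the context.
        ctx = set(context_words)
        return max(senses, key=lambda s: len(senses[s] & ctx))

    query = "patient presents cough and nasal symptom".split()
    print(disambiguate(query))   # cold(illness)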
  A. Iyemperumal and S.P. Rajagopalan
  In this study, we analyse alternatives for the parallelization of the simulated annealing algorithm applied to the placement of modules in a VLSI circuit, considering the use of PVM on an Ethernet cluster of workstations. It is shown that different parallelization approaches have to be used for high and low temperature values of the annealing process. The algorithm used for low temperatures is an adaptive version of the speculative algorithm; within this adaptive algorithm, the number of processors allocated to the solution of the placement problem and the number of moves evaluated per processor between synchronization points change with the temperature. At high temperatures, an algorithm based on the parallel evaluation of independent chains of moves has been adopted. It is shown that results of the same quality as those produced by the serial version can be obtained when shorter chains are used in the parallel implementations.
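  The high-temperature scheme, parallel evaluation of independent chains, can be sketched with Python's multiprocessing in place of PVM; the toy cost function below stands in for the real placement cost:

    import math, random
    from multiprocessing import Pool

    def cost(x):
        # Toy cost function standing in for VLSI placement cost.
        return (x - 3.0) ** 2 + math.sin(5 * x)

    def run_chain(args):
        # One independent Metropolis chain of fixed length.
        seed, x, T, length = args
        rng = random.Random(seed)
        e = cost(x)
        for _ in range(length):
            cand = x + rng.gauss(0, 1)
            de = cost(cand) - e
            if de < 0 or rng.random() < math.exp(-de / T):
                x, e = cand, e + de
        return x, e

    if __name__ == "__main__":
        T, length = 5.0, 200              # high temperature, short chains
        jobs = [(seed, 0.0, T, length) for seed in range(4)]
        with Pool(4) as pool:             # one chain per worker
            results = pool.map(run_chain, jobs)
        print(min(results, key=lambda r: r[1]))   # best end state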
  Vidyaathulasiraman and S.P. Rajagopalan
  This study proposes the use of fingerprints as the only biometric user authentication in a simple ECMA authentication model proposed for a heterogeneous environment in an Online Database Transaction Processing (OLTP) system. An algorithm is proposed which uses fingerprint scanning as the primary tool for biometric user authentication and is expected to assure the security of high-level resources where there is not much physical security. The study presents the existing authentication models and an analysis is made to justify the fingerprint model as the best suited among them for OLTP systems in a heterogeneous environment.
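  To show where the fingerprint check sits in the login flow, here is a deliberately simplified Python sketch. Real matchers compare minutiae structurally rather than by a toy feature-vector similarity, and the enrolled template, threshold and features are all invented:

    def similarity(template, probe):
        # Toy feature-vector similarity in [0, 1]; not a real matcher.
        matches = sum(1 for a, b in zip(template, probe) if abs(a - b) <= 2)
        return matches / len(template)

    enrolled = {"user1": [12, 40, 7, 55, 23, 9]}   # stored feature vector

    def authenticate(user, probe, threshold=0.8):
        template = enrolled.get(user)
        if template is None:
            return False
        # Grant an OLTP session only if the scan clears the threshold.
        return similarity(template, probe) >= threshold

    print(authenticate("user1", [13, 39, 7, 56, 22, 9]))   # True
    print(authenticate("user1", [90, 1, 50, 2, 80, 60]))   # False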
  D. Pugazhenthi and S.P. Rajagopalan
  Access to the complete human genome sequence, as well as to the complete sequences of pathogenic organisms, provides information that can result in an avalanche of therapeutic targets. Structure-based design is one of the first techniques to be used in drug design. Drug design is the approach of finding drugs by design, based on their biological targets; structure-based design refers specifically to finding and complementing the three-dimensional (3D) structure (binding and/or active site) of a target molecule such as a receptor protein. The aim of this review is to give an outline of studies in the field of structure-based drug design that have helped in the discovery of new drugs. The emphasis is on comparative homology modeling.
 
 
 