Asian Science Citation Index is committed to providing authoritative, trusted, and significant information through coverage of the most important and influential journals, to meet the needs of the global scientific community.
ASCI Database
308-Lasani Town,
Sargodha Road,
Faisalabad, Pakistan
Fax: +92-41-8815544
Contact Via Web
Suggest a Journal
Articles by G. Balakrishnan
Total Records ( 3 ) for G. Balakrishnan
  S. Lokesh and G. Balakrishnan
  One of the drawbacks of speech-based Spoken Dialogue Systems is the brittleness of Automatic Speech Recognition (ASR). ASR systems often misinterpret user input and are unreliable when it comes to judging recognition failures and estimating their own performance within an interaction system. Humans outperform ASR systems on most tasks related to speech understanding. One reason is that humans draw on far more knowledge; for example, they appear to take a variety of knowledge-based aspects of the current dialogue into account when processing speech. The main purpose of this study is to investigate whether speech recognition can also benefit from higher-level knowledge sources and dialogue context when used in human-computer interaction. This review provides insight into which types of knowledge sources in spoken dialogue systems could contribute to the task of ASR and how such knowledge can be represented computationally. The survey also illustrates the difficulties that arise when using speech in dialogue systems, many of which have been encountered in the experiments.
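One common way dialogue context is brought to bear on ASR is by rescoring the recognizer's N-best hypothesis list with a context-dependent score. The sketch below is illustrative only and is not from the paper; the hypotheses, acoustic scores, context words, and the `weight` parameter are all invented for the example.

```python
# Illustrative sketch (not the authors' method): rescore an N-best ASR
# hypothesis list with a bonus for words the dialogue context makes likely.

def rescore(nbest, context_words, weight=0.5):
    """Pick the hypothesis with the best acoustic score plus context bonus.

    nbest: list of (text, acoustic_log_score) pairs from the recognizer.
    context_words: set of words the current dialogue state makes probable.
    """
    def combined(hyp):
        text, acoustic = hyp
        bonus = sum(1 for w in text.split() if w in context_words)
        return acoustic + weight * bonus
    return max(nbest, key=combined)

# After the system asks "Which city are you flying to?", city and travel
# words are expected, so the contextually plausible hypothesis wins even
# though its raw acoustic score is slightly worse.
nbest = [("recognize beach", -4.0), ("fly to boston", -4.2)]
best = rescore(nbest, {"boston", "city", "fly"})
# best == ("fly to boston", -4.2): -4.2 + 0.5*2 = -3.2 beats -4.0 + 0
```

In a real system the bonus would come from a dialogue-state-conditioned language model rather than a keyword set, but the rescoring structure is the same.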
  G. Balakrishnan and M. Indra Devi
  The growth of technical and Internet information databases has produced huge, complex, and heterogeneous collections of data. These real-world databases contain a variety of relations and attributes. Query forms are predefined forms able to satisfy various informal user queries on relational databases. The proposed work presents DQFHD, a novel database query form interface that dynamically generates query forms for heterogeneous databases. The aim of DQFHD is to capture a user's preferences and support the user in reaching a conclusion. Query form generation is an iterative process that the user can refine by modifying the attributes of particular databases. At each retrieval step, the system automatically generates a ranked list of form components, and the user adds the preferred components to the query form. The ranking of form components is based on the captured user preferences. The user can also fill in the query form and submit queries to view the results; in this way, a query form can be dynamically refined until the user is satisfied with the query results. The system is designed to support query forms over various databases, including relational and XML databases. A probabilistic model is developed to estimate the goodness of a query form in DQFHD.
  S. Lokesh and G. Balakrishnan
  The goal of the proposed study is robust speech feature prediction using mel-LPC to improve the performance of speech recognition in adverse conditions, comparing its performance with that of standard LPC and MFCC through an English dictation system with 14,000 isolated words and 9,000 connected words. The mel-LPC features are estimated with an optimal frequency warping factor derived from the auto-correlation coefficients, which are computed as the inverse Fourier transform of the power spectrum to generate the feature vector. Feature extraction yields a sequence of 18 mel-LPC coefficients that characterize the time-varying spectral properties of the speech signal; these continuous vectors are mapped to discrete vectors in a vector quantization codebook. The system is trained on 10 male and 10 female speakers and tested with 200 speakers in noisy and clean environments. Experimental results for various tasks show that the new mel-LPC feature vector attains isolated- and connected-word accuracies of 97.5 and 93.2% for male speakers and 96.6 and 92.3% for female speakers with large vocabularies. Recognition accuracy is notably higher than with LPC and MFCC.
Copyright   |   Disclaimer   |   Privacy Policy   |   Browsers   |   Accessibility