
Journal of Artificial Intelligence

Year: 2014 | Volume: 7 | Issue: 1 | Page No.: 1-12
DOI: 10.3923/jai.2014.1.12
Development of Contactless Integrated Interface of Complex Production Lines
Andrey Ostroukh, Tatiana Morozova, Viacheslav Nikonov, Irina Ivanova, Konstantin Sumkin and Dmitry Akimov

Abstract: The purpose of this work is to improve the quality of the design and organization of operation of production lines through the creation and implementation of a contactless integrated interface. The study presents an analysis of computer vision methods and means. The distinctive features of the methods used in the development and their differences from existing analogues are revealed. To identify and determine the direction of the gaze, the authors developed an algorithm for a contactless control system based on the following models: a model of the appearance of the eye area and an evaluation of gaze direction. A method of gaze tracking based on fixing the position of the eyeball is proposed. For the application of computer vision systems, an algorithm comprising a smooth-tracking subsystem and a subsystem of saccades was developed. We developed intelligent methods to track eye position under various head positions. These methods allow an operator of a flaw-detection control room to control a production process without using their hands.



Keywords: Haar wavelet, fuzzy neural network, stereo vision, Procrustean analysis, contactless integrated interface and intelligent core

INTRODUCTION

The necessity of control automation has led to the emergence of a wide class of human-machine automatic control systems for various purposes, without which it is impossible to imagine modern industrial production. Further improvement of the organizational and technical flexibility of controlling various technological processes when solving specific problems is associated with the growing role of humans in modern systems, due to the intellectualization of Automated Control Systems of Technological Processes (ACS TP) and the growing complexity of tasks (Morozova et al., 2013a; Afanasev and Brovkin, 1999; Vladimirovich and Borisovich, 2013; Vladimirovich et al., 2012a, b, 2013a, b, 2014; Vladimirovich and Evgenievna, 2011; Vladimirovich and Vladimorov, 2012; Ismoilov et al., 2013).

There are a number of professions that place specific requirements on their employees, the "operators". First of all, this is the ability to process large amounts of polyvariant information from diverse sources through independent channels, to make instant decisions on the basis of the received data and to perform a significant number of operations (mainly of a manipulative nature) with electronic equipment (a computer) in a very short period of time.

The development and implementation of a modern ACS requires organizing the interaction between the human and the electronic machine. The structure of interaction between the operator and the computer for solving operational management tasks can be quite flexible, depending on the complexity of the tasks, the qualification of the operator and the level of automation.

The main characteristics of the operator are correctness, performance, accuracy and reliability. The measure of the operator's performance is time, which, together with similar indicators of the program-technical part, determines the performance of the whole human-machine system (Morozova et al., 2013a; Afanasev and Brovkin, 1999).

The efficiency of the entire system depends on how human participation in the management process is organized. This study addresses the scientific and technical objectives of developing models, methods and tools for creating a contactless interface for the control station operator on a continuous production line, building directly on the research of leading scientists.

Multilevel systems are often used on conveyor lines. The operator bears a huge information overload and has to monitor information on multiple displays with several dialog boxes. It is possible to pass part of the management actions to an eye-controlled system, including switching between windows and management of information flows.

However, there is a lack of smart systems that integrate diverse information about the environment in real time. These problems could be solved using universal contactless interfaces that require neither hand-operated input devices nor any contact elements.

SIGHT IDENTIFICATION MODEL

The sight identification algorithm consists of two parts: training of the active model of eye area appearance and using the active model of eye area appearance with evaluation of gaze direction.

Training and application of the active appearance model is carried out on the example of face images in order to manage special software and hardware systems (Morozova et al., 2013b; Sumkin et al., 2012; Amit and Geman, 1999).

The learning procedure of active appearance models begins with the normalization of all shapes in order to compensate for differences in scale, skew and offset (Brunelli and Poggiot, 1997), using the so-called generalized Procrustean analysis. A set of labels before and after normalization is shown in Fig. 1 (Stegmann, 2002).
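The following is a minimal sketch of generalized Procrustean normalization of landmark shapes, assuming a similarity alignment (translation, scale, rotation). It is an illustration of the technique, not the authors' implementation.

```python
# Sketch of generalized Procrustean normalization of landmark shapes.
import numpy as np

def align_to_reference(shape, ref):
    """Similarity-align one shape (N x 2 landmarks) to a reference shape."""
    s = shape - shape.mean(axis=0)          # remove translation
    r = ref - ref.mean(axis=0)
    s = s / np.linalg.norm(s)               # remove scale
    r = r / np.linalg.norm(r)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    u, _, vt = np.linalg.svd(s.T @ r)
    return s @ (u @ vt)

def generalized_procrustes(shapes, iterations=5):
    """Iteratively align all shapes to their evolving mean shape."""
    ref = shapes[0] - shapes[0].mean(axis=0)
    ref /= np.linalg.norm(ref)
    for _ in range(iterations):
        aligned = [align_to_reference(s, ref) for s in shapes]
        ref = np.mean(aligned, axis=0)
        ref /= np.linalg.norm(ref)
    return aligned
```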

After all the shapes are normalized, a matrix consisting of all of their points is formed:

[S1,S2,..., Sm]

Where:

Sm = [x1m, x2m, ..., xNm, y1m, y2m, ..., yNm]T

Fig. 1(a-b): Set of labels (a) Before and (b) After normalization (Stegmann, 2002)

After extracting the principal components of this matrix, we obtain the following expression for the synthesized shape:

S = S0 + ΦSbS

where S0 is the shape averaged across all realizations of the training set (the basic shape), ΦS is the matrix of principal vectors and bS is the vector of shape parameters.

This expression means that the shape can be expressed as the sum of the basic shape S0 and a linear combination of the shapes contained in the matrix ΦS. By modifying the parameter vector bS, different deformations of the shape can be obtained to fit it to the actual image.
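A hedged sketch of how the basic shape S0 and the matrix of principal vectors ΦS could be obtained from the shape matrix via principal component analysis and then used to synthesize new shapes; the numerical details are assumptions.

```python
# Sketch: building the shape model S = S0 + PhiS * bS via PCA.
import numpy as np

def train_shape_model(shape_matrix, n_components):
    """shape_matrix: columns are normalized shape vectors [x1..xN, y1..yN]^T."""
    S0 = shape_matrix.mean(axis=1, keepdims=True)      # basic (mean) shape
    centered = shape_matrix - S0
    # Principal vectors = leading left singular vectors of the centered data.
    U, _, _ = np.linalg.svd(centered, full_matrices=False)
    PhiS = U[:, :n_components]                          # matrix of principal vectors
    return S0, PhiS

def synthesize_shape(S0, PhiS, bS):
    """Deform the basic shape with the parameter vector bS."""
    return S0 + PhiS @ bS.reshape(-1, 1)
```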

The learning process consists of the following steps: extraction from the training images of the textures that best fit the basic shape; mapping, using piecewise interpolation, of the regions obtained by triangulation of the training images into the corresponding regions of the formed texture; and forming from the textures a matrix in which each column contains the pixel values of the corresponding texture (similar to the matrix S). The textures used for training were single-channel (grayscale) and multichannel (RGB color space) (Cootes et al., 1998; Cootes and Taylor, 2001).

The expression for the synthesized texture is obtained:

t = t0 + Φtbt

where t0 is the basic texture obtained by averaging over all textures of the training set, Φt is the matrix of eigen-textures and bt is the vector of appearance parameters.

The active appearance model was adapted: for light eyes (blue, green), the red component of the iris and pupil pixels has a considerably lower intensity than the white of the eye or the skin.
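A minimal sketch of how this observation could serve as a cheap candidate mask for the iris and pupil; the threshold value and RGB channel order are assumptions.

```python
# Sketch: candidate iris/pupil mask for light eyes based on the red channel.
import numpy as np

def iris_candidate_mask(rgb_eye_patch, red_threshold=90):
    """rgb_eye_patch: (H, W, 3) uint8 image of the eye region, RGB order assumed."""
    red = rgb_eye_patch[:, :, 0].astype(np.float32)
    # Pixels whose red component is noticeably darker than sclera/skin.
    return red < red_threshold
```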

After the eye center coordinates are detected, the precise coordinates of the pupil center and the exact eye contour (including the contour of the iris and the contours of the eyelids) are searched for. The search for the pupil center is based on a three-dimensional model of the head and is performed by different methods depending on the lighting conditions of the image. To detect the coordinates of the central flare, a multistage verification of the detected pixel corresponding to the flare in the pupil is used (Hansen and Majaranta, 2012).

Precise eye-contour detection for managing software and hardware systems was carried out as follows: deformable contour models require an initialization of the model close to the final configuration in order to obtain satisfactory results. Since object detection with deformable models reduces the recognition problem to a multidimensional optimization problem, it is necessary to determine a local minimum. Another way to achieve a stable detection result is to use a complex system of rules and restrictions on the possible values of each contour parameter depending on the other parameters (Boskovitz and Guterman, 2002).

Not the entire contour is detected but only several main points of the eye contour: the eye corners and the boundaries of the pupil. This approach yields a less accurate resulting contour but a much more stable and robust detection algorithm.

The algorithm combines the benefits of both approaches: precise contour extraction, stability and reproducibility of the results (Fig. 2).

Fig. 2(a-d): (a) Sample image of an eye, (b) Maps of gradient modulus (darker pixels correspond to larger gradient values), (c) Graph of the brightness scalar field and (d) The same graph after 1D low-pass filtering

To avoid wrongly detected points, the Haar wavelet is applied to separate the main points of the left and right halves of the eye contour (relative to the center of the pupil). Points lying too far from the main axes are removed from the set.

Then the set is complemented with the points of the upper border of the pupil and a curve approximating the points of the upper eyelid is calculated. The calculation of the curve parameters reduces to solving an overdetermined system of linear equations, which is solved by the Householder transformation. Based on these studies, the markup rules were built.
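A hedged sketch of this overdetermined least-squares step: here a quadratic curve is fitted to the eyelid points through a Householder-based QR factorization. The quadratic form of the curve is an assumption, since the paper does not specify the curve model.

```python
# Sketch: least-squares fit of an eyelid curve y = a*x^2 + b*x + c via QR.
import numpy as np

def fit_eyelid_curve(points):
    """points: array of (x, y) eyelid/pupil-border points, shape (K, 2)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x**2, x, np.ones_like(x)])   # design matrix of the system
    Q, R = np.linalg.qr(A)                            # Householder-based QR factorization
    coeffs = np.linalg.solve(R, Q.T @ y)              # least-squares solution (a, b, c)
    return coeffs
```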

RECOGNITION OF BIOMETRIC POINTS

We selected a training set for the biometric points recognition system based on the XM2VTS database containing face images (Fig. 3) (Vizilter et al., 2005; Matthews and Baker, 2004).

The eye pupil tracking algorithm is shown in Fig. 4. In order to recognize control commands, the decision-making subsystem must be provided with up-to-date information on the position of the eyeball.

Preparing information for the decision-making system requires the following stages: recognition of the eyeball and determination of its position, noise filtering based on the model of eye movement and extraction of the command mark supplied by the eye.

A system for gaze tracking based on this data analysis (a smooth-tracking subsystem and a subsystem of saccades) is developed (Fig. 4).

The result of the image analysis is six numbers: the pupil center coordinates (in pixels on the original video frame), the coordinates of the corneal flare (also in pixels on the original frame of the video sequence) and the width and height of the ellipse corresponding to the pupil (Robertson et al., 1993).
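For clarity, the six numbers listed above can be gathered into a simple container; this structure is purely illustrative and is not part of the original software.

```python
# Container for the six outputs of the image-analysis step listed above.
from dataclasses import dataclass

@dataclass
class EyeMeasurement:
    pupil_x: float         # pupil centre, pixels in the original frame
    pupil_y: float
    flare_x: float         # corneal flare centre, pixels in the original frame
    flare_y: float
    ellipse_width: float   # axes of the ellipse corresponding to the pupil, pixels
    ellipse_height: float
```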

Fig. 3: Variant of point placement in the eye area in the XM2VTS database

Fig. 4: Eye pupil tracking algorithm

The data are pre-processed in order to extract a sequence of fixations and saccades for managing program-technical systems. A modified algorithm based on the following methods is developed. The first method is Velocity-Threshold Identification (I-VT), used in systems with a relatively high registration frequency; the second is Dispersion-Threshold Identification (I-DT), used in systems with a relatively low registration rate (Salvucci and Goldberg, 2000).
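A minimal sketch of the velocity-threshold variant (I-VT) described by Salvucci and Goldberg (2000): samples whose point-to-point velocity is below a threshold are labelled as fixation samples, the rest as saccade samples. The numeric threshold here is illustrative.

```python
# Sketch of I-VT classification of gaze samples into fixations and saccades.
import numpy as np

def classify_ivt(gaze_xy, timestamps, velocity_threshold=100.0):
    """gaze_xy: (T, 2) gaze positions; timestamps: (T,) seconds.
    Returns one label ('fixation' or 'saccade') per sample."""
    dxy = np.diff(gaze_xy, axis=0)
    dt = np.diff(timestamps)
    speed = np.linalg.norm(dxy, axis=1) / dt
    labels = ['fixation']  # first sample has no velocity; assume fixation
    labels += ['fixation' if v < velocity_threshold else 'saccade' for v in speed]
    return labels
```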

The analysis of specific indicators of oculomotor activity is carried out at the final stage of data processing. The following characteristic indicators of oculomotor activity are used: average fixation duration, duration of the first fixation, positions of fixations, amplitude and latency of saccades (Salvucci and Goldberg, 2000), duration and frequency of blinking and a number of indicators related to the allocation of critical areas of the image.

To identify the pupil, the center of mass is calculated over the pixels using the following equations:


Fig. 5: Boundary regions used for calibration and during command recognition

where x, y are the coordinates of the center of mass of the dark pixels and I(x, y) is the intensity at point (x, y).
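Since the original equations are not reproduced in this version of the text, the following is a hedged sketch of a standard intensity-weighted centroid over dark pixels, consistent with the description above; the threshold and weighting scheme are assumptions.

```python
# Sketch: centre of mass of dark (candidate pupil) pixels in a grayscale eye image.
import numpy as np

def pupil_center_of_mass(gray, dark_threshold=60):
    """gray: 2-D grayscale image of the eye region."""
    ys, xs = np.nonzero(gray < dark_threshold)       # candidate pupil pixels
    if len(xs) == 0:
        return None
    weights = dark_threshold - gray[ys, xs]          # darker pixels weigh more
    x_c = np.sum(xs * weights) / np.sum(weights)
    y_c = np.sum(ys * weights) / np.sum(weights)
    return x_c, y_c
```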

The position of the flare marker is determined in the same way. The developed model made it possible to determine the relative position of the marker and the center of the pupil and to draw a conclusion about the rotation of the eye.

When analyzing eye movements, the basic parameters were selected whose evaluation is necessary and forms the basis for control systems. Fuzzy inference systems are used to determine the commands. The application of this method is dictated by the fuzzy nature of determining the characteristics of the eye position (Schneiderman, 2004; Vezhnevets, 2002).

To determine the parameters whose analysis forms the control command from the human, the following problem was formulated and solved: a special interface exists and we need to reliably control the cursor and enter commands using a sequence of eye movements. Cursor positioning should be controlled by smooth eye movements.

To specify logical operations on the previously defined variables, we performed scaling and defined the set of numerical values that the variables of the eye model can take.

During configuration, the maximum gaze angles toward special marks displayed on the monitor are calculated.

The areas highlighted in Fig. 5 are used to recognize commands. When the marker is in a certain area, it means that the gaze is directed at the points closest to the corresponding boundary points of the monitor.

This set is a vector of areas:

L = {li | i = 1, ..., NL}

where NL is the number of areas.

NL = 9 is required for effective management of the software and hardware system.
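A sketch of how a gaze marker position could be mapped to one of the NL = 9 command areas. The regular 3 x 3 partition of the screen is an assumption, since the paper only fixes the number of areas.

```python
# Sketch: map a gaze point to one of NL = 9 command areas (assumed 3 x 3 grid).
def gaze_to_area(x, y, screen_w, screen_h):
    """Return the area index l_i in 0..8 for a gaze point (x, y) in pixels."""
    col = min(int(3 * x / screen_w), 2)
    row = min(int(3 * y / screen_h), 2)
    return 3 * row + col
```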

ALGORITHM FOR NETWORK TRAINING

To construct a production model based on fuzzy hyper-resolution, linguistic variables are defined.

Modifiers were used to determine the meanings of the linguistic variables. For example, for the linguistic variable "Direction" the modifiers are "Left", "Left up", "Up", "Right up", "Right", "Right down", "Down" and "Left down". They form the set of membership levels for a specific value of the linguistic variable. The deviation from the center of the pupil is characterized by the linguistic variable "Deviation", whose membership function depends on Dp, where Dp is a relative characteristic of length determined as a percentage of the greatest axis of the reduced pupil area.
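As an illustration, membership functions for the "Deviation" variable as a function of Dp could look as follows; the triangular shapes and breakpoints are assumptions, since the paper does not specify them.

```python
# Sketch: illustrative membership functions for the linguistic variable "Deviation".
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def deviation_membership(dp):
    """dp: deviation as a percentage of the greatest pupil axis."""
    return {
        "small":  triangular(dp, -1.0, 0.0, 30.0),
        "medium": triangular(dp, 20.0, 50.0, 80.0),
        "large":  triangular(dp, 70.0, 100.0, 101.0),
    }
```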

The production model is constructed on the basis of rules of the form:

I1: if x1 is A11 and ... and xn is A1n then y is B1
...
Im: if x1 is Am1 and ... and xn is Amn then y is Bm

The mentioned rules can be represented as implications of the form P⊃Q. Inference in the production model is carried out by the method of fuzzy hyper-resolution.

The problem area is described by a system of statements:

ib: P∧Q∧R⊃B, the basic premise (rule)
ia1: P∧T, minor premise (fact #1)
ia2: minor premise (fact #2)
B?, the goal (result)

The system (ib-ia1) can be represented as a set S of clauses, with the target clause B added to S with negation.

Let us set some interpretation I on the set S and some order T of the literals included in the clauses of S. Since S is contradictory, there are clauses one of which is false in the accepted interpretation I. It is proved that it is possible to build a hyper-resolutive inference of the empty clause, which is false in all interpretations.

The structure of a fuzzy neural network classifier for controlling program-technical systems (Fig. 6) is developed.

Fig. 6: Model of fuzzy classifier

Fig. 7: Block diagram of the algorithm of neural network training

In the process of fuzzy modeling, neural networks are used for structural and parametric identification of the fuzzy model. The application of a neural or adaptive network is carried out within two basic approaches. The first approach leads to a cooperative Fuzzy Neural Network (FNN) and is based on the interaction of a neural network and a fuzzy system, present in the model as separate components. The second approach improves the fuzzy system and leads to a hybrid FNN. The direct correspondence between elements of the neural network structure and components of the fuzzy model extends the interpretability property, inherent to fuzzy systems, to the fuzzy neural network model and makes it possible to present the training results in the form of flexible logic constructions, namely linguistic rules (Vezhnevets, 2002, 2006; Yang et al., 2002; Brilyuk and Starovoytov, 2002).

At the output of the neural network, the values of the linguistic variable are obtained, which allows us to obtain the command code for the interface.

A self-organizing fuzzy network with a simple two-layer structure was used.

An algorithm for neural network training was developed; its block diagram is shown in Fig. 7.

Fig. 8: Functional diagram of program system

Fig. 9: Architecture of program system

The algorithm that performs the fuzzy processing has the following feature: the fuzzy filter calculates the fuzzy increment so that it is less sensitive to local changes of the label.

Fuzzification of the input image is done using the position of the label. Thus, the pupil position can be regarded as a fuzzy partition into certain sets or as classes of qualitative concepts.

Based on network testing, we chose a fuzzy neural network implementing the Sugeno algorithm, because the fuzzy neural network with the Sugeno algorithm has a smaller learning error and a smaller prediction error (Brilyuk and Starovoytov, 2002; Vezhnevets, 2006).
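A minimal sketch of zero-order Sugeno (Takagi-Sugeno-Kang) inference, the kind of computation such a network performs at its output; the rule set and membership functions passed in here are illustrative, not those of the trained network.

```python
# Sketch: zero-order Sugeno inference as a weighted average of rule consequents.
def sugeno_infer(rules, inputs):
    """rules: list of (membership_fns, consequent), where membership_fns maps
    an input name to a membership function and consequent is a crisp value.
    inputs: dict of input name -> crisp value."""
    num, den = 0.0, 0.0
    for membership_fns, consequent in rules:
        # Firing strength: product of memberships over all antecedents.
        w = 1.0
        for name, mf in membership_fns.items():
            w *= mf(inputs[name])
        num += w * consequent
        den += w
    return num / den if den > 0 else 0.0
```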

The functional diagram of the software system is presented in Fig. 8 and the developed software system architecture is shown in Fig. 9.

CONCLUSION

The developed models and algorithms for intelligent decision-making support in problems of controlling program-technical systems have passed approbation (Vladimirovich and Borisovich, 2013).

The developed package of program applications is intended for the following tasks:

Positioning and tracking of the eyes in the video image
Definition of a list of key concepts for each situation
Construction of a base of fuzzy production rules for determining the control commands
Drawing up a ranked list of all created rules in the form of a sequence sorted by relevance for each state
Modification of rules depending on the previous values

The knowledge base in the system is based on a database consisting of several related tables and an inference system based on resolutions.

For the software implementation of the project, the Microsoft .NET Framework 4.0 with the C++ programming language was chosen. The development environment is Microsoft Visual Studio 2010. Additional libraries: Intel Integrated Performance Primitives (IPP), Microsoft Visual C++ 2005 Service Pack 1 Redistributable Package ATL Security Update to Version 8.0 and Qt 4.x.

Studies of the developed algorithms showed high adequacy and reasonableness of decision making under conditions of uncertainty, as well as minimization of time, financial and energy costs.

As a result of the analysis of computer vision methods and means, the distinctive features of the methods used in developing the smart contactless interface have been identified: recognition of the situational environment through the integration of information and intellectual processing; use of stereo vision and smart methods for head position detection; and an intellectual decision-making core based on a fuzzy neural network.

An algorithm for eye identification and gaze direction determination for contactless control of a continuous technological process was developed, which consists of two parts: training of the active model of eye area appearance and deployment of the active model of eye area appearance with gaze direction evaluation.

A method of gaze tracking was developed on the basis of the analysis of eyeball position data processed by the smooth-tracking subsystem and the subsystem of saccades. This allowed us to develop an algorithm and a user interface for applying the computer vision system (stereo vision) and the smart processing methods of the above-mentioned interface to track eye position under different head positions (without fixation), which allows the operator in the dispatching office of flaw detection to control the production process using eye or head movements, without the use of hands.

REFERENCES

  • Morozova, T.Y., D.A. Akimov and M.A. Chistyakova, 2013. The question of creating contactless controls large sets of data in ergatic systems. Aerospace Instrument Making, 5: 46-56.


  • Afanasev, V.O. and A.G. Brovkin, 1999. Research and Development System of Interactive Monitoring Induced Virtual Environment (Virtual Presence System). In: Cosmonautics and Rocket Engineering, Afanasev, V.O. and A.G. Brovkin (Eds.). 16th Edn., TSNIMASH, Russia, pp: 48-67


  • Morozova, T.Y., D.A. Akimov, I.V. Terekhin and V.V. Nikonov, 2013. [Situational analysis in industrial systems, human-machine interaction]. Proceedings of the 9th International Scientific-Practical Conference, January 27-February 5, 2013, Prague, Czech Republic, pp: 35-38 (In Russian).


  • Sumkin, K.S., T.Y. Morozova and D.A. Akimov, 2012. [Automation of stereozvreniya on methods of allocation control actions and fuzzy logic]. Proceedings of the Scientific and Practical Internet Conference, June 19-30, 2012, Odessa, Ukraine, pp: 60-66 (In Russian).


  • Amit, Y. and D. Geman, 1999. A computational model for visual selection. Neural Comput., 11: 1691-1715.


  • Brunelli, R. and T. Poggiot, 1997. Template matching: Matched spatial filters and beyond. Pattern Recogn., 30: 751-768.


  • Stegmann, M.B., 2002. Analysis and segmentation of face images using point annotations and linear subspace techniques. Technical Report IMM-REP-2002-22, Informatics and Mathematical Modelling, Technical University of Denmark, Denmark.


  • Cootes, T.F., G.J. Edwards and C.J. Taylor, 1998. Active appearance models. Proceedings of the 5th European Conference on Computer Vision, June 2-6, 1998, Freiburg, Germany, pp: 484-498.


  • Cootes, T.F. and C.J. Taylor, 2001. Constrained active appearance models. Proceedings of the 8th IEEE International Conference on Computer Vision, Volume 1, July 7-14, 2001, Vancouver, British Columbia, Canada, pp: 748-754.


  • Hansen, D.W. and P. Majaranta, 2012. Basics of Camera-Based Gaze Tracking. In: Gaze Interaction and Applications of Eye Tracking: Advances in Assistive Technologies, Majaranta, P., H. Aoki, M. Donegan, D.W. Hansen, J.P. Hansen, A. Hyrskykari and K.J. Raiha (Eds.). IGI Global, USA., pp: 21-26


  • Boskovitz, V. and H. Guterman, 2002. An adaptive neuro-fuzzy system for automatic image segmentation and edge detection. IEEE Trans. Fuzzy Syst., 10: 247-262.


  • Vizilter, Y.V., S.L. Karateev and I.V. Beketova, 2005. Biometric Identification Methods on Human Images of his Face. In: Vestnik Komp'iuternykh i Informatsionnykh Tekhnologii [Herald of Computer and Information Technologies], Vizilter, Y.V., S.L. Karateev and I.V. Beketova (Eds.). OAO Izdatel'stvo Mashinostroenie, Moskva, Russia


  • Matthews, I. and S. Baker, 2004. Active appearance models revisited. Int. J. Comput. Vision, 60: 135-164.


  • Robertson, G.G., S.K. Card and J.D. Mackinlay, 1993. Information visualization using 3D interactive animation. Commun. ACM., 36: 57-71.


  • Salvucci, D.D. and J.H. Goldberg, 2000. Identifying fixations and saccades in eye-tracking protocols. Proceedings of the Symposium on Eye Tracking Research and Applications, November 6-8, 2000, Palm Beach Gardens, FL., USA., pp: 71-78.


  • Schneiderman, H., 2004. Learning a restricted Bayesian network for object detection. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Volume 2, June 27-July 2, 2004, Washington, DC., USA., pp: 639-646.


  • Vezhnevets, V., 2002. Face and facial feature tracking for natural human-computer interface. Proceedings of the International Conference of Graphicon, September 16-21, 2002, Nizhny Novgorod, Russia, pp: 86-90.


  • Yang, M.H., D.J. Kriegman and N. Ahuja, 2002. Detecting faces in images: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 24: 34-58.


  • Brilyuk, D.V. and V.V. Starovoytov, 2002. [Recognition for human facial image and neural network methods]. http://en.bookfi.org/book/579188 (In Russian).


  • Vezhnevets, A.P., 2006. [Classification methods of learning by precedents in the problem of object recognition in images]. Proceedings of the 16th International Conference on Computer Graphics and Applications, July 1-5, 2006, Novosibirsk, Akademgorodok, Russia, pp: 1-8 (In Russian).


  • Ostroukh, A.V. and A.B. Nikolaev, 2013. Development of virtual laboratory experiments in iLabs. Int. J. Online Eng., 9: 41-44.


  • Vladimirovich, O.A., B.K. Aleksandrovich, N.A. Borisovich and S.V. Yurievich, 2013. Interactive game modeling concept for personnel training at the industrial enterprises. World Applied Sci. J., 28: 45-55.


  • Vladimirovich, O.A., B.K. Aleksandrovich and S.N. Evgenievna, 2013. Computer game modeling organizational structures of enterprises and industrial associations. Res. Inventy: Int. J. Eng. Sci., 3: 20-29.


  • Vladimirovich, O.A. and S.N. Evgenievna, 2011. E-Learning Resources in Vocational Education. LAP LAMBERT Academic Publishing, Saarbrucken, Germany, Pages: 184


  • Vladimirovich, O.A., P.A. Petrikov and S.N. Evgenievna, 2012. Corporate Training: Process Automation Management Staff Training Industry. LAP LAMBERT Academic Publishing, Saarbrucken, Germany, Pages: 147


  • Vladimirovich, O.A., M.I. Ismoilov and A.V. Merkulov, 2012. Corporate Training: Training of Personnel of Enterprises Based on a Virtual Model of the Professional Community and Grid Technologies. LAP LAMBERT Academic Publishing, Saarbrucken, Germany, Pages: 129


  • Vladimirovich, O.A. and L.V. Vladimorov, 2012. Distance Education Technologies: Research and Development of Software Products for Video Lectures and Webinars. LAP LAMBERT Academic Publishing, Saarbrucken, Germany, Pages: 97


  • Ismoilov, M.I., A.B. Nikolaev and O.A. Vladimirovich, 2013. Training and Retraining of Personnel and Industrial Enterprises of Transport Complexes Using Mobile Technology. Publishing House Science and Innovation Center, Saint-Louis, MO., USA., Pages: 166


  • Vladimirovich, O.A., B.K. Aleksandrovich, N.A. Borisovich and S.V. Yurievich, 2014. Formal methods for the synthesis of the organizational structure of the management through the personnel recruitment at the industrial enterprises. J. Applied Sci., 14: 474-481.
