
Information Technology Journal

Year: 2014 | Volume: 13 | Issue: 1 | Page No.: 12-21
DOI: 10.3923/itj.2014.12.21
Apps at Hand: An Intelligent Mobile User Interface for Fire Recognition Algorithm
Yan Qiang, Yue Li, Wei Wei and Juanjuan Zhao

Abstract: Traditional forest fire recognition algorithms cannot achieve low time consumption and high accuracy at the same time. Considering the features of forest fire images, this study discusses the significance of forest fire prevention in combination with sea computing and proposes a forest fire recognition algorithm based on the sea computing model. The algorithm was designed with Qt/Embedded according to the characteristics of the sensor node; it performs color judgment through the additive color model and region judgment through the Sobel operator and finally feeds the results back to the sink node automatically. The experiments showed that this algorithm not only completed instant detection and recognition of forest fire but also achieved a higher accuracy rate and lower time consumption than the traditional methods.


How to cite this article
Yan Qiang, Yue Li, Wei Wei and Juanjuan Zhao, 2014. Apps at Hand: An Intelligent Mobile User Interface for Fire Recognition Algorithm. Information Technology Journal, 13: 12-21.

Keywords: Personalized intelligent screen, mobile app, Qt/Embedded, additive color model, Sobel operator and sea computing model

INTRODUCTION

Over the past decade, the severity of forest fires has been increasing. Coupled with the objective conditions of global warming, experts predict that forest fires will become a serious threat to ecosystems in the future. Forest fires are sudden and destructive; fighting them is difficult and costly. Their prevention is also difficult, as forest areas are large and contain much combustible material under the canopy, making a fire hard to fight once it breaks out (Viret and Queyla, 2004). The prevention of forest fires has therefore become extremely important. Most current methods of forest fire prevention rely on the Internet of Things (IOT) (Liu et al., 2012).

Currently, IOT research is relatively weak in computation and control at the perceptual (object) end: many operations must be transmitted to the background through the network, which consumes excessive energy and is inefficient. Thus, the concept of "sea computing" was introduced into the development of IOT. Sea computing, a new technical concept, was first proposed by Molina at the 2009 Technology Innovation Conference. In the sea computing model (Sun et al., 2010), computing, communication and intelligent algorithms are integrated into the objects of the physical world, so that one object can interconnect with another and make judgments in scenes that cannot be predicted in advance. That is, objects gain the functions of self-organization, self-calculation and self-feedback. The essence of sea computing is smart exchange between different objects: it achieves interaction between objects and stresses the intelligent connection of the physical world.

To address the problems of traditional forest fire prevention, we studied the sea computing model and existing prevention methods and propose a forest fire recognition algorithm based on sea computing in this study. The sensor node carries the recognition algorithm, monitors the forest area and determines whether a fire has occurred by analyzing the images collected by its camera. This greatly reduces the elapsed time at the cost of only a small reduction in accuracy. The goal of our study is for the sensor node to spontaneously analyze images and compute, accurately determine whether a fire has occurred and then automatically feed information back to the sink node according to the different conditions.

Related work: In this section, we describe work related to this study: traditional IOT-based methods of forest fire prevention, the requirements that sea computing places on the algorithm and the design basis of our algorithm.

Traditional IOT-based fire prevention: The methods of forest fire prevention based on IOT are realized in two ways (Cheong et al., 2011; Pan et al., 2010; Truong and Kim, 2012). First, the node is equipped with sensors such as temperature and humidity sensors; the temperature, humidity and other data are transmitted to the sink node via the wireless network and the sink node determines whether a fire has occurred according to the returned data. Second, images captured by the camera at the sensor node are transmitted to the sink node for image processing and the sink node then determines whether a fire has occurred. Both methods follow the traditional IOT model, in which computing and control are implemented at the sink node through the network. For forest fire prevention, however, the first method can fail even with high-precision sensors because of the large space and several kinds of noise, while the second method places high demands on the wireless transmission rate and always wastes too much time on image transmission. Therefore, these methods generally cannot achieve low elapsed time and high accuracy simultaneously.

Implementing a forest fire recognition algorithm under the sea computing model imposes unique requirements. First, the sensor network must be kept supplied with power and multiple units must be able to work together collaboratively; second, the sensor nodes must compute normally while saving as much energy as possible. Therefore, our algorithm must be both accurate and compact.

Overview of FFRASC: The Forest Fire Recognition Algorithm based on Sea Computing (FFRASC) proposed in this study works at the sensing layer of IOT and belongs to real-time perceptual data processing: it is an intelligent image recognition and processing algorithm for perceiving the physical world. The key of this algorithm is that the computation normally done at the application layer of IOT is implemented at the sensing layer, i.e., the sea computing mode; it runs on the microprocessor of the sensor node. When the sensor captures an image with its external camera, FFRASC automatically obtains and analyzes the image on the basis of its features to determine whether a fire has occurred. Once it determines that there is a fire, FFRASC immediately transmits a "fire" signal to the sink node through the serial port and the wireless network, notifying the relevant personnel to take measures. Because the algorithm is based on a wireless sensor network, it must be able to run on the sensor and must be designed according to the processing capacity of the sensor's microprocessor. We selected the common embedded Linux as our platform.

Design considerations: The reason traditional methods cannot implement image recognition at the sensor end is that most traditional forest fire recognition algorithms are complex, so the microprocessor in the sensor node copes with them only with difficulty: their time complexity is usually more than O(n^4) (Zhang and Zhu, 2011; Truong and Kim, 2012; Li et al., 2012a, b; Xu et al., 2008). Such high time and power consumption is not suitable for forest fire prevention. The FFRASC proposed in this study reduces the algorithm volume to keep the time complexity under O(n^2) according to the characteristics of the microprocessor, while also ensuring high accuracy. The main idea of FFRASC has two parts: (1) Obtain the features of the image captured by the camera and (2) Process the image according to these features. Based on these two parts, we selected Qt/Embedded as our tool. Qt/Embedded is the embedded version of the cross-platform application framework Qt. It interacts with Linux I/O and the framebuffer directly through the Qt API and therefore has high operating efficiency; it uses object-oriented programming and has a good underlying architecture and programming model. It is therefore very suitable for designing an algorithm to run on embedded Linux.

SYSTEM DESIGN

The detection of forest fires differs from that of ordinary fires. If a machine is to determine a forest fire from images, there are many confounding factors, such as leaves, sunshine and so on, so the interference of different factors must be considered when establishing the algorithm. In the early stages of fire growth, a fire generally has obvious visual features in an image, such as kindle light, flame color, blinking and shape change (Truong and Kim, 2012; Li et al., 2012a, b; Xu et al., 2008; Feng et al., 2011). The FFRASC we propose uses the following physical characteristics of fire.

FIRE IMAGE FEATURES

Color change: Since there is a correlation between fire color and fire temperature, in image processing we can determine whether there is a fire according to the color of the pixels in the image. As shown in Fig. 1, we deduce the fire according to the color change between different regions.

Area change: After a fire occurs, it has a growing trend: as can be observed in images, the fire area is continuous and keeps extending. However, due to the particularity of forest scenes, there may be other objects similar in color to fire (a red maple leaf, for instance) in the area covered by the camera, which are likely to cause interference. Therefore, exploiting the rapid spread that characterizes forest fires, we decide whether a fire has occurred or there are merely similar objects from the change of the suspected fire area. As shown in Fig. 2 and 3, a forest fire spreads rapidly, while the area of fire-like objects remains stable.

Algorithm complexity: Traditional fire recognition algorithms are often implemented on the basis of multiple criteria, such as color features, area changes, edge changes and physical changes (Truong and Kim, 2012; Li et al., 2012a, b; Xu et al., 2008; Feng et al., 2011), so their complexity exceeds O(n^4).

Fig. 1: Differentiation of color in forest fire

The Forest Fire Recognition Algorithm based on Sea Computing (FFRASC) proposed in this study uses only two modules, fire color judgment and fire area judgment, to determine the fire; this keeps the accuracy while controlling the complexity of the algorithm within O(n^2).

FIRE RECOGNITION ALGORITHM

Fire color judgment: In practical applications, the most common color model is the additive color model: various colors are obtained through changes and superposition of red (R), green (G) and blue (B). In this study, we used the 24-bit additive color model.

The 24-bit additive color model refers to a display mode in which a color is represented by 24 bits. The RGB value is encoded with 24 bits per pixel: three 8-bit unsigned integers (0-255) indicate the intensities of red, green and blue.
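As an illustration, the 24-bit packing can be sketched in plain C++ (kept Qt-free for portability; the helper names mirror Qt's qRed/qGreen/qBlue but are our own):

```cpp
#include <cstdint>

// Pack three 8-bit channel intensities into one 24-bit value, as in the
// 24-bit additive color model: 0xRRGGBB.
uint32_t packRgb(uint8_t r, uint8_t g, uint8_t b) {
    return (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
}

// Recover the individual channels (mirroring Qt's qRed/qGreen/qBlue).
uint8_t redOf(uint32_t rgb)   { return (rgb >> 16) & 0xFF; }
uint8_t greenOf(uint32_t rgb) { return (rgb >> 8) & 0xFF; }
uint8_t blueOf(uint32_t rgb)  { return rgb & 0xFF; }
```

For example, packRgb(255, 128, 0) yields 0xFF8000, an orange, fire-like color.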

Extracting the RGB values from the image captured by the camera is achieved with the QRgb type:

QRgb rgb = image.pixel(i, j);

where image is a QImage object holding the captured frame. The three color components of the pixel (i, j) are then extracted:

r = qRed(rgb);

g = qGreen(rgb);

b = qBlue(rgb);

Fig. 2: Rapid expansion of forest fire area

Fig. 3: Stability of the area of objects similar to fire

The R, G and B values of each pixel are stored in three variables, r, g and b and the RGB threshold V(R, G, B) is set according to empirical data. If the R, G and B values of a pixel all reach the threshold range at the same time, we define this point as a fire point.
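A minimal sketch of this fire-color judgment in plain C++ follows; the threshold numbers below are illustrative placeholders only, not the empirical values the paper derives from training data:

```cpp
#include <cstdint>
#include <vector>

struct Rgb { uint8_t r, g, b; };

// Hypothetical threshold range V(R, G, B); the paper determines the real
// values empirically from the decision tree, so these are placeholders.
bool isFirePoint(const Rgb& p) {
    return p.r >= 200 && p.g >= 100 && p.g <= 200 && p.b <= 100;
}

// Count fire points and return their percentage of the whole image.
double firePercent(const std::vector<Rgb>& image) {
    if (image.empty()) return 0.0;
    int count = 0;
    for (const Rgb& p : image)
        if (isFirePoint(p)) ++count;
    return 100.0 * count / image.size();
}
```

The percentage returned here is the quantity the fire-trend determination later compares against the 15 and 20% marks.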

Fire area judgment: In traditional fire recognition algorithms, the fire area judgment is generally achieved by extracting contours with methods such as the covariance matrix (Habiboglu et al., 2012). This study targets an embedded Arm9 processor running an embedded Linux operating system, so we simplify the traditional area algorithms: according to the characteristics of fire images, we use the Sobel operator (Tong et al., 2011; Wang, 2011).

The Sobel operator convolves two 3x3 kernels with the original image to calculate approximations of the derivatives, one for horizontal changes and one for vertical. If we define A as the source image and Gx and Gy as two images that at each point contain the horizontal and vertical derivative approximations, the computations are as follows:

Gx = [+1 0 -1; +2 0 -2; +1 0 -1] * A
(1)

Gy = [+1 +2 +1; 0 0 0; -1 -2 -1] * A
(2)

where * denotes the 2-dimensional convolution operation.

Weighted union of predictions: Since the Sobel kernels can be decomposed as the products of an averaging and a differentiation kernel, they compute the gradient with smoothing. For example, Gx can be written as:

Gx = [1; 2; 1] * ([+1 0 -1] * A)
(3)

The x-coordinate is defined here as increasing in the "right"-direction and the y-coordinate is defined as increasing in the "down"-direction. At each point in the image, the resulting gradient approximations can be combined to give the gradient magnitude, using:

|G| = sqrt(Gx^2+Gy^2)
(4)

In practice, the computationally cheaper approximation:

|G| = |Gx|+|Gy|
(5)

is often used. Using this information, we can also calculate the gradient's direction:

Θ = arctan(Gy/Gx)
(6)

where, for example, Θ is 0 for a vertical edge which is darker on the right side. The algorithm we propose operates on digital images, so the discrete method used is:

T(i, j) = |fx|+|fy|
(7)

In Eq. 7, fx and fy are:

fx = f(i-1, j-1)+2f(i-1, j)+f(i-1, j+1)-f(i+1, j-1)-2f(i+1, j)-f(i+1, j+1)
(8)

fy = f(i-1, j-1)+2f(i, j-1)+f(i+1, j-1)-f(i-1, j+1)-2f(i, j+1)-f(i+1, j+1)
(9)

Equations 8 and 9 traverse each pixel in the image and calculate the weighted gray-value sum of the neighbors of every pixel (i, j); the threshold τ is then set according to empirical data. If T(i, j) ≥ τ, the pixel (i, j) is an edge point. If the edge points form a closed area, there is a suspected fire region; in this case we set the fire region value area = 1.
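The edge test of Eq. 7-9 can be sketched as follows (plain C++; f is a grayscale image stored as a row-major matrix, i indexes rows, j indexes columns and the caller is assumed to keep (i, j) in the interior of the image):

```cpp
#include <cstdlib>
#include <vector>

// Discrete Sobel response T(i, j) = |fx| + |fy| of Eq. 7-9 for an
// interior pixel of a grayscale image f.
int sobelT(const std::vector<std::vector<int>>& f, int i, int j) {
    // Eq. 8: weighted difference between the rows above and below (i, j).
    int fx = f[i-1][j-1] + 2*f[i-1][j] + f[i-1][j+1]
           - f[i+1][j-1] - 2*f[i+1][j] - f[i+1][j+1];
    // Eq. 9: weighted difference between the columns left and right of (i, j).
    int fy = f[i-1][j-1] + 2*f[i][j-1] + f[i+1][j-1]
           - f[i-1][j+1] - 2*f[i][j+1] - f[i+1][j+1];
    return std::abs(fx) + std::abs(fy);
}
```

A pixel whose response reaches the empirically chosen threshold τ is then marked as an edge point; a uniform region yields a response of 0, while a step edge yields a large one.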

Determining the threshold based on empirical data: We selected 50 different fire images as training data, as shown in Table 1.

Table 1: Featured value of the training data

Fig. 4: Decision tree model based on machine learning

Table 1 lists the 50 different fire images used as training data; the (R, G, B) values and the weighted gray-value sum are the feature data. The label column shows the accuracy when determining whether a fire occurred according to these values. We then built a machine-learning decision tree on these data and used the training data in Table 1 to complete the intelligent recognition, obtaining the decision tree model shown in Fig. 4.

As can be seen from Fig. 4, the additive color model threshold and the weighted gray-value sum threshold can be obtained from the decision tree model. The additive color model threshold is:

(10)

and the weighted gray-value sum threshold is:

T(i, j)≥ τ, τ = 289
(11)

Determination of fire trend: During the fire color judgment, if the R, G and B values of a pixel all reach the threshold range at the same time, the point is defined as a fire point; the variable count records the number of fire points. The algorithm calculates the percentage of fire points in the whole image and sets the suspected-fire value at 15%. If the percentage is less than 15%, or area = 0, the algorithm determines that no fire has occurred, sends the data packet "00000000" to the sink node and executes the 'shut down' command to close the sensor.

If the percentage is greater than 15% and area = 1, the camera is started again to capture and analyze another image. If the percentage then shows no change compared with before, interference from a fire-like object is assumed. If the percentage changes and lies between 15 and 20%, the fire is considered stable and the data packet "00110011" is sent to the sink node, from which the sink node concludes that there is a stable fire; if the percentage is greater than 20%, the data packet "11111111" is sent to the sink node and the fire is judged to be out of control.
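The decision rules above can be condensed into a small dispatch function. This is a sketch: the name firePacket is ours and the re-capture/percentage-change check is assumed to have been done by the caller before a stable-fire packet is requested.

```cpp
#include <string>

// Map the fire-point percentage and the closed-region flag to the data
// packet sent to the sink node, following the decision rules above.
std::string firePacket(double percent, int area) {
    if (percent < 15.0 || area == 0)
        return "00000000";  // no fire: report and shut the sensor down
    if (percent <= 20.0)
        return "00110011";  // stable fire
    return "11111111";      // runaway fire
}
```

The three fixed packets keep the over-the-air payload to a single byte, which fits the energy constraints of the sea computing model.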

EVALUATION

The platform used in this experiment was a single sensor node and a single sink node (simulated by a PC), a simplified version of a wireless sensor network. The sensor node consisted of the following components: microprocessor, ZigBee wireless communication module, power supply and camera. The specific configuration is shown in Fig. 5.

Step A: When the sensor started, the camera automatically captured the surrounding images; the node communicated with the sink node through the ZigBee wireless network and the results were recorded.

Step B: After the sensor automatically shut down, a fire was ignited in a small area; we then waited for the sensor to start again, capture images of the fire and communicate with the sink node and recorded the results.

Step C: To obtain comparison results for the method in which the node is equipped with sensors such as temperature and humidity sensors, a node with temperature and humidity sensors but without FFRASC was placed in the same environment and steps A and B were repeated.

Step D: To obtain comparison results for the method in which images captured by the camera at the sensor node are transmitted to the sink node for processing, a node without FFRASC was placed in the same environment and steps A and B were repeated. When the camera automatically captured the surrounding images, the sensor node did no processing but only transmitted the images to the sink node through the wireless network, where the same FFRASC analyzed them to make the determination; the results were recorded.

In Fig. 6, 7 and 8, Node ID refers to the number of the node in the sensor network; there is only one node because this experiment simulated a simplified version of a wireless sensor network.

Fig. 5: Hardware configuration diagram of the sensor node

Fire refers to the result of the judgment; Temperature and Humidity refer to the data transmitted from the sensor node when the temperature and humidity sensors were used (there were no such data when FFRASC and the other method were used); Elapsed time is the time consumed in a single experiment. The judgments of FFRASC and of the method transmitting images to the sink node for processing were both accurate (stable fire), but the judgment of the node equipped with temperature and humidity sensors was inaccurate (no fire).

As can be seen from the above figures, when the sensor node carried FFRASC, the judgment of whether a fire occurred was very accurate and the time consumed was also low. For the node equipped with temperature and humidity sensors, because the scope of the fire was small, the ambient temperature and humidity changed little and the accuracy dropped significantly. For the method transmitting images to the sink node for processing, the accuracy was high but the time consumed was greater (Wei et al., 2012, 2013; Wei and Qi, 2011).

We then extended the experiment over a longer period and repeated it 50 times with different fire hazards. Based on the experimental data obtained, we compared the accuracy of the three methods, where the accuracy rate was calculated as:

Accuracy rate = (number of correct judgments/total number of experiments) × 100%
Fig. 6: Results of the method of FFRASC

Fig. 7: Results of the method of temperature and humidity sensors

Fig. 8: Results of the method of the images were processed in the sink node

As shown in Table 2, the accuracy rate of the FFRASC proposed in this study is 90%, very close to that of the method transmitting images to the sink node for processing.

Table 2: Accuracy comparison of three methods

For the traditional method in which the node is equipped with temperature and humidity sensors, the accuracy rate is unsatisfactory compared with the other methods because of the particularity of forest fires.

Fig. 9(a-c): Comparison of accuracy fluctuations in three methods, (a) FFRASC, (b) Sensor and (c) Sink

Fig. 10: Comparison of elapsed time of three methods

Furthermore, according to the differences among these three methods, we plotted the accuracy-fluctuation diagram and the time-consumption diagram shown in Fig. 9 and 10.

In Fig. 9, the blue line represents the Forest Fire Recognition Algorithm based on Sea Computing (FFRASC), red represents the traditional method of equipping the node with temperature and humidity sensors and green represents the method of transmitting the images to the sink node for processing. The vertical axis is accuracy: high means an accurate judgment, low means a false one. As can be seen from Fig. 9, the accuracy fluctuations of FFRASC and of image processing at the sink node are substantially identical; both keep misjudgments as rare as possible. The accuracy of the temperature and humidity sensor method fluctuates up and down, with a higher misjudgment rate. These misjudgments do not break out in a specific interval but occur with relatively high probability throughout the whole experiment, which would be a hidden danger in forest fire prevention (Wei et al., 2010a, b; Wei and Zhou, 2012).

As can be seen from Fig. 10, the time consumed by FFRASC was the lowest as the number of experiments grew. The method of equipping the node with temperature and humidity sensors had an advantage while the number of experiments was low but, after the number of experiments exceeded 20, its time consumption increased significantly compared with FFRASC. For the method of transmitting images to the sink node for processing, most of the time was consumed in transmitting the image, so its time consumption exceeded that of the other two. High time consumption also means increased battery consumption; these factors often delay the handling of forest fires and the results may be irreparable. In contrast, FFRASC ensured a high accuracy rate while keeping time consumption very low.

The above experiments showed that the FFRASC proposed in this study performs well on the sensor node and achieves a better balance of accuracy and time consumption than the other methods. It can be effectively applied in the field of forest fire prevention (Wei et al., 2012).

CONCLUSION

This study proposed a forest fire recognition algorithm based on sea computing. Its characteristic is the ability to determine a fire from images captured by the camera at the sensor node and to send data to the sink node automatically according to the analysis results. With its high accuracy rate and low time consumption, the method is worth using in the practice of forest fire prevention. At the same time, the algorithm can still be improved, for example, in how to maintain the algorithm volume so as to ensure no additional power consumption and how to further improve the efficiency of fire recognition; these will be the focus of future work.

ACKNOWLEDGMENTS

This study was supported by the Science and Technique Project of Shanxi Province under Grant (20120313032-3), the Natural Science Foundation of Shanxi Province (2012011015-1) and the National Natural Science Foundation (No. 61202163, 61240035, 61373100). This work was also supported by the Scientific Research Program Funded by Shaanxi Provincial Education Department (Program No. 2013JK1139), the China Postdoctoral Science Foundation (No. 2013M542370) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20136118120010).

REFERENCES

  • Viret, J. and J.L. Queyla, 2004. Forest fire prevention in the South/Mediterranean area-legal instruments and field considerations. Revue Forestiere Francaise, 56: 201-202.


  • Liu, S., S.C. Wu, W. Huang and H.Q. Liu, 2012. A real-time monitoring system of forest fire prevention based on wireless sensor networks. Chin. Agric. Sci. Bull., 7: 53-58.


  • Sun, N.H., Z.W. Xu and G.J. Li, 2010. Sea computing: New computing model in internet of things. China Comput. Fed. Commun., 7: 52-57.


  • Cheong, P., K.F. Chang, Y.H. Lai, S.K. Ho, I.K. Sou and K.W. Tam, 2011. A ZigBee-based wireless sensor network node for ultraviolet detection of flame. IEEE Trans. Ind. Elect., 58: 5271-5277.


  • Pan, J., N. Ma and Y. Huang, 2010. An appraisal method of forest fires prevention ability based on AHP. Chin. Perspect. Risk Anal. Crisis Response, 13: 1010-1014.


  • Truong, T.X. and J.M. Kim, 2012. Fire flame detection in video sequences using multi-stage pattern recognition techniques. Eng. Appl. Artif. Intell., 25: 1365-1372.


  • Zhang, H. and L. Zhu, 2011. Internet of things: Key technology, architecture and challenging problems. Proceedings of the IEEE International Conference on Computer Science and Automation Engineering, Volume 4, June 10-12, 2011, Shanghai, China, pp: 507-512.


  • Li, W., D. Wang and T. Chai, 2012. Flame image-based burning state recognition for sintering process of rotary kiln using heterogeneous features and fuzzy integral. IEEE Trans. Ind. Inform., 8: 780-790.


  • Li, H.L., Q. Liu and S. Wang, 2012. A novel fire recognition algorithm based on flame's multi-features fusion. Proceedings of the International Conference on Computer Communication and Informatics, January 10-12, 2012, Coimbatore, India, pp: 1-6.


  • Xu, S.L., M. Zhao and J.B. Xu, 2008. Research for early fire image recognition technology in outdoors. Comput. Technol. Dev., 6: 214-216.


  • Feng, S.C., D.D. Kong and S.E. Lu, 2011. Criterion of flame based on image recognition research. Tech. Autom. Appl., 30: 67-69, 77.


  • Habiboglu, Y.H., O. Gunay and A.E. Cetin, 2012. Covariance matrix-based fire and flame detection method in video. Mach. Vision Appl., 23: 1103-1113.


  • Tong, X., A. Ren, H. Zhang, H. Ruan and M. Luo, 2011. Edge detection based on genetic algorithm and sobel operator in image. Proceedings of the International Conference on Graphic and Image Processing, October 1-3, 2011, Cairo, Egypt.


  • Wang, K.F., 2011. Edge detection of inner crack defects based on improved sobel operator and clustering algorithm. Applied Mech. Mater., 55-57: 467-471.


  • Wei, W., A. Gao, B. Zhou and Y. Mei, 2010. Scheduling adjustment of mac protocols on cross layer for sensornets. Inform. Technol. J., 9: 1196-1201.


  • Wei, W., B. Zhou, A. Gao and Y. Mei, 2010. A new approximation to information fields in sensor nets. Inform. Technol. J., 9: 1415-1420.


  • Wei, W. and B. Zhou, 2012. A p-Laplace equation model for image denoising. Inform. Technol. J., 11: 632-636.


  • Wei, W., X.L. Yang, P.Y. Shen and B. Zhou, 2012. Holes detection in anisotropic sensornets: Topological methods. Int. J. Distrib. Sensor Networks.


  • Wei, W. and Y. Qi, 2011. Information potential fields navigation in wireless Ad-Hoc sensor networks. Sensors, 11: 4794-4807.


  • Wei, W., Q. Xu, L. Wang, X.H. Hei, P. Shen, W. Shi and L. Shan, 2013. GI/Geom/1 queue based on communication model for mesh networks. Int. J. Commun. Syst. (In Press).
