Artificial neural network identification is generally based on pattern similarity and on the absolute values of the input variables, yet these values can differ enormously while the relations between them remain the same. Generalizing (learning) these dependencies could be a way of modelling systems that is closer to human brain learning than pattern classification. The differential polynomial neural network is a new type of neural network, developed by the author, which constructs a differential equation describing a system of dependent variables. It creates derivative fractional terms that define the mutual change of selected combinations of input variables. Its response should be the same for all input patterns whose variables follow the trained dependence, regardless of their absolute values. The most important function of the brain seems to be the generalization of received information, i.e., the forming of ideas of common validity from received data. How does a biological neural cell process information, and how does its functionality essentially differ from that of an artificial neuron? A neural cell seems to form polynomial multi-parametric functions, which could be better modelled by a polynomial neuron describing the partial dependence of some combinations of variables by means of a special type of simple polynomial function. However, the neural cell applies a periodic activation function to create the derivative terms of which the differential equation describing the data relations consists.
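The scale-invariance described above (the same response for all input patterns whose variables follow the trained relation, regardless of their absolute values) can be sketched with a ratio-of-polynomials term. The form and weights below are purely illustrative assumptions, not the author's exact construction: when the numerator and denominator polynomials have the same total degree, any common scale factor of the inputs cancels, so the term reacts only to the relation between the variables.

```python
def fraction_term(x1, x2, w=(0.5, 1.0, 0.25)):
    """Illustrative fractional term: a ratio of two degree-2 polynomials
    in x1, x2 (hypothetical weights w). Because numerator and denominator
    share the same total degree, a common scaling of x1 and x2 cancels."""
    numerator = w[0] * x1 * x1 + w[1] * x1 * x2 + w[2] * x2 * x2
    denominator = x1 * x1 + x2 * x2  # same degree -> scale factor cancels
    return numerator / denominator

# Two patterns with very different absolute values but the same
# relation x1 : x2 = 2 : 1 produce the same response:
small = fraction_term(2.0, 1.0)
large = fraction_term(2000.0, 1000.0)
print(small, large)
```

In this sketch the term is homogeneous of degree zero, which is one simple way to obtain the value-independent behaviour the text attributes to the derivative fractional terms.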