Recognition of motion from sequences of images is of great interest in many domains of the natural sciences, such as meteorology and physical oceanography. Everyone knows the impression of motion that arises when a sequence of satellite images is shown in the weather forecast on TV. "Motion" is described by a velocity field that depends on space and time. We distinguish between smooth velocity fields, such as the wind fields responsible for the weather, and discontinuous velocity fields, which appear, e.g., when a single object moves in the image while the rest stays fixed.
In the present work a method to estimate a smooth displacement field, which can be understood as the integrated velocity field, is suggested and applied to satellite images of total ozone. It must be pointed out that the displacement field is estimated without any information other than the images themselves. We do not use any explanatory variables, such as estimates of the geopotential height.
Our model is a discretization of the continuity equation. In order to be able to work with this equation we have to make a few assumptions: absence of sources and sinks, pure advection, and incompressibility of the medium. Our model has the form of a transfer function model, and for a given displacement field the image at the next time point can be calculated from the current image. Hence the method can also be used to interpolate images in a way that is much more realistic than interpolating corresponding pixels in each image.
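To make these assumptions explicit, the standard derivation can be sketched as follows (the notation, with ρ for the grey-value intensity, v for the velocity field, and d for the displacement, is ours and not necessarily the paper's):

```latex
% continuity equation, no sources or sinks:
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\, v) = 0 .
% incompressibility, \nabla\cdot v = 0, reduces it to pure advection:
\frac{\partial \rho}{\partial t} + v\cdot\nabla\rho = 0 ,
% so rho is constant along characteristics, and the image at the next
% time point is the current image moved along the displacement field:
\rho(x,\, t+\Delta t) = \rho\bigl(x - d(x),\, t\bigr).
```

The last identity is what the transfer function model discretizes on the pixel grid.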
Because the velocity field is assumed to be divergence-free, the displacement field is area preserving. This leads to an under-determined differential equation, for which a parametric class of solutions is given in this work. Hence, once the global parameter is estimated, the displacement field can be evaluated anywhere, even in regions of the image that show very little structure. Methods that estimate the velocity field more locally, and only at certain grid points, suffer from very high uncertainty in such regions.
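A standard way to construct divergence-free, hence area-preserving, fields is via a stream function; the sketch below uses this device purely as an illustration (the stream function and grid are arbitrary choices of ours, not the paper's parametric class):

```python
import numpy as np

# Any smooth stream function psi yields a divergence-free velocity field
#   u = dpsi/dy,  v = -dpsi/dx,
# so the induced flow preserves area.
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx] / 16.0
psi = np.sin(x) * np.cos(y)           # hypothetical stream function

u = np.gradient(psi, axis=0)          # u = dpsi/dy  (axis 0 is y)
v = -np.gradient(psi, axis=1)         # v = -dpsi/dx (axis 1 is x)

# Discrete divergence: difference operators along different axes
# commute, so it vanishes up to floating-point error.
div = np.gradient(u, axis=1) + np.gradient(v, axis=0)
print(np.abs(div).max())              # numerically zero
```

The same construction explains why the equation is under-determined: every choice of stream function produces an admissible area-preserving field.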
The parameter of the displacement field is estimated by a least squares method, which results in a non-linear minimization problem. In order to obtain a consistent estimate of the displacement field, one has to add a negative penalty term. This is due to an errors-in-variables phenomenon: we assume that the intensity values of the image are disturbed by an observation error. Naturally, the penalty term depends on the variance of this error process. As this noise variance is unknown, it has to be estimated first. Following ideas from non-parametric regression, a difference-based estimator for the two-dimensional case is proposed.
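As an illustration of the difference-based idea in two dimensions, the sketch below uses one standard instance, Immerkaer's mask-based estimator; it is not necessarily the estimator proposed in this work:

```python
import numpy as np

def noise_variance(img):
    """Difference-based estimate of the noise variance of a 2-D image.

    Convolves with the mask
        [ 1 -2  1]
        [-2  4 -2]
        [ 1 -2  1]
    which annihilates locally (bi)linear image content, and normalises
    the mean squared response by the sum of squared mask weights (36).
    """
    x = np.asarray(img, dtype=float)
    d = (    x[:-2, :-2] - 2*x[:-2, 1:-1] +   x[:-2, 2:]
         - 2*x[1:-1, :-2] + 4*x[1:-1, 1:-1] - 2*x[1:-1, 2:]
         +   x[2:, :-2]  - 2*x[2:, 1:-1]  +   x[2:, 2:])
    return np.mean(d**2) / 36.0

rng = np.random.default_rng(0)
trend = np.outer(np.linspace(0, 5, 200), np.linspace(0, 3, 200))  # smooth signal
img = trend + rng.normal(scale=2.0, size=(200, 200))              # true sigma^2 = 4
print(noise_variance(img))            # close to 4.0
```

The point of the difference-based construction is visible here: the mask removes the smooth image content, so the estimate reflects the noise variance alone.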
The coefficients of the transfer function have a geometrical interpretation: they represent the portion of a pixel k that is covered by pixel l after the motion. More precisely, this is the area of intersection of pixel l after its displacement with pixel k, divided by the area of pixel k. If we approximate the displaced pixel by a quadrangle, we have to solve the problem of calculating the area of intersection of two polygons.
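The intersection of a displaced quadrangle with a pixel can be computed by a standard polygon clipping routine. A minimal sketch, using Sutherland-Hodgman clipping and the shoelace formula (the helper names are ours):

```python
def _cross(a, b, p):
    # > 0 if p lies to the left of the directed line a -> b
    return (b[0]-a[0]) * (p[1]-a[1]) - (b[1]-a[1]) * (p[0]-a[0])

def _line_intersection(a, b, p, q):
    # intersection point of the infinite lines through a-b and p-q
    a1, b1 = b[1]-a[1], a[0]-b[0]; c1 = a1*a[0] + b1*a[1]
    a2, b2 = q[1]-p[1], p[0]-q[0]; c2 = a2*p[0] + b2*p[1]
    det = a1*b2 - a2*b1
    return ((b2*c1 - b1*c2) / det, (a1*c2 - a2*c1) / det)

def clip(subject, clipper):
    """Clip polygon `subject` against convex polygon `clipper` (both CCW)."""
    out = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i+1) % len(clipper)]
        inp, out = out, []
        for j in range(len(inp)):
            p, q = inp[j], inp[(j+1) % len(inp)]
            if _cross(a, b, p) >= 0:          # p inside this clip edge
                out.append(p)
                if _cross(a, b, q) < 0:       # edge p-q leaves the half-plane
                    out.append(_line_intersection(a, b, p, q))
            elif _cross(a, b, q) >= 0:        # edge p-q enters the half-plane
                out.append(_line_intersection(a, b, p, q))
        if not out:
            return []
    return out

def area(poly):
    # shoelace formula
    n = len(poly)
    return 0.5 * abs(sum(poly[i][0]*poly[(i+1) % n][1]
                         - poly[(i+1) % n][0]*poly[i][1] for i in range(n)))

# A unit pixel displaced by (0.5, 0.5) overlaps the pixel [0,1]^2 in a
# quarter pixel, so the transfer coefficient is 0.25.
pixel = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
moved = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
print(area(clip(moved, pixel)) / area(pixel))  # -> 0.25
```

Since pixels are convex and their displaced quadrangle approximations remain convex for moderate displacements, this simple clipping scheme suffices.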
The discretization error depends on the gradient of the grey values in the image. By estimating this gradient, one can check how strongly the residuals are affected by the discretization error.
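Such a gradient estimate can be obtained by simple central differences; the sketch below uses a toy linear ramp as the image (our choice, for illustration only):

```python
import numpy as np

# Central-difference estimate of the grey-value gradient; regions with a
# large gradient magnitude are where the discretization error can
# noticeably inflate the residuals.
img = np.fromfunction(lambda i, j: 2.0*i + 3.0*j, (32, 32))  # toy image
gy, gx = np.gradient(img)       # derivatives along rows (y) and columns (x)
grad_mag = np.hypot(gx, gy)     # constant sqrt(13) for this linear ramp
```

In practice one would inspect `grad_mag` of the observed image and compare it with the local residuals of the fit.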
As the residual sum of squares depends on the parameter vector in a rather complicated way, it is difficult to determine the behavior of the estimate. Nevertheless, we are able to prove consistency in the limit as the pixel size tends to zero. Interchanging the limit of the criterion with its minimization over the unknown parameter requires some care.