Dan Mihai Burlacu
Tie point extraction in UAS imagery for vegetation areas
The use of unmanned aerial systems (UAS) as tools for airborne data acquisition has broadened the range of possible applications of photogrammetry. Because UAS can safely fly at lower heights, higher spatial resolutions are attainable, which ideally leads to higher accuracy of the output. However, the terrain in natural areas densely covered by vegetation raises a number of challenges for highly precise point positioning by means of UAS photogrammetry, as indicated in the application scenario presented by Cramer (2016). The automatic tie point extraction process is hindered by the highly similar local appearance of vegetation features and by the stronger perspective distortions caused by the lower flying height. This thesis aims to present the current status of automatic tie point extraction, as well as to discuss possible solutions that would enhance the process, particularly for vegetation-covered areas.
The implemented algorithm makes use of the SIFT detection and description algorithm (Lowe, 2004), as well as the SFOP keypoint detector (Förstner et al., 2009). The proposed strategy is to constrain the process to matching only corresponding sub-blocks of the images. While previous work such as that of Sun et al. (2014) relies on a set of pre-matched points to determine the corresponding sub-blocks, the proposed approach instead derives them from the exterior orientation information that is increasingly available for UAS flight missions, and therefore requires no matching beforehand. The main steps of the matching algorithm are presented in Figure 1.
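The sub-block constraint can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes the exterior orientation has already been condensed into an approximate image-to-image homography `H_ab` (valid for locally planar terrain), and it uses a plain nearest-neighbour descriptor comparison as a stand-in for the full SIFT/SFOP pipeline.

```python
import numpy as np

def cell_of(pt, img_size, grid=(4, 4)):
    """Map a pixel coordinate to its sub-block (grid cell) index."""
    col = min(int(pt[0] / img_size[0] * grid[0]), grid[0] - 1)
    row = min(int(pt[1] / img_size[1] * grid[1]), grid[1] - 1)
    return row, col

def corresponding_cell(pt_a, H_ab, img_size, grid=(4, 4)):
    """Predict the sub-block in image B for a point in image A, using a
    homography H_ab derived from the approximate exterior orientation."""
    p = H_ab @ np.array([pt_a[0], pt_a[1], 1.0])
    p = p[:2] / p[2]
    return cell_of(p, img_size, grid)

def match_within_blocks(kps_a, desc_a, kps_b, desc_b, H_ab, img_size):
    """Match features only between corresponding sub-blocks."""
    cells_b = {}                      # index image-B keypoints by cell
    for j, kp in enumerate(kps_b):
        cells_b.setdefault(cell_of(kp, img_size), []).append(j)
    matches = []
    for i, kp in enumerate(kps_a):
        cand = cells_b.get(corresponding_cell(kp, H_ab, img_size), [])
        if not cand:
            continue
        d = [np.linalg.norm(desc_a[i] - desc_b[j]) for j in cand]
        matches.append((i, cand[int(np.argmin(d))]))
    return matches
```

Restricting each keypoint's candidate set to one predicted sub-block both prunes the search and removes many of the ambiguous candidates that repetitive vegetation texture would otherwise produce across the whole image.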
For insufficiently distinctive vegetation features, invariant descriptors are actually detrimental to the matching process. While the scale and rotation invariance of the descriptors is an obvious necessity for matching features in unordered datasets, for standard photogrammetric blocks with regular strip geometry and an approximately constant flying height this invariance unnecessarily lowers the number of matches and the overall accuracy of the matching process, particularly in vegetation-covered areas. In the following investigations, the effect of removing the rotation invariance of the SIFT descriptor is therefore also analysed.
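Why rotation invariance hurts here can be shown with a toy gradient-orientation histogram (a deliberately simplified stand-in for SIFT's dominant-orientation normalisation, not the actual descriptor): two repetitive features that differ only by an in-plane rotation collapse onto the same invariant descriptor and become indistinguishable, whereas without the normalisation they remain distinct.

```python
import numpy as np

def make_invariant(hist):
    """Simplified rotation normalisation: circularly shift a
    gradient-orientation histogram so its dominant bin comes first."""
    return np.roll(hist, -int(np.argmax(hist)))

# Two vegetation-like features that differ only by an in-plane rotation,
# i.e. their orientation histograms are circular shifts of each other.
h1 = np.array([5.0, 1.0, 2.0, 1.0])
h2 = np.roll(h1, 2)  # the same texture, rotated by 180 degrees

# With invariance both collapse onto one descriptor -> ambiguous match.
print(np.array_equal(make_invariant(h1), make_invariant(h2)))  # True
# Without invariance the two features stay distinguishable.
print(np.array_equal(h1, h2))  # False
```

In a block with near-constant heading and flying height, the rotation that the invariance discards is itself a discriminative cue, which is exactly what repetitive vegetation texture lacks elsewhere.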
Three different datasets were used to carry out the investigations with the proposed matching algorithm. In order to evaluate the current status of tie point extraction, the number, the distribution and, ultimately, the image observation residuals of the tie points extracted by three currently available photogrammetric software packages (Pix4Dmapper, Agisoft PhotoScan and Trimble Inpho Match-AT) were analysed and then used as a reference in further comparisons, all carried out under the same conditions (Figure 2). The results highlight the benefit of removing the invariance of the descriptors in vegetation areas. In Figure 2, xSFOP denotes that rotation invariance was removed for the SIFT descriptors computed at the SFOP keypoints, while the values in parentheses denote the epipolar line constraint thresholds for each case.
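The epipolar line constraint mentioned above amounts to rejecting a candidate match whose point in the second image lies more than a threshold distance from the epipolar line predicted for its point in the first image. A minimal sketch, assuming a known fundamental matrix `F` (e.g. derived from the exterior orientation) and a threshold in pixels:

```python
import numpy as np

def epipolar_distance(x1, x2, F):
    """Distance (in pixels) of point x2 in image 2 from the epipolar
    line of point x1, given the fundamental matrix F."""
    l = F @ np.array([x1[0], x1[1], 1.0])        # epipolar line in image 2
    return abs(l @ np.array([x2[0], x2[1], 1.0])) / np.hypot(l[0], l[1])

def passes_epipolar_check(x1, x2, F, threshold=2.0):
    """Accept a candidate match only if it lies within `threshold`
    pixels of the predicted epipolar line."""
    return epipolar_distance(x1, x2, F) <= threshold
```

A tighter threshold prunes more false matches among repetitive vegetation features but also discards correct matches when the orientation information is only approximate, which is why several threshold values are compared in Figure 2.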
The main obstacles when attempting to match features in vegetation areas are the repetitiveness and indistinctiveness of the features, which lead to the rejection of ambiguous matches. This indistinctiveness may, however, be reduced by removing the invariance of the descriptors whenever it is not required, particularly for regular photogrammetric blocks.
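The rejection of ambiguous matches is typically done with Lowe's ratio test, which accepts the nearest descriptor only if it is clearly closer than the second nearest. The sketch below (with an illustrative ratio of 0.8) shows why repetitive vegetation features suffer: their near-identical descriptors yield distance ratios close to 1 and are discarded.

```python
import numpy as np

def ratio_test(desc, candidates, ratio=0.8):
    """Lowe's ratio test: accept the nearest candidate only if it is
    clearly closer than the second nearest one."""
    d = sorted(np.linalg.norm(desc - c) for c in candidates)
    if len(d) < 2:
        return True
    return d[0] < ratio * d[1]

query = np.array([0.0, 0.0])
# A distinctive feature: one candidate is much closer -> accepted.
print(ratio_test(query, [np.array([0.1, 0.0]), np.array([1.0, 0.0])]))   # True
# A repetitive feature: two near-equal candidates -> rejected as ambiguous.
print(ratio_test(query, [np.array([0.5, 0.0]), np.array([0.52, 0.0])]))  # False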
Future work may concern the study and development of a trained descriptor specifically designed for vegetation areas, as a more general approach to the matching process. However, assuming that vegetation features can now be matched sufficiently accurately, as the spatial resolution of the imagery increases the main issue becomes the inherent movement of vegetation. If vegetation objects are not static, the respective tie points may ultimately be discarded despite the fact that the features are initially matched correctly.
Cramer, M., 2016. UAS photogrammetry for high precise point positioning of linear objects. Lausanne: EuroCOW.
Förstner, W., Dickscheid, T. & Schindler, F., 2009. Detecting interpretable and accurate scale-invariant keypoints. Kyoto, Japan: 12th IEEE International Conference on Computer Vision (ICCV'09).
Lowe, D., 2004. Distinctive Image Features from Scale-Invariant Keypoints. s.l.:International Journal of Computer Vision.
Sun, Y., Zhao, L., Huang, S. & Dissanayake, G., 2014. L2-SIFT: SIFT feature extraction and matching for large images in large-scale aerial photogrammetry. s.l.:ISPRS Journal of Photogrammetry and Remote Sensing.