A rigorous formulation in terms of only feature descriptors is given for two- and three-dimensional transformations, photogrammetric conditions, and linear-feature geometric constraints. Experimental results, considering control and pass features, are presented for single-photo resection (recovering both interior and exterior orientation elements) and two-photo triangulation (estimating pass lines for object completion), using simulated data and some real image data. Geometric constraints are used to provide redundancy in the case of straight lines in stereo pairs. Extensive investigation is continuing.
INTRODUCTION

Image information represents different forms in the object space. In general, these forms have been classified as point features, linear features, and area features. Many past photogrammetric reduction treatments were developed on the basis of object point features; linear and areal features were usually extracted from the formed photogrammetric model but did not contribute to the solution.

At the present time, the conditions involved in photogrammetric activities are changing substantially. Digital imagery, either directly acquired or derived from digitized photography, is becoming much more widely available than before. Furthermore, increasingly robust techniques for feature extraction from digital imagery are continuously being developed. In particular, edges and linear features are relatively easy to detect and extract from digital imagery. Photogrammetric methodology, therefore, is being expanded to accommodate features other than points, especially linear features. In particular, given the image descriptions of an object linear feature on two or more images, the original linear feature may be rigorously derived when the image interior and exterior orientation is known. The derivation need not depend on having corresponding image point features that lie on the linear feature. In fact, the feature description on the various overlapping images may be of distinctly different segments of the linear object feature, in which case there is no possibility of having conjugate image points. This also clearly implies that the description of the image feature is not the primary factor in the modeling; instead, it is the description of the feature in the object space.

Once a general mathematical model for the linear feature is developed, it can be applied to various photogrammetric problems. For example, in photogrammetric resection of a single image, the linear feature is used as control in order to recover the sensor/platform exterior orientation parameters. For relative orientation of overlapping images, it acts as a "pass linear feature" in the same sense as a pass point. Space intersection of linear features would involve the solution for the object-space description of a linear feature given its image representation in two or more images and all the sensor/platform model parameters. Finally, photogrammetric triangulation would be a simultaneous resection/intersection where linear features may be used either as control and/or as pass features, t...
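To make the intersection idea above concrete, the sketch below reconstructs an object-space straight line from non-conjugate observations of it on two images whose interior and exterior orientations are known. It assumes a simple frame-camera (pinhole) model with principal distance f, perspective centre C, and rotation matrix R from object space to image space; the function names and the NumPy formulation are illustrative only and are not the paper's own derivation or implementation.

```python
import numpy as np

def interpretation_plane(C, R, f, p1, p2):
    """Projection (interpretation) plane defined by one image.

    C     : (3,) perspective centre in object space
    R     : (3,3) rotation from object space to image space
    f     : principal distance (focal length)
    p1,p2 : (x, y) image coordinates of two points on the observed
            image line segment (they need not be conjugate to points
            observed on any other image)
    Returns (n, d) such that the plane is n . X = d in object space.
    """
    # Ray directions in object space for the two image points
    d1 = R.T @ np.array([p1[0], p1[1], -f])
    d2 = R.T @ np.array([p2[0], p2[1], -f])
    n = np.cross(d1, d2)            # normal of the plane through C and both rays
    n = n / np.linalg.norm(n)
    return n, n @ C                 # the plane passes through the perspective centre

def intersect_planes(n1, d1, n2, d2):
    """Object-space line common to two projection planes.

    Returns a point X0 on the line and its unit direction v.
    Valid when the two planes are not parallel (i.e. the line is
    not in an epipolar plane of the image pair).
    """
    v = np.cross(n1, n2)
    v = v / np.linalg.norm(v)
    # Two plane equations plus the gauge X0 . v = 0 fix a unique point
    A = np.vstack([n1, n2, v])
    b = np.array([d1, d2, 0.0])
    X0 = np.linalg.solve(A, b)
    return X0, v
```

Each image contributes one projection plane through its perspective centre and the observed image segment; the object line is the intersection of the planes from two or more images, so the observed segments need not overlap and no conjugate image points are required.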