Photogrammetry vs. LiDAR: Applying the Best Technology for the Job

Photogrammetry and LiDAR (light detection and ranging) can produce similar outputs. Understanding their technological differences is crucial for land surveyors and geospatial professionals to get the best accuracy out of both methodologies.
[ Page 2 of 5 ]
Provided by POB Magazine
By Mary E. Shacklett

LiDAR History & Technology Development

LiDAR’s history began much later than that of photogrammetry. The first attempts to measure distance by light beams were made in the 1930s with searchlights that were used to study the structure of the atmosphere and with light pulses that were used to determine the heights of clouds.

Then in 1961, under the direction of Malcolm Stitch, who headed the laser development program at Hughes Aircraft Company, the first LiDAR-like system was developed just after the invention of the laser. This early system was designed to track satellites. It combined laser-focused imaging with the ability to calculate distances by measuring the time it took for a laser signal to bounce off an object and return to its source. As part of this process, sensors and data acquisition electronics were used.

In 1963, the formal term “LiDAR” was introduced. LiDAR’s first common use was in meteorology, where it was used to measure clouds and pollution.

By the time of the 1971 Apollo 15 mission, astronauts were using an altimeter equipped with LiDAR to map the moon's surface.

However, demand for LiDAR grew in the 1980s alongside an equally important need: an effective global positioning system (GPS). LiDAR sensors capable of emitting 2,000 to 25,000 pulses per second were on the market by the 1990s. As in photogrammetry, these systems could deliver dense data sets. Unfortunately, the new LiDAR systems were also extremely expensive.

In the early days of LiDAR, users were primarily interested in mapping the earth's surface and in extracting features from these maps such as roads, buildings and forest canopy characterizations.

Excerpted from the Skagit Lidar Map Journal: skagitcounty.net/Departments/GIS/lidar.htm

LiDAR helps reveal landscape patterns often covered by trees in Skagit County, Washington. In this example, color shading of the LiDAR elevation data is used to illuminate old and abandoned channels formed by the Skagit River. These historic patterns help identify the Channel Migration Zones, which are areas where the river may be expected to move again in the future. Most of Skagit County has been flown to collect high-resolution LiDAR data.

How LiDAR Works

For large areas, an aerial LiDAR system is deployed to collect data. A device installed in an airplane emits infrared laser pulses as the plane flies back and forth across the landscape. The system records how long it takes for each pulse to travel to the Earth and back. On-board computer systems can then calculate the location and height of the spot where the laser beam hits an object or the ground. The laser pulses strike aboveground objects, but many also make it through to the ground. A secondary processing step removes the aboveground material (trees, houses, etc.), and the resulting data set is the bare earth. This allows us to "see through the trees" at the ground that is usually obscured in aerial images.
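The bare-earth step described above can be sketched as a simple filter over classified points. This is an illustrative sketch only: the point values below are invented, and it assumes the points already carry ASPRS LAS classification codes (2 = ground, 5 = high vegetation, 6 = building), which in practice are assigned by ground-filtering algorithms.

```python
# Minimal sketch of bare-earth extraction from a classified point cloud.
# Classification codes follow the ASPRS LAS convention:
# 2 = ground, 5 = high vegetation, 6 = building.

points = [
    # (x, y, elevation_m, classification)
    (100.0, 200.0, 15.2, 2),   # ground
    (100.5, 200.5, 27.8, 5),   # tree canopy
    (101.0, 201.0, 22.3, 6),   # building roof
    (101.5, 201.5, 15.4, 2),   # ground
]

# Keep only ground-classified returns to produce a bare-earth surface.
bare_earth = [p for p in points if p[3] == 2]

for x, y, z, _ in bare_earth:
    print(f"ground point at ({x}, {y}), elevation {z} m")
```

Dropping the vegetation and building returns leaves only the terrain surface, which is what lets LiDAR "see through the trees."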

Excerpted from the Skagit Lidar Map Journal: skagitcounty.net/Departments/GIS/lidar.htm

A normal aerial image of the landscape and tree patterns of the same area in Skagit County, Washington.

The principle of LiDAR is to shine a small light (laser) at a surface and measure the time that the light takes to return to its source (i.e., distance = speed of light × time of flight ÷ 2). When it executes this process, a LiDAR instrument can fire up to 150,000 pulses of light per second.
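The range equation above is a one-liner in code. A short sketch (function name and sample time of flight are illustrative):

```python
# Range from two-way travel time: distance = (speed of light × time of flight) / 2.
# The division by 2 accounts for the pulse traveling out and back.

C = 299_792_458.0  # speed of light in m/s

def pulse_range(time_of_flight_s: float) -> float:
    """One-way distance to the target from the round-trip pulse time."""
    return C * time_of_flight_s / 2.0

# A return arriving after about 6.67 microseconds corresponds to roughly 1 km.
print(pulse_range(6.67e-6))
```

Because light travels about 30 cm per nanosecond, timing precision on the order of nanoseconds is what gives LiDAR its centimeter-level ranging capability.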

The laser pulses and their measurements are repeated in rapid succession in order to develop a “map” of a given area and its objects. As part of the exercise, the height, location and orientation of the LiDAR instrument must also be known for every laser pulse that is recorded.

LiDAR systems consist of four main components:

  • Lasers of 600-1,000 nm wavelengths that are used in measurement;
  • Photodetector and receiver electronics that read and record signals as they enter the system;
  • Scanners and optics for image capture; and
  • Navigation and positioning systems.

Once LiDAR data is captured, it must be processed, and there is an initial pre-processing step where the LiDAR data is pulled from the LiDAR instrument in a specific sequence.

The laser data is pulled first, followed by the raw positional data, then the ground base station data, and, finally, the raw GPS and inertial measurement unit (IMU) data — which captures the movement of the LiDAR instrument. The totality of this data is then consolidated and processed by LiDAR software into what is known as a trajectory file, a large binary file that contains the time-varying coordinates for each image frame in the system (photo below).
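The role of the trajectory in georeferencing can be illustrated with a deliberately simplified sketch: each laser range is combined with the sensor position interpolated from the trajectory at the pulse's timestamp. Everything here — the function names, the altitude-only trajectory, and the flat-earth geometry using only the off-nadir scan angle — is an assumption for illustration, not any vendor's actual processing chain, which solves the full 3D position and attitude.

```python
# Illustrative sketch: placing a LiDAR return using a (time, altitude)
# trajectory. Real trajectory files carry full position and attitude.

import bisect
import math

# trajectory: (gps_time_s, sensor_altitude_m) samples from the GPS/IMU solution
trajectory = [(0.0, 1200.0), (1.0, 1210.0), (2.0, 1205.0)]

def sensor_altitude(t: float) -> float:
    """Linearly interpolate the sensor altitude at pulse time t."""
    times = [sample[0] for sample in trajectory]
    i = bisect.bisect_right(times, t)
    t0, a0 = trajectory[i - 1]
    t1, a1 = trajectory[min(i, len(trajectory) - 1)]
    if t1 == t0:
        return a0
    return a0 + (a1 - a0) * (t - t0) / (t1 - t0)

def ground_elevation(t: float, slant_range_m: float, scan_angle_deg: float) -> float:
    """Elevation of the spot the pulse hit, from range and off-nadir scan angle."""
    vertical_drop = slant_range_m * math.cos(math.radians(scan_angle_deg))
    return sensor_altitude(t) - vertical_drop

print(round(ground_elevation(0.5, 1190.0, 10.0), 1))
```

This is why the trajectory file must be built before the point cloud: every pulse's coordinates depend on where the instrument was, and how it was oriented, at the instant the pulse fired.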

Excerpted from Esri Arcgis: desktop.arcgis.com/en/arcmap/10.3/manage-data/las-dataset/what-is-a-las-dataset-.htm

Example of an LAS data file with corresponding LiDAR image.

After this pre-processing is completed, calibrations are made to the data to unify all of the differences in the sampling of different types of data that were collected. The resulting uniform file is then placed into an LAS format, which is an industry-standard binary format used for storing airborne LiDAR data that also enables the exchange of LiDAR 3D point cloud data between data users. From this LAS format, the data can then be moved into a format that can be used by commercial engineering and mapping software.
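The LAS format's point records can be illustrated with Python's standard `struct` module. The layout below follows the 20-byte LAS point data record format 0, in which X, Y and Z are stored as signed 32-bit integers that must be rescaled with the scale factors and offsets from the file header; the header values and point values here are invented for demonstration.

```python
# Minimal sketch of one LAS point data record (format 0, 20 bytes):
# X, Y, Z (int32), intensity (uint16), return byte, classification,
# scan angle (int8), user data, point source ID (uint16).

import struct

# Header values (illustrative): per-axis scale factor and offset
scale = (0.01, 0.01, 0.01)
offset = (500000.0, 4000000.0, 0.0)

POINT_FMT = "<lllHBBbBH"  # little-endian, 20 bytes total

record = struct.pack(POINT_FMT, 123456, 654321, 15234, 812, 0b00010001, 2, -5, 0, 1)

x_i, y_i, z_i, intensity, _, classification, _, _, _ = struct.unpack(POINT_FMT, record)

# Apply scale and offset to recover real-world coordinates.
x = x_i * scale[0] + offset[0]
y = y_i * scale[1] + offset[1]
z = z_i * scale[2] + offset[2]

print(x, y, z, intensity, classification)
```

Storing coordinates as scaled integers keeps the records compact and uniform, which is part of what makes LAS practical for exchanging point clouds of hundreds of millions of returns.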

LiDAR does not provide imagery, though the intensity of the returns or the ranges of elevation from LiDAR are often rendered to simulate imagery. Most units in use today have the ability to measure the amount of energy returned from each outgoing laser pulse, which is commonly referred to as the intensity, when the laser pulses are reflected off the ground. The intensity of a return off asphalt is very different from a return off concrete. Concrete is a highly reflective surface while the tar in asphalt absorbs quite a bit of the energy from the laser.

Similarly, vegetation has a different reflectance when compared to bare ground. Water typically absorbs all of the energy from the laser. But intensity images don’t provide nearly the same information as true imagery, due to the limited dynamic range of the data and the relative coarseness of the data postings.
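The surface differences described above can be sketched as a simple intensity classifier. The thresholds below are invented for demonstration; real intensity values vary by sensor, range, and incidence angle, and would need calibration before any such rule could be trusted.

```python
# Illustrative sketch: separating surface types by return intensity.
# Thresholds are made up; real values are sensor- and survey-dependent.

def classify_by_intensity(intensity: int) -> str:
    if intensity < 10:
        return "water"       # water absorbs nearly all of the pulse energy
    if intensity < 60:
        return "asphalt"     # tar absorbs much of the laser energy
    if intensity < 120:
        return "vegetation"  # intermediate reflectance
    return "concrete"        # highly reflective surface

returns = [5, 45, 95, 180]
print([classify_by_intensity(i) for i in returns])
```

Even this crude rule shows why intensity is useful for distinguishing pavement types or water bodies, while also hinting at its limits: the classes overlap in practice, which is why intensity images cannot replace true imagery.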

Source: delair.aero

A 3D output map from LiDAR provides elevation information, which can be colorized based on either elevation or intensity to aid interpretation. High-end LiDAR systems perform well when capturing data over power lines or other small-diameter structures.

Lidargrammetry is an interesting development for breakline collection and the compilation of some planimetric features without the use of photography. This new application of LiDAR technology refers to the creation of stereopairs from LiDAR intensity models that can subsequently be used in the soft copy photogrammetric collection. It works well in some areas of the country, particularly in areas with significant terrain relief that is covered by dense post spacings within the LiDAR acquisition. But it does not provide the same accuracy available from low-altitude photo flights (mainly due to the relative coarseness of the LiDAR coverage) and, therefore, its application is somewhat limited depending on the scope of the project.

There are some reasonably good methods and software for doing automated feature extraction from LiDAR data, but the accuracy and detail expected for most large-scale planimetric maps in terms of common features contained in the maps (roads, driveways, building footprints, sidewalks, manholes, curb inlets, utility poles, fences, group vegetation, etc.) require traditional mapping approaches from stereophotography.

 

Originally published in October 2020
