What is photogrammetry?
Photogrammetry is the process of extracting information from photographs to create accurate three-dimensional maps and models. Using ultra-high-resolution aerial photographs, this practice combines UAV-mounted overhead sensors with powerful GIS mapping systems to create dynamic, measurable documents for a wide range of real-world applications.
Photogrammetry has its earliest origins in surveillance and reconnaissance. Pilots during the First World War combined new innovations in both photography and manned flight to gather intel from behind enemy lines. The photographs alone held little value without context, so these pioneers used local landmarks and landscape features to determine the orientation of objects in the images. In the decades that followed, these practices would evolve with new tools, from stratospheric U-2 aircraft to advanced meteorological satellites to modern drone photogrammetry.
Today’s photogrammetric maps are constructed using advanced GIS software that can generate surveyor-grade measurements of landscapes and infrastructure. These maps are detailed enough to provide valuable insight into on-the-ground environmental conditions by documenting erosion, vegetation density, water clarity, and more. And that’s just the beginning of what photogrammetry software can do.
A beginner’s glossary of photogrammetry terms and concepts
Before we dive in, here’s a primer on some key terms and concepts that are essential to making the best photogrammetry software decision for your organization.
What is an orthophoto or orthoimage?
An orthophoto is an aerial image that’s geometrically corrected to produce a uniform perspective and scale, so it can be used to measure true distances.
To produce a uniform scale, the image needs to be corrected for factors including camera tilt, lens distortion, and environmental conditions.
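As one illustration of what correcting for lens distortion involves, many photogrammetry packages model radial distortion with the Brown-Conrady polynomial. The sketch below is simplified — the coefficients are made up for illustration, and real correction software inverts this mapping (undistorting the image) rather than applying it:

```python
def apply_radial_distortion(x, y, k1, k2):
    """Map an ideal (undistorted) normalized image coordinate to its
    distorted position using the Brown-Conrady radial model."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Example: a point near the image edge with mild barrel distortion (k1 < 0).
# Both coefficients here are hypothetical values, not calibrated ones.
xd, yd = apply_radial_distortion(0.8, 0.6, k1=-0.1, k2=0.01)
```

Because the distortion grows with distance from the image center, points near the edges of an aerial photo shift the most — which is why uncorrected images can’t be measured reliably.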
What is an orthomosaic map?
Using advanced software, a selection of orthophotos can be stitched together to produce a 2D or 3D map of on-the-ground conditions.
An orthomosaic map is a distortion-free, interactive display of high-resolution imagery that can be used to measure accurate distances between actual geographic features.
What is remote sensing?
Remote sensing describes a suite of technologies that use overhead photography and sensors to create detailed maps for measurement and study.
Photogrammetry is one of several tools in remote sensing, and it is used to process images collected by sensors mounted on UAVs, manned aircraft, and satellites. Other forms of remote sensing document infrared and UV radiation, point-by-point distances, and more.
What is Structure from Motion (SfM)?
Structure from Motion is a technique that calibrates two-dimensional images into a reconstruction of a three-dimensional structure, scene, or object.
Using ultra-high-resolution digital surface imagery, SfM can produce incredible point-cloud-based 3D models with similar measurement quality to LiDAR.
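The core geometric step behind SfM — recovering a 3D point from its position in two overlapping images — can be sketched with linear triangulation. The camera matrices and point below are made-up values for illustration; a real SfM pipeline also matches features between images and estimates the camera poses themselves:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one 3D point from two camera views."""
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # null vector of A is the solution
    X = Vt[-1]
    return X[:3] / X[3]                  # dehomogenize

# Two hypothetical cameras: one at the origin, one shifted 1 unit along x.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 5.0])      # a 3D point in front of both cameras
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)      # recovers X_true (noise-free case)
```

Repeat this over thousands of matched points across hundreds of overlapping images and you get the dense point clouds that SfM-based photogrammetry is known for.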
What is a geographic information system (GIS)?
Geographic information systems (GIS) layer high-resolution imagery onto georeferenced positioning data for mapping purposes.
Google Earth is perhaps the most ubiquitous GIS in existence, but geographic information system data also powers meteorology, advanced surveying and mapping, navigation, and much more.
What is metadata?
Metadata is a series of data-encoded notes collected alongside orthoimages to provide additional context for mapping and modeling software. Metadata may include:
GPS coordinates
Time/date
Focal length
Resolution settings
Atmospheric conditions
And more
Metadata will tell users the conditions in which the data set was created and who created it, both of which offer valuable information for building a uniform scale and perspective.
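As an illustration of what’s inside that metadata, GPS coordinates in image files are commonly stored as degree/minute/second rational values rather than decimals. Here’s a minimal, stdlib-only sketch of converting them to decimal degrees (the sample coordinates are made up):

```python
from fractions import Fraction

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style degrees/minutes/seconds GPS tuple to signed
    decimal degrees. `ref` is 'N'/'S' for latitude, 'E'/'W' for longitude;
    south and west become negative."""
    value = float(degrees) + float(minutes) / 60 + float(seconds) / 3600
    return -value if ref in ("S", "W") else value

# Image metadata stores each component as a rational, e.g. 37 deg 46' 29.7"
lat = dms_to_decimal(Fraction(37), Fraction(46), Fraction(297, 10), "N")
lon = dms_to_decimal(Fraction(122), Fraction(25), Fraction(99, 10), "W")
```

Mapping software performs conversions like this for every image so that each orthophoto can be pinned to an exact spot on the globe.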
What is the difference between 3D mapping and 3D modeling?
There’s a lot of overlap between these two technologies. 3D mapping creates an orthomosaic map that has the texture and visual dimension of a 3D model but remains a fundamentally two-dimensional document.
3D modeling introduces depth to the 3D photogrammetry equation by creating composite images with height as well. This added dimension allows the user to view structures and environmental features from multiple angles.
For example, 3D real estate models allow you to “walk through” or do a fly-by on the property to see different “sides” of a home or landscape by clicking different perspectives on the map.
Photogrammetry basics: How does photogrammetry work?
A popular tool in remote sensing, photogrammetry processes images collected by sensors mounted on UAVs, manned aircraft, or satellites to create large-scale composite images.
These images, called orthophotos or orthoimages, are pinned to a location using GPS positioning and normalized using metadata on environmental conditions like humidity, time, date, and more. This information is sent to servers for collection and storage.
Once collected, orthoimages can be fed into advanced mapping and surveying software to create measurable 3D maps and renderings. Comparing differences in data over time can tease out variations in chemical composition, hydration and humidity, temperature, and other environmental factors — all without putting boots on the ground.
This eye-in-the-sky view is incredibly valuable for assessing large properties and examining remote infrastructure without substantial investment in manned teams.
Photogrammetry and the electromagnetic spectrum
Simple variations in visible light can offer a lot of clues about the objects below.
Photogrammetry sensors collect light from the visible light spectrum (and in some cases, beyond it) to create a picture of landscapes, vital infrastructure, or any 3D object or scene. Add environmental metadata to high-resolution images and researchers can draw remarkably accurate inferences about real-world conditions.
Light does more than create a nice picture for the map. Rocks, vegetation, and manufactured objects all have unique spectral fingerprints that can be used to help identify their density, chemical composition, and more.
Armed with high-powered remote sensing technology, researchers can use aerial photogrammetry to gather evidence from other spectrums (such as ultraviolet or infrared radiation) alongside visible light to draw more in-depth conclusions about the environment below.
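One common example of how those spectral fingerprints get used is a vegetation index such as NDVI, which compares near-infrared and red reflectance to estimate how much healthy vegetation a pixel contains. This is a minimal sketch with made-up reflectance values, not output from a real sensor:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation reflects strongly in near-infrared, so NDVI
    approaches +1 over dense canopy and hovers near 0 over bare soil."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)  # avoid divide-by-zero

# Toy 2x2 reflectance rasters (illustrative values only)
nir = np.array([[0.50, 0.40], [0.30, 0.05]])
red = np.array([[0.08, 0.10], [0.20, 0.04]])
print(ndvi(nir, red))
```

Run over a full multispectral orthomosaic, the same arithmetic produces a vegetation-health map of an entire property from a single flight.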
What is LiDAR?
Inspired by sonar and echolocation, LiDAR fires laser pulses and measures their return time to create a detailed point-by-point map — a point cloud — of an object’s position in space.
Commonly used to build spatial awareness into augmented reality, automated driving software, and advanced surveying, LiDAR can analyze large parcels of land for density, topography, and vegetation. While LiDAR can produce incredibly accurate measurements, it doesn’t create an orthoimage — and therefore lacks critical environmental data.
Achieving clarity: image capture and building a drone flight plan
For best results, an image capture flight needs to be planned carefully and executed properly. Factors like altitude, humidity, speed, and light temperature will all impact the quality of images (and therefore the quality of the finished orthomosaic map).
In an ideal scenario, a drone flight plan will be uniform in every possible sense. Images will be collected from the same elevation above a target object or landscape, and shot at the same speed with consistent atmospheric conditions. Any deviations in flight path and image capture process should be minute enough to be normalized during processing before the model is rendered.
Common problems with orthomosaic maps
Drones should make planning flights for orthomosaic photogrammetry relatively easy. With expert drone pilots at the helm, image collection should involve little more than setting a flight path, launching the UAV, and performing quality assurance on images once they’re collected.
However, without experience and careful execution, some common problems arise:
Too many gaps. Orthoimages need to overlap enough for processing software to create a comprehensive map, otherwise gaps, inaccuracies, and visual distortion occur.
Low detail. Poor lighting, bad weather, and outdated camera hardware can produce unfocused, blurry images with vignetting and distortions that reduce data quality.
Irrelevant images. Data sets that include nonessential views from outside established metadata parameters — for example, off-angle takeoff and landing images or images taken outside the target area — can introduce ambiguity into your map.
To produce high-quality orthomosaic maps, you need a well-planned, professionally executed flight.
Tips for collecting high-quality images
Altitude and speed
Altitude and speed must be properly balanced to produce high-resolution imagery. Gathering data from a lower altitude captures more detail, but the aircraft must travel slowly and steadily enough to produce those low-altitude images without distortion and blur.
The ideal speed and altitude will change depending on the drone model, camera hardware, or even the chosen landscape. To find the best combination for a project, conduct test runs and work with a seasoned professional.
Also, remember that altitude is defined as the distance above an object of interest, and that may vary throughout the drone flight depending on the height of buildings or landscape features. You will want flight control software that can maintain a constant distance above the subject rather than a constant altitude above sea level.
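The altitude-versus-speed trade-off can be roughed out numerically. The sketch below uses the standard ground-sample-distance formula; the sensor, lens, and shutter values are hypothetical, so substitute your own hardware’s specifications:

```python
def ground_sample_distance(sensor_width_mm, focal_length_mm,
                           altitude_m, image_width_px):
    """Ground sample distance: meters of ground covered by one pixel."""
    return (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)

def max_speed(gsd_m, shutter_s, max_blur_px=1.0):
    """Fastest ground speed (m/s) that keeps motion blur under
    `max_blur_px` pixels for a given shutter time."""
    return max_blur_px * gsd_m / shutter_s

# Hypothetical setup: 13.2 mm sensor, 8.8 mm lens, 5472 px wide, 100 m up
gsd = ground_sample_distance(13.2, 8.8, 100, 5472)   # ~2.7 cm per pixel
speed = max_speed(gsd, shutter_s=1 / 1000)           # m/s at a 1/1000 s shutter
```

Halving the altitude halves the ground sample distance (more detail per pixel), but it also halves the speed the drone can fly before motion blur creeps in — which is exactly the balance described above.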
Image angles
In order to get orthoimages that easily normalize for uniform processing, the camera should be pointed straight down at the nadir (in drone-based photogrammetry, the nadir is the point directly below the camera, so a nadir shot has a field of view perpendicular to the ground).
Images taken from an oblique angle will show subjects from a different perspective than the other images in the data set. This causes distortion in the final orthomosaic map, which reduces accuracy and limits the potential for measurement and analysis.
To avoid non-nadir images, it’s important to plan your flight with turns in mind. Don’t use images taken during takeoff or landing, and stop image collection while reorienting on the flight path.
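To see why even a small tilt matters, the displacement of the image center on the ground grows with altitude times the tangent of the tilt angle. A quick illustration (the altitude and tilt values are hypothetical):

```python
import math

def ground_offset(altitude_m, tilt_deg):
    """Horizontal displacement of the image center on the ground caused
    by tilting the camera `tilt_deg` away from nadir."""
    return altitude_m * math.tan(math.radians(tilt_deg))

offset = ground_offset(100, 3)   # 3 degrees off-nadir at 100 m: about 5.2 m
```

A few meters of unaccounted-for shift per image is more than enough to smear features across an orthomosaic, which is why turns, takeoff, and landing are excluded from capture.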
Camera settings
When it comes to camera settings, test and test again; default settings aren’t always the best configuration for aerial photos. Subtle miscues in contrast, aperture, shutter speed, and ISO can introduce distortion that is difficult to correct (and may impact your data).
The right camera settings may change depending on the time of year, the time of day, and the weather, all of which can affect color and shadow in images. Overcast days with light cloud cover tend to produce good-quality images.
Image overlap
Image overlap is necessary for creating highly detailed orthomosaic images with no gaps in visual or data continuity. The more overlap, the more data is collected for the software to include in a composite map — and the more detailed and accurate the map will be.
When planning your drone flight, aim for at least 70% image overlap. Some projects will require more detail and less distortion to be effective, in which case 80% or 90% overlap may be needed. For less detailed maps and models, 60% may be adequate. Be sure to weigh the cost of extra drone coverage against the detail necessary when setting your flight plan.
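Overlap percentages translate directly into how often the camera fires and how far apart flight lines sit. A minimal sketch, assuming a hypothetical 100 m by 75 m image footprint:

```python
def capture_spacing(footprint_m, overlap_pct):
    """Distance to travel between captures so that consecutive images
    share `overlap_pct` percent of their ground footprint."""
    return footprint_m * (1 - overlap_pct / 100)

# Hypothetical 100 m x 75 m footprint (width x depth on the ground)
forward = capture_spacing(75, 80)    # 80% front overlap: shoot every 15 m
side = capture_spacing(100, 70)      # 70% side overlap: lines 30 m apart
```

Raising overlap from 70% to 90% triples the number of flight lines over the same area, which is the coverage cost to weigh against the extra detail.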
Keep in mind that older or weaker software may get bogged down when processing an excess of images, which can drastically slow down processing time. For highly detailed maps, software choice (which we cover below) is especially important. You don’t want to end up manually choosing which photos to upload in order to move the process along.
How is resolution defined in photogrammetry?
The quality of an orthophoto is centered on three forms of resolution: spatial, temporal, and spectral.
What is spatial resolution?
Spatial resolution describes the amount of visual data collected in each image pixel. Spatial resolution is measured in physical terms — an image with 100 m resolution captures 100 meters by 100 meters of ground per pixel.
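Put another way, spatial resolution times pixel count gives the ground footprint of a single image. A quick sketch with hypothetical numbers:

```python
def image_footprint(resolution_m_per_px, width_px, height_px):
    """Ground area covered by one image, given its spatial resolution."""
    width_m = resolution_m_per_px * width_px
    height_m = resolution_m_per_px * height_px
    return width_m, height_m

# A hypothetical 4000 x 3000 px image at 5 cm/px covers 200 m x 150 m
w, h = image_footprint(0.05, 4000, 3000)
```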
What is temporal resolution?
Temporal resolution describes how much time elapsed between images or data sets, which affects analytic quality. Good temporal resolution requires data collection at regular intervals with few substantial gaps.
What is spectral resolution?
Spectral resolution describes the capacity of a sensor to collect information across electromagnetic wavelengths. A sensor may be well suited to documenting variances in color, infrared light, or other forms of electromagnetic energy.