UAV-based photogrammetric 3D modelling and surveillance of forest wildfires

Forest-fire fighting commonly depends on operational data acquired by firefighters from visual observations. Such estimations are subject to various errors and inaccuracies due to smoke obscuring the flames, the limits of human visual estimation and errors in identifying the fire location. Various new technologies have been applied to firefighting in recent years, helping to resolve such problems. However, many of these still suffer from practical problems under operational conditions, such as low reliability and accuracy, and high costs. Unmanned Aerial Vehicles (UAVs) can play an important role in forest fire response. They have already been successfully demonstrated for fire detection, localization and observation. Fire monitoring generally involves real-time analysis of the evolution of the most important parameters of fire propagation, including the shape and position of the fire front, its evolution in time and the average height of the flames. When integrated within a Geographical Information System (GIS), such information supports more efficient deployment of firefighting resources, which is the core of its use in the presented AF3 project.

The presented technologies aim to enhance the capabilities of fire detection and monitoring systems in line with the project purpose, integrating traditional and alternative means of observation and exploiting sensor, surveillance and other relevant technologies beyond the state of the art. For this purpose, a wide array of sensors is deployed, including satellite, airborne as well as ground mobile and stationary systems, with monitoring and detection data being fused, managed and visualized in an integrated environment. In AF3, an additional essential objective is the seamless capability of monitoring and fire detection through all fire development phases as well as during the pre-fire and post-fire phases, and across fire categories (interface fires). In such a system, the UAVs play an important role in early detection of fire ignitions and validation of fire alerts by performing both visual and sensor-based verification of the actual incidents. During firefighting, combined with unmanned ground vehicles, they support close-range surveillance of firefighting operations, search and rescue, as well as on-the-spot analysis of the situation at the fire front. After the incidents, they can be used to survey the area in search of possible re-ignitions, to assess damage to infrastructure and livestock, and to search for possible casualties and survivors.

In the AF3 project both mini and micro UAV systems are used. The former perform high-altitude visual and thermal surveillance, aiding in firefighting operations. The micro-UAVs, on the other hand, have been used for fire detection and verification using a range of on-board sensors and multi-spectral cameras. The sensors include CO/CO2 measurement devices and spectrometers aimed at detecting spectral features, such as those of ionised potassium and atmospheric oxygen, commonly associated with biomass fires [reference].

A manual and autonomous UAV-based aerial surveillance, fire detection and 3D area modelling system has been developed and successfully tested, first during the 1st trials at Scaramanga near Athens in Greece and then during the 2nd trials near Leon in Spain. The first set of trials focussed on spectral sensing and visual imaging in daylight, while the second trials also included observations and 3D modelling in the thermal spectrum. The technologies used in those tests are described below.


An embedded NIR spectrometer deployed on either ground or aerial vehicles can be used for close-range monitoring of temporal changes in atmospheric oxygen absorption and ionised potassium emission over forest fire areas, for detection of possible (re-)ignitions of the biomass. As shown in Fig. 1, there are a number of characteristic features that can be detected in the NIR spectrum of forest fires.

Fig. 1 NIR spectrometer data from tests in Athens (left) compared to expected absorptions (right)

Potassium emission (a double peak in the NIR spectrum at 767.4 nm and 770.7 nm), combined with additional oxygen absorption (both shown on the left graph in Fig. 1), has proven to be the easiest way to detect biomass fires. The former appears more distinctly than the latter and has therefore been adopted in the test system as the indicator of a possible biomass fire.
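The detection logic described above can be sketched in a few lines; this is an illustrative reconstruction, not the AF3 flight code, and the window and threshold values are assumptions:

```python
import numpy as np

# Flag a possible biomass fire when both potassium emission lines
# (767.4 nm and 770.7 nm) rise clearly above the spectrum baseline.
K_LINES_NM = (767.4, 770.7)

def potassium_alert(wavelengths, counts, window_nm=1.0, ratio=1.5):
    """Return True if both K lines exceed `ratio` times the median
    baseline level; window and ratio thresholds are assumptions."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    counts = np.asarray(counts, dtype=float)
    baseline = np.median(counts)
    for line in K_LINES_NM:
        mask = np.abs(wavelengths - line) <= window_nm
        if not mask.any() or counts[mask].max() < ratio * baseline:
            return False
    return True

# Synthetic spectrum over the STS-NIR range: flat baseline plus the doublet
wl = np.linspace(650, 1100, 2048)
spec = np.full_like(wl, 100.0)
for line in K_LINES_NM:
    spec += 400.0 * np.exp(-0.5 * ((wl - line) / 0.4) ** 2)
print(potassium_alert(wl, spec))  # doublet above baseline -> True
```

Requiring both lines of the doublet, rather than a single peak, reduces false alerts from other narrow-band sources.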

When deployed on autonomous UAVs, this can be combined with both visual and thermal imaging, thus providing visual confirmation of a potential incident. The same images can also be used for 3D modelling and volumetric fuel estimation of the biomass in the given area, as well as for post-incident damage analysis, as described in the next section.

In the AF3 project, the STS-NIR spectrometer from Ocean Optics covering the range of 650-1100 nm has been used; it delivers sufficient performance and accuracy for low-concentration absorbance measurements for forest fire detection. Its rugged design, low weight (50 g) and small size (4x4x3 cm) make this sensor a very attractive option for deployment on micro-UAVs that cannot lift heavy payloads. In the test design (Fig. 2) an STS Developers Kit has been used that integrates an STS-NIR spectrometer, a Raspberry Pi microcomputer and customizable software with wireless networking. This allowed the complete system to be deployed without much additional development. By adding a USB Wi-Fi adapter, the Raspberry Pi could be used as a WEB server, allowing measurements both to be provided in real time to remote clients on the ground and to be stored on an embedded SD memory card.

Two versions of the sensor system are shown in Fig. 2. The one on the left is a modification of the original STS-NIR Development Kit from Ocean Optics, with an extra NIR camera and GPS receiver added to allow geo-located sensor measurements with visual confirmation. Since such a design is quite bulky, a custom deployment on a Raspberry Pi Zero Revision 1.3 has been made (right) to lower the size and weight of the whole system, making it small enough for deployment on smaller types of drone, such as the DJI Mavic Pro.

Fig. 2 Spectrometer implementation with Raspberry (a) PI 2-B+ and (b) size/weight reduced on PI-Zero 1.3 mounted on DJI Mavic Pro

To further reduce the weight, the battery can be replaced with a custom DC-DC downconverter (requiring physical modification of the UAV to access the battery contacts), allowing the sensor system to be powered from the UAV battery. Further reduction of size will be achieved by transferring the design to the latest Raspberry Pi Zero Revision W, which incorporates on-board Wi-Fi and Bluetooth networking, and by connecting the GPS receiver to the GPIO pins, removing the need for the extra USB hub board. Finally, replacing USB cables with direct ribbon connections will make the system a fully integrated solution for deployment on any drone type.

The weight of the sensor node, including the spectrometer and a rechargeable 2300 mAh battery, does not exceed 300 grams (1st version), with the latest release not exceeding 100 grams, allowing between 4 and 6 hours of continuous operation. An important part of the whole design is an embedded WEB server built upon the SDK from Ocean Optics. The Development Kit came with demo software, written in PHP and accessible remotely via the on-board WEB server, that included the most important features such as capture, configuration and data transfer. This requires clients to make a direct connection to the drone to access the NIR sensor information. Custom adaptations included additional configuration options, peak-based waveform analysis determining the presence of potassium emission with oxygen absorption to create fire alerts, and geolocation of the measurements independent of the GPS used by the UAV. Screenshots of the main pages of the WEB server are shown in Fig. 3. They include the fire detector screen with a list of peaks detected at various wavelengths at a given location (left), the record of GNSS satellites showing location precision down to a metre (centre) and the original configuration page (right).
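The server side of this design can be sketched as follows using only the Python standard library; the `/measurement` endpoint, JSON fields and port are illustrative assumptions, not the actual Ocean Optics PHP demo:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Latest geo-tagged measurement, shared between capture loop and server
latest = {"lat": None, "lon": None, "k_doublet": False, "spectrum_id": 0}
lock = threading.Lock()

def update_measurement(lat, lon, k_doublet):
    """Called by the capture loop after each spectrometer read."""
    with lock:
        latest.update(lat=lat, lon=lon, k_doublet=k_doublet)
        latest["spectrum_id"] += 1

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/measurement":
            self.send_error(404)
            return
        with lock:
            body = json.dumps(latest).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def make_server(port=8080):
    return HTTPServer(("0.0.0.0", port), Handler)

# On the Pi one would run: make_server().serve_forever()
```

Ground clients joined to the drone's Wi-Fi network can then poll the endpoint for the latest geo-tagged alert state, matching the direct-connection model described above.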

Fig. 3 Embedded WEB server: fire alerts based on detected ionised potassium (L), GNSS daemon (M) and NIR spectrum capture (R)

The original software from Ocean Optics also allows real-time monitoring of the spectrum, e.g. during the flight, while measurements are saved on the memory card for possible processing at the control centre, if required. By allowing custom filtering and a wide range of observation times (integration periods), the sensitivity can be increased, which might be useful e.g. for surveillance from higher altitudes and/or larger distances, at the price of a lower flight speed.
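The sensitivity/speed trade-off can be made concrete with a back-of-envelope calculation: shot-noise-limited SNR grows roughly with the square root of the integration time, while along-track smearing of the measurement footprint limits the flight speed. All numbers below are illustrative, not AF3 measurements:

```python
import math

def snr_gain(t_new_s, t_old_s):
    """Shot-noise-limited SNR scales roughly with sqrt(integration time)."""
    return math.sqrt(t_new_s / t_old_s)

def max_speed_for_smear(footprint_m, integration_s, smear_fraction=0.5):
    """Keep the along-track smear (v * t) below a fraction of the
    ground footprint: v_max = smear_fraction * footprint / t."""
    return smear_fraction * footprint_m / integration_s

print(snr_gain(4.0, 1.0))            # quadrupled integration -> 2.0
print(max_speed_for_smear(10, 1.0))  # 10 m footprint, 1 s -> 5.0 m/s
print(max_speed_for_smear(10, 4.0))  # same footprint, 4 s -> 1.25 m/s
```

That is, quadrupling the integration period roughly doubles the SNR but also quarters the admissible flight speed over the same footprint.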


Images acquired from UAVs, and especially from micro-UAVs, can be used to create 3D maps of an area, allowing e.g. estimation of forestation, in particular the volume of areas under tree canopies corresponding to the amount of potentially flammable material, identification of urban areas, and damage analysis after incidents, which is their main purpose in the AF3 project. Such images need to be taken along specially designed flight paths, so automatic flight routes are preferable, although manual fly-bys are also possible provided full area coverage is achieved.
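A simple lawnmower-style flight grid of this kind can be derived from the area size, flight altitude and desired overlap. The sketch below uses a standard photogrammetric footprint rule of thumb; treating the field of view as a single along-track angle is a simplification, and the 94-degree default is illustrative rather than a camera specification:

```python
import math

def grid_waypoints(width_m, length_m, altitude_m, fov_deg=94.0, overlap=0.8):
    """Lawnmower grid over a width x length area: capture positions spaced
    so that consecutive images keep the requested overlap fraction."""
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    spacing = footprint * (1 - overlap)  # distance between captures/lines
    waypoints, y, row = [], 0.0, 0
    while y <= length_m:
        xs = [x * spacing for x in range(int(width_m / spacing) + 1)]
        if row % 2:                      # reverse alternate corridors
            xs = xs[::-1]
        waypoints += [(x, y, altitude_m) for x in xs]
        y, row = y + spacing, row + 1
    return waypoints

# Roughly the 195 x 40 m strip flown at 30 m in the Athens tests:
print(len(grid_waypoints(195, 40, 30)))  # → 64 waypoints
```

Tools such as Pix4D Capture generate equivalent grids automatically; the point of the sketch is only to show how altitude and overlap jointly determine the number of capture positions.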

Various types of software have been used to produce 3D models of the areas, including Pix4D Mapper, Autodesk ReMake, Agisoft Photoscan and Artec 3D Studio. The first proved to be the most versatile, offering options for the highest accuracy, especially when combined with the Pix4D Capture software running on Android/iOS platforms to perform autonomous fly-overs above the areas of interest. Images captured automatically can be processed either online, by uploading them to the Pix4D server, or off-line on a custom PC workstation, to produce 3D models in formats suitable for visualisation on the C4I (e.g. 3D-PDF, KML etc.) and/or subsequent processing (e.g. OBJ etc.). Since generation of 3D models is very computationally intensive, it cannot be performed in real time, especially when high resolution is required. Typical processing on a quad-core 2.7 GHz Intel i7 processor takes approximately 3-4 hours or more, depending on the number of images and the complexity of the area, reaching even days on slower workstations and/or when pushing the limits of photogrammetric technology, e.g. for centimetre accuracy from altitudes of 30 metres or above.

Hence such models are most applicable for advance surveillance of forest areas and, in crisis situations, for assessment of damage during recovery. During firefighting, their applicability might be reduced by smoke, dust and air turbulence, which can affect the visibility and precise positioning of the UAV platforms over the surveyed areas. In such cases, the 3D models can also be built from images captured by e.g. thermal cameras, which can penetrate the smoke and, to some degree, also the upper canopy layers. Examples of both types of models have been produced in the AF3 project.

Fig. 4 Athens (May 2016): area with fly paths (left) and 3D area model (right) for test area #1

Two trials have already taken place in the project, the first in Greece at the Scaramanga naval base near Athens in March 2016. During those tests, visual imaging was performed in both urban and mountain areas using a DJI Inspire 1 to compare 3D modelling performance with diverse flight paths. The results for the urban area are shown in Fig. 4. In this test, a regular grid was flown with the UAV performing fully autonomous image capture, stopping at each capture location (left), resulting in a highly precise representation of the area of interest (right), down to detailed car models. Images were taken from an altitude of 30 metres using the DJI X3 camera (HD resolution). For comparison purposes a manual flight was also conducted, as shown in Fig. 4 (right).

Fig. 5 Athens (May 2016): 195x40-meter mountain area with fly paths (left) and 3D volume analysis for test area #5

Additional tests in the Athens trials were made in a nearby mountain area to test the accuracy of the 3D models (Fig. 5). Similar fully autonomous image capture from an altitude of 30 metres was used, with the same DJI X3 camera, over a dense three-path grid. Significantly more images were captured (204 compared to 30 in the previous test), which resulted in a much more accurate representation of ground features, down to individual branches clearly visible on the stacks of wood used for target practice. This allowed very precise (5-10% accuracy) volumetric analysis of the amount of wood placed at each target, as shown on the right in Fig. 5.
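Once a digital surface model and a ground reference are exported from the photogrammetry software, volumetric analysis of this kind reduces to integrating height differences over grid cells. A minimal sketch, with function and array names as illustrative assumptions:

```python
import numpy as np

def volume_above_ground(dsm, ground, cell_area_m2):
    """Integrate (surface - ground) heights over the raster cells.
    `dsm` and `ground` are same-shape elevation grids in metres;
    negative differences (holes) are clipped to zero."""
    heights = np.clip(np.asarray(dsm) - np.asarray(ground), 0.0, None)
    return float(heights.sum() * cell_area_m2)

# A 2 m high stack covering four 0.5 m x 0.5 m cells -> 2 cubic metres
dsm = np.full((2, 2), 2.0)
ground = np.zeros((2, 2))
print(volume_above_ground(dsm, ground, 0.25))  # → 2.0
```

The quoted 5-10% accuracy then depends chiefly on the raster resolution and on how well the ground reference is estimated under the measured objects.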

The second trials of the AF3 project took place in Leon in Spain during November 2016. During those tests both visual and thermal images were captured, from which 3D models were produced. In those tests, manual UAV operation was performed (irregular grid and/or single corridor) to validate the possibility of building models from both regular and irregular observations. The same area was scanned multiple times using either a visual (20 MP) or a thermal (640x480 pixel) camera. Using different types of cameras with diverse resolutions also allowed verification of the trade-off between resolution, the number of images and the density of image captures (overlap). In all these tests the drone did NOT stop when capturing images. The results of 3D modelling are shown in Fig. 6.

Fig. 6 Leon (2016): 3D models from 200 visual HD images (left) & 2060 FLIR (640x480) images (middle) with DJI Inspire 1 (right).

The image capture strategy adopted during the trials in Spain clearly demonstrated the negative effects of non-structured image capture on the accuracy of the final 3D model. On the one hand, it confirmed that a regular grid along multiple corridors can offer high resolution and accuracy, whereby accurate representations of individual vehicles can be produced and the fire front and targets well defined (left in Fig. 6), with only 22 images taken.

On the other hand, thermal images were taken very densely (nearly 3000 images over a 200-metre-long area), considering their low resolution. However, no attention was paid to the camera orientation (forward looking) and the drone did not stop for image capture. Hence, although the high density of images allowed an accurate representation of ground details (middle in Fig. 6), sufficient for assessing biomass content to 30% accuracy, the nose-down pitch of the UAV when moving forward (similar to a helicopter) caused a false bending of the ground in the resulting 3D model, since this information is not included with the camera parameters (only the viewpoint w.r.t. the drone body is registered). This requires either post-processing to flatten the ground or a-priori knowledge of the drone pitch during flight. The latter depends on the drone and its speed and hence could easily be compensated for during 3D model processing. This has not been attempted, since the images were not available for processing until shortly before full paper submission. Such compensation shall be investigated during the third trials in Israel, scheduled for the end of April 2017.
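The pitch compensation suggested above could in principle be applied by rotating each logged camera orientation by the estimated body pitch before model generation. A minimal sketch; the axis convention and the idea of a per-drone pitch-vs-speed calibration are assumptions, not the AF3 processing chain:

```python
import numpy as np

def compensate_pitch(camera_rotation, pitch_deg):
    """Pre-multiply the logged camera rotation (camera w.r.t. drone
    body) by the estimated body pitch so that orientations become
    ground-referenced. Pitching about the y axis is an assumed
    convention; pitch_deg would come from a per-drone calibration
    against flight speed."""
    p = np.radians(pitch_deg)
    pitch = np.array([[ np.cos(p), 0.0, np.sin(p)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(p), 0.0, np.cos(p)]])
    return pitch @ camera_rotation

# A 10-degree nose-down pitch applied to a straight-down orientation
R = compensate_pitch(np.eye(3), -10.0)
```

Applied to every image before bundle adjustment, such a correction would remove the systematic tilt that otherwise appears as ground bending in the reconstructed model.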

Considering that 3D model generation using photogrammetry may lead to extensively long processing times, speeding up the processing using a custom-built workstation has been investigated. To achieve a working compromise between computational performance and the overall cost of the system, coprocessing with the CUDA parallel computing platform and programming model from NVIDIA has been adopted. CUDA cores have long been suggested for coprocessing of simple fixed-point operations like those required by 3D modelling. The latest cards from NVIDIA, such as the GeForce GTX 1080 (Ti) and Titan X, include large numbers of CUDA cores, 2560 and 3584 respectively. Hence, a custom workstation has been built that uses a 3.6 GHz 8-core Intel i7 processor with 32 GB of RAM and dual NVIDIA GeForce GTX 1080 cards, i.e. adding a total of 5120 CUDA cores, at a total cost close to 3000 Euros. This approach more than halved the 3D model processing time for the same project size. Encouraged by those results, a second version of the workstation is being considered, with a dual Intel i7 processor server board and quad NVIDIA GTX Titan X graphics cards (connected internally via a 4-slot SLI bridge), i.e. adding 14336 CUDA cores in total. This is expected to increase the current processing speed almost five-fold, in practical terms allowing high-resolution 3D models that currently need 16 hours to be produced in under 3 hours.
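The expected gain from adding CUDA cores can be sanity-checked with an Amdahl-style estimate, since only the GPU-parallel fraction of the pipeline scales with core count; the workload fraction below is an assumed figure, not an AF3 measurement:

```python
def expected_speedup(core_ratio, gpu_fraction):
    """Amdahl-style estimate: only the GPU-parallel fraction of the
    pipeline scales with added CUDA cores; the rest stays serial."""
    return 1.0 / ((1.0 - gpu_fraction) + gpu_fraction / core_ratio)

# 14336 vs 5120 cores is a ~2.8x core ratio; even if 90% of the
# pipeline runs on the GPU, the extra cores alone give under 2.4x,
# so the projected five-fold gain must also rely on the dual-CPU board.
print(round(expected_speedup(14336 / 5120, 0.9), 2))  # → 2.37
```

This suggests the serial (CPU-bound) stages of the photogrammetry pipeline are as important to the projected speed-up as the raw CUDA core count.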


The research leading to these results has been partially funded by the European Union Seventh Framework Program (FP7-SEC) under Grant Agreement N° 607276: AF3 “Advanced Forest Firefighting” and from the Reflective Societies program of the Horizon 2020 Research Framework under Grant Agreement N° 665091: SCAN4RECO. The authors would also like to acknowledge the University of Westminster, London (UK), for co-funding this R&D work through a subcontracting agreement in the context of the AF3 project.

References for further reading
