Abstract
This paper proposes a fully automated generative network (“SynFAGnet”) for creating realistic-looking synthetic fire images. SynFAGnet serves as a data augmentation technique that creates diverse data for training models, thereby mitigating the difficulties of real fire-data acquisition and data imbalance. SynFAGnet comprises two main parts: an object-scene placement net (OSPNet) and a local–global context-based generative adversarial network (LGC-GAN). The OSPNet identifies suitable positions and scales for fires given the background scene. The LGC-GAN then enhances the realism of the synthetic fire image produced from a given fire object–background scene pair by adding effects such as halos and reflections to the surrounding area of the background scene. A comparative analysis shows that SynFAGnet achieves better results than previous studies on both the Fréchet inception distance (FID) and learned perceptual image patch similarity (LPIPS) evaluation metrics (17.232 and 0.077, respectively). In addition, SynFAGnet is verified as a practically applicable data augmentation technique for training datasets, as it improves detection and instance segmentation performance.
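As background on the FID metric cited above: FID models the Inception-feature distributions of real and generated images as Gaussians and computes the Fréchet distance between them, d² = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}). The sketch below implements this standard formula with random stand-in features; the feature dimensionality and sample counts are illustrative placeholders, not values from the paper.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2))."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

def stats(feats):
    """Mean vector and covariance matrix of a (n_samples, n_dims) feature array."""
    return feats.mean(axis=0), np.cov(feats, rowvar=False)

# Stand-in "Inception features" for real vs. synthetic images (random here;
# in practice these come from a pretrained Inception network).
rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 8))        # 500 images, 8-dim features
fake_feats = rng.normal(size=(500, 8)) + 0.5  # shifted synthetic distribution

fid = frechet_distance(*stats(real_feats), *stats(fake_feats))
print(f"FID: {float(fid):.3f}")
```

Lower is better: identical distributions give a distance of zero, and the shift in the synthetic features above yields a strictly positive score.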
| Original language | English |
| --- | --- |
| Pages (from-to) | 1643-1665 |
| Number of pages | 23 |
| Journal | Fire Technology |
| Volume | 60 |
| Issue number | 3 |
| DOIs | |
| State | Published - May 2024 |
Bibliographical note
Publisher Copyright: © The Author(s) 2024.
Keywords
- Data augmentation
- Generative adversarial network
- Image synthesis
- Inpainting
- Object placement