Abstract
We describe a sensor fusion algorithm based on a set of simple assumptions about the relationship among the sensors. Under these assumptions we estimate the common signal in each sensor, and the optimal fusion is then approximated by a weighted sum of the common component of each sensor output at each pixel. We then examine a variety of techniques for mapping the sensor signals onto perceptual dimensions, so that the human operator can benefit from the enhanced fused image while still being able to identify the source of the information. We examine several color mapping schemes.
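The fusion step the abstract describes, a per-pixel weighted sum of the common component in each sensor, can be sketched roughly as follows. This is a minimal illustration, not the authors' method: the weighting scheme here (each sensor weighted by its correlation with the pixelwise mean across sensors) is an assumed stand-in for the paper's common-signal estimate, and images are flattened pixel lists for brevity.

```python
# Hedged sketch of per-pixel weighted-sum sensor fusion.
# Assumption: each sensor's weight is its (non-negative) Pearson
# correlation with the pixelwise mean image -- a simple proxy for
# "how much common signal this sensor carries", not the paper's
# actual common-component estimator.

def correlation(a, b):
    """Pearson correlation of two equal-length flat pixel lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def fuse(sensors):
    """Fuse sensor images (flat, equal-length pixel lists) by a
    per-pixel weighted sum with correlation-derived weights."""
    n = len(sensors[0])
    # Pixelwise mean across sensors, used as a crude common-signal reference.
    mean = [sum(px) / len(sensors) for px in zip(*sensors)]
    w = [max(correlation(s, mean), 0.0) for s in sensors]
    total = sum(w) or 1.0  # avoid division by zero in degenerate cases
    w = [wi / total for wi in w]
    # Weighted sum of the sensor outputs at each pixel.
    return [sum(wi * s[i] for wi, s in zip(w, sensors)) for i in range(n)]
```

With identical sensor inputs the fused image reproduces the input, since the weights are equal and sum to one; with dissimilar inputs the more "common" sensor dominates.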
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 169-176 |
| Number of pages | 8 |
| Journal | Proceedings of SPIE - The International Society for Optical Engineering |
| Volume | 3088 |
| DOIs | |
| State | Published - 1997 |
| Externally published | Yes |
| Event | Enhanced and Synthetic Vision 1997, Orlando, FL, United States (Apr 21 1997) |
Keywords
- Color fusion
- Color mapping
- Color vision
- Enhanced vision
- Image fusion
- Multisensor fusion
- Sensor fusion
ASJC Scopus subject areas
- Electronic, Optical and Magnetic Materials
- Condensed Matter Physics
- Computer Science Applications
- Applied Mathematics
- Electrical and Electronic Engineering