Unfortunately, the number of realizations and the procedure used to characterize uncertainties tend to be rather subjective. For example, [46] investigated uncertainty in SAR-derived water stages for a single SAR image and a single flood mapping procedure, and identified two main sources of uncertainty: 1) the parameter value applied to classify a pixel as flooded (i.e., the flooded/non-flooded classification threshold) and 2) the geocoding of the image itself.
They tested four different threshold values and 50 image geocodings to obtain an ensemble of binary flood maps and the corresponding SAR-derived water levels. In a similar study, [34] considered the uncertainty stemming from the available SAR image and the applied classification procedure. They computed ten binary flood maps by combining two SAR images, acquired at nearly the same time but at different resolutions, with five different flood mapping procedures. These case studies show that it is important, as well as far from trivial, to quantify uncertainty in flood mapping correctly and objectively.
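To illustrate the ensemble idea underlying these studies, the sketch below builds binary flood maps from a SAR backscatter scene by varying the classification threshold and a crude geocoding perturbation, and then aggregates the members into a per-pixel flood frequency map. The array names, threshold values, and shift offsets are hypothetical, and the procedures actually applied in [46] and [34] are more elaborate.

```python
import numpy as np

def binary_flood_map(backscatter_db, threshold_db):
    """Classify a pixel as flooded when its backscatter falls below the threshold."""
    return backscatter_db < threshold_db

def shifted(image, dy, dx):
    """Crude stand-in for a geocoding perturbation: shift the grid by (dy, dx) pixels."""
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)

# Hypothetical geocoded SAR backscatter scene in dB.
rng = np.random.default_rng(0)
backscatter_db = rng.normal(-12.0, 3.0, size=(200, 200))

thresholds_db = [-15.0, -16.0, -17.0, -18.0]                    # candidate flooded/non-flooded thresholds
geocoding_shifts = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1)]   # illustrative geocoding perturbations

# Ensemble of binary flood maps: one member per (threshold, geocoding) combination.
ensemble = [
    binary_flood_map(shifted(backscatter_db, dy, dx), t)
    for t in thresholds_db
    for dy, dx in geocoding_shifts
]

# Per-pixel flood frequency across the ensemble, i.e., a simple probabilistic flood map.
flood_frequency = np.mean(np.stack(ensemble).astype(float), axis=0)
print(f"{len(ensemble)} ensemble members, mean flooded fraction: {flood_frequency.mean():.2f}")
```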
More recently, [47] implicitly introduced a semiautomated approach that integrates ancillary information to derive a posteriori probabilistic flood inundation maps, accounting for different scattering responses to the presence of water. Giustarini et al. [48] proposed a nonparametric bootstrap method to address speckle uncertainty. While focusing only on the component of the total uncertainty that derives from speckle, they proposed a methodology to objectively determine the minimum number of realizations needed to account for speckle uncertainty. However, for a reliable assessment of flood mapping uncertainty and subsequent successful data assimilation, a probabilistic map would need to take all uncertainty components into account.
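The sketch below shows, in highly simplified form, how a bootstrap-based convergence criterion of this kind might be implemented: each realization resamples the scene's pixel values with replacement, re-estimates the classification threshold, and reclassifies the scene, while the per-pixel flood probability is accumulated until it stabilizes. The quantile-based threshold calibration, the convergence tolerance, and the array names are illustrative assumptions; the actual procedure of [48] differs in its details.

```python
import numpy as np

def fit_threshold(sample_db):
    """Placeholder calibration step: estimate a flooded/non-flooded threshold
    from a bootstrap sample (here simply a low quantile of the backscatter)."""
    return np.quantile(sample_db, 0.25)

def bootstrap_flood_probability(backscatter_db, max_realizations=1000, tol=1e-3, seed=0):
    """Nonparametric bootstrap over the image pixels: each realization resamples
    the backscatter values with replacement, re-estimates the threshold, and
    reclassifies the scene. Iteration stops once the per-pixel flood probability
    changes by less than `tol` (RMS) between successive realizations, giving an
    objective (if simplified) minimum number of realizations."""
    rng = np.random.default_rng(seed)
    pixels = backscatter_db.ravel()
    prob = np.zeros_like(backscatter_db, dtype=float)
    for n in range(1, max_realizations + 1):
        sample = rng.choice(pixels, size=pixels.size, replace=True)
        flooded = backscatter_db < fit_threshold(sample)
        new_prob = prob + (flooded.astype(float) - prob) / n   # running mean of binary maps
        if n > 1 and np.sqrt(np.mean((new_prob - prob) ** 2)) < tol:
            return new_prob, n
        prob = new_prob
    return prob, max_realizations

# Hypothetical backscatter scene in dB.
scene = np.random.default_rng(1).normal(-12.0, 3.0, size=(100, 100))
probability_map, n_realizations = bootstrap_flood_probability(scene)
print(f"Flood probability map stabilized after {n_realizations} realizations")
```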