Hyper-resolution global hydrological modelling: the next step

Background
 
On 15-17 March 2010 a workshop was held at Princeton University entitled "Meeting a Grand Challenge to Hydrology: The Global Monitoring of Earth's Terrestrial Water". The goal of this workshop was to assess the need for developing hyper-resolution (0.1–1 km) global hydrology and land surface models and to take stock of the obstacles that must be overcome to make such models a reality. The workshop resulted in the following publication: Wood, E.F., et al. (2011), Hyper-resolution global land surface modeling: Meeting a grand challenge for monitoring Earth's terrestrial water, Water Resources Research 47, W05301. Some of you co-authored the paper; others took part in a fruitful discussion of its contents and attended a meeting on the subject organized by the Helmholtz Centre last July.
 
Since the Princeton workshop and the paper, several groups have been working on building high-resolution "Hydrological Models of Everywhere". WaterGAP (Kassel) now runs globally at 5 arc-minute resolution, as does PCR-GLOBWB (Utrecht). The Princeton group is developing a modular framework that couples state-of-the-art land surface models to Dynamic TOPMODEL for 30 arc-second continental simulations. In parallel to these global modelling and land surface efforts, there is a community of modellers who start from formulations that are as physically based as possible (three-dimensional variably saturated subsurface flow coupled to the shallow water equations for surface runoff) and combine these with high-performance computing to scale up from catchments to basins to continents. With the global hydrological models increasing their resolution and the physically-based catchment models extending their domains, the two approaches are bound to meet in the near future.
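For concreteness, the physically-based formulation mentioned above usually means solving something like the three-dimensional Richards equation for variably saturated subsurface flow, coupled at the land surface to a depth-averaged description of overland flow. A schematic form (notation chosen here for illustration, not taken from any particular model) is

$$\frac{\partial \theta(\psi)}{\partial t} = \nabla \cdot \big[ K(\psi)\,\nabla(\psi + z) \big] + q_s, \qquad \frac{\partial h}{\partial t} + \nabla \cdot (h\,\mathbf{v}) = q_e,$$

where $\psi$ is the pressure head, $\theta$ the volumetric water content, $K$ the unsaturated hydraulic conductivity, $z$ elevation, $h$ the ponded water depth, $\mathbf{v}$ the depth-averaged flow velocity, and $q_s$ and $q_e$ source and surface-subsurface exchange terms. Resolving equations of this kind on fine grids over large domains is what drives the computational demand discussed below.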
 
Undoubtedly, all these efforts run into similar problems. Which processes should be modelled explicitly and which parameterized? How do we cope with computing costs that grow steeply with resolution, not only because finer grids must be handled, but also because many processes that were previously parameterized must now be described in a spatially explicit manner? How do we obtain the information needed to meet the enormous parameterization requirements of these models? And how, if at all, do we calibrate such models, validate their predictions and perform uncertainty analyses on them?
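To give a feel for how the costs escalate, here is a minimal back-of-the-envelope sketch in Python (illustrative only; the function name and chosen resolutions are ours) of cell counts on a global latitude-longitude grid:

```python
# Back-of-the-envelope cell counts for a global latitude-longitude grid.
# Illustrative only: real models mask out oceans and weight cells by area.

def global_cells(res_arcsec: float) -> int:
    """Number of cells in a global lat-lon grid at a given resolution (arc-seconds)."""
    ncols = round(360 * 3600 / res_arcsec)  # longitude spans 360 degrees
    nrows = round(180 * 3600 / res_arcsec)  # latitude spans 180 degrees
    return ncols * nrows

for label, res in [("0.5 degree", 1800.0),
                   ("5 arc-minutes", 300.0),
                   ("30 arc-seconds (~1 km)", 30.0),
                   ("3 arc-seconds (~100 m)", 3.0)]:
    print(f"{label:>24}: {global_cells(res):>15,} cells")
```

Each tenfold refinement in grid spacing yields roughly a hundredfold more cells, and stability constraints typically force shorter time steps as well, so the total cost grows even faster than the cell count.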
Since many of these groups are currently trying to solve these problems independently, we organized a follow-up workshop in Utrecht to exchange experiences and learn from one another.
 
The main outcome of the workshop was the foundation of the HYperHydro network, with three working groups:
  • WG1: setting up a testbed for comparing different large-scale models at different resolutions.
  • WG2: addressing computational challenges, including parallel computing and model component coupling.
  • WG3: delivering the information needed to achieve hyper-resolution (< 1 km) globally: parameter sets, model concepts and forcing.

 
