
Updates from our group
A full list of publications is here.

As Earth’s climate changes, it is intensifying disasters and extreme weather events across the planet. Record-breaking heatwaves, drenching rainfall, extreme wildfires, and widespread flooding during hurricanes are all becoming more frequent and more intense. Rapid and efficient response to disaster events is essential for climate resilience and sustainability. A key challenge in disaster response is to identify disaster locations accurately and in a timely manner to support decision-making and resource allocation. In this paper, we propose a Probabilistic Cross-view Geolocalization approach, called ProbGLC, exploring new pathways towards generative location awareness for rapid disaster response. Herein, we combine probabilistic and deterministic geolocalization models into a unified framework to simultaneously enhance model explainability (via uncertainty quantification) and achieve state-of-the-art geolocalization performance. Designed for rapid disaster response, ProbGLC is able to address cross-view geolocalization across multiple disaster events and offers the unique features of a probabilistic distribution and a localizability score. To evaluate ProbGLC, we conduct extensive experiments on two cross-view disaster datasets (i.e., MultiIAN and SAGAINDisaster), consisting of diverse cross-view imagery pairs covering multiple disaster types (e.g., hurricanes, wildfires, floods, and tornadoes). Preliminary results confirm the superior geolocalization accuracy (i.e., 0.86 in Acc@1km and 0.97 in Acc@25km) and model explainability (i.e., via probabilistic distributions and localizability scores) of the proposed ProbGLC approach, highlighting the great potential of leveraging a generative cross-view approach to facilitate location awareness for better and faster disaster response. The data and code are publicly available at https://github.com/bobleegogogo/ProbGLC.
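The abstract does not define the localizability score, so as a minimal sketch of the general idea, one could score how confidently a probabilistic geolocalization model can place an image by the peakedness of its predicted distribution over candidate locations. The entropy-based formulation below is an illustrative assumption, not ProbGLC's actual definition:

```python
import numpy as np

def localizability_score(probs: np.ndarray) -> float:
    """Illustrative localizability score: 1 minus the normalized
    Shannon entropy of a predicted probability distribution over
    candidate locations. A peaked distribution (easy to localize)
    scores near 1; a flat one (ambiguous scene) scores near 0."""
    probs = probs / probs.sum()                       # ensure normalization
    entropy = -np.sum(probs * np.log(probs + 1e-12))  # Shannon entropy
    max_entropy = np.log(len(probs))                  # entropy of uniform dist.
    return float(1.0 - entropy / max_entropy)

# A peaked prediction over four candidate cells is highly localizable...
print(localizability_score(np.array([0.94, 0.02, 0.02, 0.02])))  # high
# ...while a uniform prediction is not.
print(localizability_score(np.ones(4) / 4))                      # ~0
```

Such a score lets a response team triage images: those with near-zero localizability can be flagged for manual review instead of trusting the top-1 prediction.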

Road slopes shape mobility patterns and drive the reliability of urban simulations. Yet in most cities, road-level slope information remains scarce. We introduce Vision2Slope, a framework that leverages panoramic street view imagery to estimate road slopes using computer vision techniques. The workflow consists of three steps: (i) projecting panoramic images into road-aligned views; (ii) semantic-prompted image deskewing to correct geometric distortion induced by camera orientation; and (iii) a two-level slope estimation strategy that extracts point- and segment-level slope and relief characteristics from road-edge geometry, using iterative regression to reduce outliers. Using Google Street View images from San Francisco and New York City, the framework estimates slopes for over 60,000 locations and 17,000 street segments. Point- and segment-level MAEs are 0.81°/0.57° and 0.72°/0.78°, respectively, with segment relief errors of 1.70 and 1.66 m. Conditional bias analysis reveals the influence of street-level environmental features on estimation accuracy. The proposed framework significantly outperforms the widely used 30 m digital elevation models and maintains robustness under simulated changes in camera orientation and imaging conditions. As an open and scalable workflow, Vision2Slope emphasizes the potential of street view imagery for cost-effective, detailed urban road slope mapping, enriching foundational data for vertical-aware urban analytics.

Parks are essential to urban well-being, making park satisfaction crucial for sustainable city development. Traditional survey-based approaches to understanding residents' sentiment towards parks are often costly, time-consuming, and limited in scale. Recent social media–based studies have scaled such research but predominantly focus on text, frequently overlooking visual information and the joint effects of text–image representations. This study presents an automated multimodal framework using crowdsourced reviews from Google Maps to model park satisfaction by integrating textual and visual features. Using Singapore as a case study, we analysed 76,869 textual reviews and 184,322 images associated with them. The results show that multimodal models outperform text-only approaches, with textual sentiment, emotional attributes, and image temporal characteristics identified as the most influential factors. These findings highlight the importance of multimodal analysis for advancing park research and informing planning and policy practices.
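At its simplest, the text–image integration can be sketched as feature concatenation feeding a single regressor. The feature names and the linear model below are hypothetical stand-ins for illustration, not the study's actual pipeline:

```python
import numpy as np

# Toy sketch of multimodal fusion: review-level text features (e.g.,
# sentiment polarity, emotion scores) are concatenated with image
# features (e.g., time-of-day, greenery share) and fed to one model
# predicting the rating. All feature names here are hypothetical.
rng = np.random.default_rng(0)
n = 200
text_feats = rng.normal(size=(n, 3))   # sentiment, joy, anger (hypothetical)
image_feats = rng.normal(size=(n, 2))  # daytime flag, greenery share (hypothetical)
rating = 3 + 1.5 * text_feats[:, 0] + 0.5 * image_feats[:, 1]  # synthetic target

X = np.hstack([text_feats, image_feats])         # multimodal concatenation
X1 = np.hstack([X, np.ones((n, 1))])             # bias column
w, *_ = np.linalg.lstsq(X1, rating, rcond=None)  # fit a linear regressor
print(np.round(w[[0, 4]], 2))                    # recovers both modalities' weights
```

Because the synthetic rating depends on one text feature and one image feature, a text-only model would miss the latter entirely, which is the intuition behind the reported multimodal gain.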

In the era of data-intensive science, the complexity and volume of geospatial data have grown exponentially. Compared to traditional data sources, non-traditional sources are more complex and heterogeneous, necessitating sophisticated methods and a series of decisions to transform raw data inputs into usable and actionable data products. This Special Issue, “Sustainable geospatial analytics and geoinformatics with repeatable, reproducible, and expandable (RRE) framework and design,” brings together a collection of seven pioneering papers that address the critical need for consistency and transparency in geospatial research. These studies explore diverse domains, including explainable machine learning, disaster risk assessment, urban ecological health, infectious disease control, and scientific workflow management. Collectively, they advocate for the adoption of an RRE framework to ensure that results can be verified and reproduced across different environments and expanded with new data or methodologies. By integrating visual programming, service-oriented strategies, as well as Findable, Accessible, Interoperable, and Reusable (FAIR) principles, the featured research lowers technical barriers for non-experts while enhancing the robustness of complex models. This editorial synthesizes the contributions of these papers, highlighting how they foster a sustainable and collaborative geospatial knowledge ecosystem. This collection serves as a roadmap for the next generation of geoinformatics, where transparency and flexibility are foundational to addressing global environmental and social challenges.

Quantifying and assessing urban greenery is consequential for planning and development, reflecting the everlasting importance of green spaces for multiple climate and well-being dimensions of cities. Evaluation can be broadly grouped into objective (e.g., measuring the amount of greenery) and subjective (e.g., polling the perception of people) approaches, and the two may differ: what people see and feel about how green a place is might not match measurements of the actual amount of vegetation. In this work, we advance the state of the art by measuring such differences and explaining them through human, geographic, and spatial dimensions. The experiments rely on contextual information extracted from street view imagery and a comprehensive urban visual perception survey collected from 1000 people across five countries with their extensive demographic and personality information. We analyze the discrepancies between objective measures (e.g., Green View Index (GVI)) and subjective scores (e.g., pairwise ratings), examining whether they can be explained by a variety of human and visual factors such as age group and spatial variation of greenery in the scene. The findings reveal that such discrepancies are comparable around the world and that demographics and personality do not play a significant role in perception. Further, while perceived and measured greenery correlate consistently across geographies (both where people and where imagery are from), where people live plays a significant role in explaining perceptual differences, with these two location features ranking top among the seven features that influence perceived greenery the most. This location influence suggests that cultural, environmental, and experiential factors substantially shape how individuals observe greenery in cities. We also found that the spatial arrangement of greenery in the scene, rather than its proximity to the person, influences perception.
Our study provides a new understanding of the deep relationships between objective and subjective street-level greenery assessments, contributing to a more human-centric design of green urban environments.
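For readers unfamiliar with the objective measure, the Green View Index is commonly computed as the share of vegetation pixels in a street view image after semantic segmentation. A minimal sketch, assuming integer class labels from a segmentation model (the label ids below are illustrative, e.g., Cityscapes-style, not necessarily those used in the study):

```python
import numpy as np

def green_view_index(seg_mask: np.ndarray, veg_labels=(8,)) -> float:
    """Green View Index: the fraction of pixels in a street view
    image labeled as vegetation by semantic segmentation. The label
    id 8 is an illustrative assumption; adapt veg_labels to the
    class map of the segmentation model actually used."""
    is_veg = np.isin(seg_mask, veg_labels)
    return float(is_veg.mean())

# A 4x4 mock segmentation mask with 4 vegetation pixels -> GVI = 0.25
mask = np.zeros((4, 4), dtype=int)
mask[:2, :2] = 8
print(green_view_index(mask))  # 0.25
```

Note that this single number discards the spatial arrangement of greenery in the frame, which is precisely the kind of information the study finds matters for subjective perception.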