
Updates from our group
A full list of publications is here.

Outdoor thermal comfort is a crucial determinant of urban space quality. While research has developed various heat indices, such as the Universal Thermal Climate Index (UTCI) and the Physiological Equivalent Temperature (PET), these metrics fail to fully capture perceived thermal comfort. Beyond environmental and physiological factors, recent research suggests that visual elements significantly drive outdoor thermal perception. This study integrates computer vision, explainable machine learning, and perceptual assessments to investigate how visual elements in streetscapes affect thermal perception. To represent diverse visual elements comprehensively, we employed multiple computer vision models (viz. the Segment Anything Model, ResNet-50, and Vision Transformer) and applied the Maximum Clique method to systematically select 50 representative ground-level images, each paired with a corresponding thermal image captured simultaneously. An outdoor, web-based survey among 317 students collected thermal sensation votes (TSV), thermal comfort votes (TCV), and element preference data, yielding 2,854 valid responses. The same survey was replicated in an indoor exhibition setting to provide a comparative reference for the outdoor experiment. A Random Forest classifier achieved 70% and 68% accuracy in predicting thermal sensation and comfort, respectively. Using Shapley Additive Explanations (SHAP) to interpret the model outcomes, we found that the colour magenta emerged as the most influential visual factor for thermal perception, while greenery – despite being participants' most preferred element for cooling – showed a weaker correlation with actual thermal perception. These findings challenge conventional assumptions about visually mediated thermal comfort and offer a novel framework for image-based thermal perception research, with important implications for climate-responsive urban design.
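
For readers curious about the modelling step, the snippet below is a minimal sketch of the Random Forest + SHAP workflow named in the abstract. The feature names (greenery, sky, magenta share, etc.) and the synthetic data are illustrative assumptions, not the study's actual inputs; in the paper, features come from the computer vision models and labels from the survey votes.

```python
# Minimal sketch of a Random Forest + SHAP pipeline for thermal sensation.
# All features and labels below are synthetic placeholders; accuracy on
# random data is meaningless and will not match the paper's 70%.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2854  # same order of magnitude as the valid survey responses

# Hypothetical per-image visual-element features (fractions of the scene).
X = pd.DataFrame({
    "greenery": rng.uniform(0, 0.6, n),
    "sky": rng.uniform(0, 0.5, n),
    "building": rng.uniform(0, 0.7, n),
    "magenta_share": rng.uniform(0, 0.2, n),
    "shade": rng.uniform(0, 0.4, n),
})
# Synthetic 7-point thermal sensation votes (TSV): -3 (cold) .. +3 (hot).
y = rng.integers(-3, 4, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")

# SHAP values per class; older shap versions return a list of arrays,
# newer ones a single (samples, features, classes) array.
explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X_te)
sv = np.stack(sv, axis=-1) if isinstance(sv, list) else np.asarray(sv)

# Mean |SHAP| over samples and classes ranks the visual elements by
# their influence on the predicted thermal sensation.
importance = np.abs(sv).mean(axis=(0, 2))
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")
```

The same pattern applies to the thermal comfort votes (TCV); only the label column changes.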

Urban areas stand at the forefront of the climate crisis, facing escalating environmental pressures, growing social inequalities, and heightened risks to human health and well-being. These challenges are especially pronounced in rapidly expanding cities across the Global South, where informal settlements, resource constraints, and inadequate infrastructure amplify vulnerabilities. Conventional urban planning and management approaches, developed prior to recent advances in data-intensive urban analysis, are increasingly unable to address the complexity, scale, and dynamism of these issues.

Buildings play a crucial role in shaping urban environments, influencing their physical, functional, and aesthetic characteristics. However, urban analytics is frequently limited by datasets that lack essential semantic detail and are fragmented across diverse, incompatible data sources. To address these challenges, we conducted a comprehensive meta-analysis of 6,285 publications (2019–2024). From this review, we identified 11 key visually discernible building characteristics grouped into three branches: satellite house, satellite neighborhood, and street-view. Based on this structured characteristic system, we introduce BuildingMultiView, an innovative framework leveraging fine-tuned Large Language Models (LLMs) to systematically extract semantically detailed building characteristics from integrated satellite and street-view imagery. Using structured image–prompt–label triplets, the model efficiently annotates characteristics at multiple spatial scales, including swimming pools, roof types, building density, wall–window ratio, and property types; together, these provide a comprehensive, multi-perspective building database. Experiments conducted across five US cities with diverse architecture and urban form (San Francisco, San Diego, Salt Lake City, Austin, and New York City) demonstrate significant performance improvements, with an F1 score of 79.77% compared with 45.66% for the untuned base version of ChatGPT. These results reveal diverse urban building patterns and correlations between architectural and environmental characteristics, showcasing the framework's capability to analyze both macro-scale and micro-scale urban building data. By integrating multi-perspective data sources with cutting-edge LLMs, BuildingMultiView enhances building data extraction, offering a scalable tool for urban planners to address sustainability, infrastructure, and human-centered design, enabling smarter, more resilient cities.
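
To make the data format concrete, here is a minimal sketch of how image–prompt–label triplets for fine-tuning might be assembled. The record schema, field names, file paths, and example labels are all assumptions for illustration, not the paper's exact format.

```python
# Illustrative builder for image-prompt-label fine-tuning records.
# Every path, question, and label here is a hypothetical example.
import json

def make_triplet(image_path: str, scale: str, characteristic: str,
                 question: str, label: str) -> dict:
    """Pack one annotation task as an image-prompt-label record."""
    return {
        "image": image_path,
        "prompt": (
            f"You are labelling a {scale} image. "
            f"Characteristic: {characteristic}. {question} "
            "Answer with a single label."
        ),
        "label": label,
    }

triplets = [
    make_triplet("sf_parcel_0412_satellite.png", "satellite house",
                 "swimming pool", "Does the parcel contain a swimming pool?",
                 "yes"),
    make_triplet("sf_block_17_satellite.png", "satellite neighborhood",
                 "building density", "How dense is the built-up area?",
                 "high"),
    make_triplet("sf_street_0893.png", "street-view",
                 "roof type", "What roof type does the main building have?",
                 "gable"),
]

# One JSON record per line (JSONL) -- a common fine-tuning input format.
with open("building_triplets.jsonl", "w") as f:
    for t in triplets:
        f.write(json.dumps(t) + "\n")
```

Keeping each characteristic as its own triplet lets a single fine-tuned model annotate house-level, neighborhood-level, and street-view characteristics with one uniform interface.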

Fine-resolution spatial analytics are essential for urban planning and policy-making, yet traditional small-area estimation often struggles with sparse, hierarchical, or imbalanced data. This paper introduces a Spatially Regularised Bayesian Heterogeneous Graph Neural Network (SR-BHGNN) that integrates multiple census tract levels within a unified framework. The model builds a heterogeneous graph in which nodes represent spatial units at different scales, edges encode adjacency or membership, and Bayesian inference quantifies uncertainty in parameters and predictions. A spatial regularisation term, inspired by Tobler's First Law of Geography, penalises large discrepancies between neighbouring nodes, reducing errors in imbalanced datasets and ensuring coherent local estimates. We evaluate SR-BHGNN through two London case studies (population estimation and PM2.5 prediction), comparing it against random forests, single-level GNNs, and spatial hierarchical Bayesian estimation. SR-BHGNN achieves strong performance gains, with classification accuracies of 0.85 for population estimation and 0.81 for PM2.5 prediction. Its Bayesian design produces posterior distributions that capture uncertainty, enabling policy-relevant insights into vulnerable neighbourhoods and priority intervention zones (e.g. low-emission areas). These results demonstrate that SR-BHGNN advances the state of the art in small-area estimation, offering a flexible, uncertainty-aware framework for diverse urban analytics applications.
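
The core of the spatial regularisation idea can be sketched in a few lines: a smoothness penalty on predictions at adjacent spatial units, added to an ordinary data loss. In the toy sketch below, the graph, tensor sizes, and weight are illustrative assumptions; the full SR-BHGNN additionally models heterogeneous node types and performs Bayesian inference over parameters and predictions.

```python
# Toy sketch of a Tobler-style spatial regularisation term: penalise
# large differences between predictions at neighbouring spatial units.
# Graph structure, sizes, and the lambda weight are assumed toy values.
import torch

def spatial_reg_loss(pred: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    """Mean squared discrepancy between neighbouring nodes' predictions."""
    src, dst = edge_index  # adjacency edges between spatial units
    return ((pred[src] - pred[dst]) ** 2).mean()

# Five spatial units connected in a chain of adjacency edges.
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 4]])
pred = torch.randn(5, 1, requires_grad=True)   # stand-in for GNN output
target = torch.randn(5, 1)                     # stand-in for observed values

lam = 0.1  # regularisation weight (assumed)
data_loss = torch.nn.functional.mse_loss(pred, target)
loss = data_loss + lam * spatial_reg_loss(pred, edge_index)
loss.backward()  # gradients now include the smoothness penalty
print(float(loss))
```

Because the penalty is differentiable, it drops straight into the training loop of any graph neural network and nudges neighbouring estimates toward coherence, which is what dampens errors in sparse or imbalanced areas.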

Geospatial artificial intelligence (GeoAI) is reshaping our understanding of Earth and urban systems by integrating advanced artificial intelligence techniques with diverse geospatial data and methodologies. This backstory highlights recent GeoAI advances and applications as presented in the 11 articles of the iScience special issue, “GeoAI shaping earth and cities: Advances, opportunities, and challenges.” The guest editors share perspectives on GeoAI’s advances in explainability, adaptability, and sustainability, demonstrating that its applications extend well beyond traditional mapping functions. The 11 case studies illustrate four types of explainability, three levels of adaptability, and three thematic areas of sustainability, showing the methodological diversity and practical relevance of GeoAI for Earth and urban systems. Here, interactions among these dimensions are mapped to support the evaluation and design of future GeoAI solutions. We also outline future research directions for GeoAI to address complex challenges across the Earth and urban sciences.