Urban Analytics Lab

A research group at the National University of Singapore

About us


We introduce innovative methods, datasets, and software to derive new insights into cities and to advance data-driven urban planning, digital twins, and geospatial technologies for establishing and managing the smart cities of tomorrow. Converging multidisciplinary approaches inspired by recent advances in computer science, geomatics, and urban data science, and influenced by crowdsourcing and open science, we develop cutting-edge techniques for urban sensing and analytics at the city scale. Read more here.

Established and directed by Filip Biljecki, we are proudly based at the Department of Architecture at the College of Design and Engineering of the National University of Singapore, a leading global university in the heart of Southeast Asia. We are also affiliated with the Department of Real Estate at the NUS Business School.

News

Updates from our group

People

We are an ensemble of scholars from diverse disciplines and countries, driving forward our shared research goal of making cities smarter and more data-driven. Since 2019, we have been fortunate to collaborate with many talented alumni, whose invaluable contributions have shaped and enriched our research group and set the scene for future developments. The full list of our members is available here.

Filip Biljecki

Assistant Professor

Matias Quintana

Research Fellow

Winston Yap

PhD Researcher

Edgardo G. Macatulad

PhD Researcher

Koichi Ito

PhD Researcher

Zicheng Fan

PhD Researcher

Yixin Wu

PhD Researcher

Xiucheng Liang

PhD Researcher

Sijie Yang

PhD Researcher

Yihan Zhu

PhD Researcher

Youlong Gu

Research Engineer

Kun Zhou

Research Assistant

Jiatong Li

Visiting Scholar

Weipeng Deng

Visiting Scholar

Jussi Torkko

Visiting Scholar

Recent publications

The full list of our publications is available here.

Drivers of day-night intra-surface urban heat island variations under local extreme heat: A case study of Singapore

Urban areas face significant challenges from extreme heat and urban heat islands (UHIs), which often interact and intensify each other at multiple spatial scales. However, most existing studies examine extreme heat and its interaction with UHIs at the city scale, overlooking the spatial heterogeneity of temperature responses within local areas. Extreme heat does not manifest uniformly across the entire city, and the UHI is a typically localized phenomenon influenced by changes in local climate and urban factors. To address this gap, this study defines local extreme heat (LEH) at a 1 km grid scale and examines the surface urban heat island (SUHI) response to LEH in Singapore, a tropical city experiencing more frequent extreme heat events. Using multi-year temperature datasets, we calculated the difference in SUHI intensity (SUHII) between LEH and non-LEH conditions, referred to as ΔSUHII. Our findings revealed that ΔSUHII responses to LEH differed between daytime and nighttime and across local areas. Daytime ΔSUHII peaked at 3.2 °C in the northeast, while nighttime ΔSUHII reached 0.6 °C in other regions. To identify the dominant drivers of ΔSUHII responses to LEH, we employed a spatial Random Forest (spatialRF) model, which achieved R-squared values exceeding 63% for predicting daytime ΔSUHII and 45% for nighttime ΔSUHII. LEH, land use, and vegetation contributed most to daytime ΔSUHII, while socioeconomic factors mostly influenced nighttime ΔSUHII. Furthermore, we applied SHAP to interpret the spatialRF model; hotspots of both daytime and nighttime ΔSUHII were driven by socioeconomic factors. Finally, nonlinear associations showed that the cooling effect of vegetation reached saturation, as its SHAP values remained positive, while water bodies, indicated by a U-shaped SHAP pattern followed by a decline, were more effective in mitigating ΔSUHII increases under LEH conditions.
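To illustrate the attribution workflow described in the abstract, the sketch below uses scikit-learn's RandomForestRegressor and the shap package as stand-ins for the spatialRF model; the grid-cell features and the synthetic target are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch only: scikit-learn and shap stand in for the
# spatialRF workflow; feature names and data are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Each row represents a 1 km grid cell; the target is the difference in
# SUHI intensity between LEH and non-LEH conditions (delta-SUHII).
X = pd.DataFrame({
    "leh_frequency": rng.random(500),
    "vegetation_fraction": rng.random(500),
    "land_use_mix": rng.random(500),
    "population_density": rng.random(500),
})
y = 3.0 * X["leh_frequency"] - 2.0 * X["vegetation_fraction"] + rng.normal(0, 0.1, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP attributes each cell's predicted delta-SUHII to its drivers,
# exposing nonlinear effects such as vegetation-cooling saturation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
```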

A methodological review of the assessment of urban greenery exposure

Greenery plays a vital role in urban environments, providing numerous benefits through diverse pathways. Various metrics and methodologies have been proposed to assess the multiple dimensions of greenery exposure. For a comprehensive and precise assessment of greenery exposure across different research purposes, it is crucial to identify the most suitable methods and data sources. However, existing reviews primarily address the health outcomes of urban greenery rather than the methods of assessing greenery exposure. To address this gap, we conducted a review of 312 research articles, focusing on methodologies and technologies for measuring greenery exposure in urban settings. This review groups exposure measurement techniques into three categories, proximity-based, mobility-based, and visibility-based, and evaluates their strengths, limitations, and synergies. Proximity-based methods generally assess the overall greenery level in residential areas or other locations, but fall short in capturing actual interactions between humans and greenery. Mobility-based methods track real-time human locations and assess greenery exposure along travel trajectories, but neglect the specific nature of human-greenery interactions. In contrast, emerging visibility-based methods offer opportunities to measure potential visual interactions between individuals and greenery. We found that emerging metrics tend to integrate 3D data, qualitative aspects, and diverse data sources. We advocate for an integrated approach that encompasses both human mobility and potential interactions with greenery across various areas, and we argue that data granularity must be balanced against cost, scalability, and ethical constraints. Our comprehensive review offers a framework and categorization to guide studies in designing exposure measurements aligned with their research objectives.
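To make the proximity-based category concrete, here is a minimal geopandas sketch that computes the share of greenery within a fixed buffer around each residence; the file names, the 500 m buffer radius, and the projected CRS (EPSG:3414, Singapore SVY21) are illustrative assumptions, not prescriptions from the review.

```python
# Minimal proximity-based exposure sketch; inputs and parameters are
# illustrative assumptions.
import geopandas as gpd

homes = gpd.read_file("residences.gpkg").to_crs(epsg=3414)  # metric CRS
green = gpd.read_file("greenery.gpkg").to_crs(epsg=3414)

homes["home_id"] = homes.index
buffers = homes[["home_id", "geometry"]].copy()
buffers["geometry"] = buffers.geometry.buffer(500)  # 500 m neighbourhood

# Intersect the buffers with greenery polygons and sum the green area
# falling within each residence's buffer.
overlap = gpd.overlay(buffers, green, how="intersection")
overlap["green_area"] = overlap.geometry.area

green_area = overlap.groupby("home_id")["green_area"].sum()
buffer_area = buffers.set_index("home_id").geometry.area
homes["green_share"] = homes["home_id"].map(green_area / buffer_area).fillna(0)
```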

VoxCity: A seamless framework for open geospatial data integration, grid-based semantic 3D city model generation, and urban environment simulation

Three-dimensional urban environment simulation is a powerful tool for informed urban planning. However, the intensive manual effort required to prepare input 3D city models has hindered its widespread adoption. To address this challenge, we present VoxCity, an open-source Python package that provides a one-stop solution for grid-based 3D city model generation and urban environment simulation for cities worldwide. VoxCity's 'generator' subpackage automatically downloads building heights, tree canopy heights, land cover, and terrain elevation within a specified target area, and voxelizes buildings, trees, land cover, and terrain to generate an integrated voxel city model. The 'simulator' subpackage enables users to conduct environmental simulations, including solar radiation and view index analyses. Users can export the generated models in several file formats compatible with external software, such as ENVI-met (INX), Blender, and Rhino (OBJ). We generated 3D city models for eight global cities and demonstrated the calculation of solar irradiance, sky view index, and green view index. We also showcased microclimate simulation and 3D rendering visualization through ENVI-met and Rhino, respectively, via the file export function. Additionally, we reviewed openly available geospatial data to create guidelines that help users choose appropriate data sources depending on their target areas and purposes. VoxCity can significantly reduce the effort and time required for 3D city model preparation and promote the utilization of urban environment simulations. This contributes to more informed urban and architectural design that considers environmental impacts and, in turn, fosters sustainable and livable cities. VoxCity is released openly at https://github.com/kunifujiwara/VoxCity.
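Since VoxCity is a Python package, a brief usage sketch may help convey the intended workflow. Note that the module and function names below are assumptions inferred from the 'generator' and 'simulator' subpackage structure described in the abstract, not the actual API; consult the GitHub repository for the real entry points.

```python
# Hypothetical usage sketch: the entry points below are assumed from the
# 'generator'/'simulator' subpackage structure described above and may
# not match the actual VoxCity API; see the repository for details.
from voxcity import generator, simulator  # assumed module layout

# 1. Fetch open building, tree canopy, land cover, and terrain data for
#    a bounding box and voxelize it into an integrated city model.
model = generator.generate_city_model(
    bbox=(103.77, 1.29, 103.79, 1.31),  # example extent in Singapore
    voxel_size=5,                       # metres per voxel
)

# 2. Run an environmental simulation, e.g. a green view index analysis.
gvi = simulator.compute_view_index(model, target="green")

# 3. Export the model for external tools such as ENVI-met or Rhino.
generator.export_model(model, "model.obj", format="OBJ")
```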

OpenFACADES: An open framework for architectural caption and attribute data enrichment via street view imagery

Building properties, such as height, usage, and material, play a crucial role in spatial data infrastructures, supporting various urban applications. Despite their importance, comprehensive building attribute data remain scarce in many urban areas. Recent advances have enabled the extraction of objective building attributes using remote sensing and street-level imagery. However, establishing a pipeline that integrates diverse open datasets, acquires holistic building imagery, and infers comprehensive building attributes at scale remains a significant challenge. This study is among the first to bridge these gaps, introducing OpenFACADES, an open framework that leverages multimodal crowdsourced data to enrich building profiles with both objective attributes and semantic descriptors through multimodal large language models. First, we integrate street-level image metadata from Mapillary with OpenStreetMap geometries via isovist analysis, identifying images that provide suitable vantage points for observing target buildings. Second, we automate the detection of building facades in panoramic imagery and tailor a reprojection approach to convert objects into holistic perspective views that approximate real-world observation. Third, we introduce an innovative approach that harnesses and investigates the capabilities of open-source large vision-language models (VLMs) for multi-attribute prediction and open-vocabulary captioning in building-level analytics, leveraging a globally sourced dataset of 31,180 labeled images from seven cities. Evaluation shows that the fine-tuned VLM excels in multi-attribute inference, outperforming single-attribute computer vision models and zero-shot ChatGPT-4o. Further experiments confirm its superior generalization and robustness across culturally distinct regions and varying image conditions. Finally, the model is applied for large-scale building annotation, generating a dataset of 1.2 million images for half a million buildings. This open-source framework enhances the scope, adaptability, and granularity of building-level assessments, enabling more fine-grained and interpretable insights into the built environment. Our dataset and code are available openly at: https://github.com/seshing/OpenFACADES.
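The reprojection in the second step is, in essence, a standard equirectangular-to-perspective transform. The following self-contained NumPy sketch illustrates that operation under our own conventions (z forward, y up, yaw and pitch in degrees); it shows the general technique, not the paper's exact implementation.

```python
# Illustrative equirectangular-to-perspective reprojection; conventions
# are our own and may differ from the OpenFACADES implementation.
import numpy as np

def pano_to_perspective(pano, yaw_deg, pitch_deg, fov_deg=90, out_size=(512, 512)):
    """Sample a perspective view from an equirectangular panorama.

    pano: (H_p, W_p, 3) array; yaw/pitch select the viewing direction;
    fov_deg is the horizontal field of view of the output image.
    """
    h_p, w_p = pano.shape[:2]
    w, h = out_size
    f = (w / 2) / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels

    # Ray through each output pixel in the camera frame (z forward, y up).
    xs = (np.arange(w) - w / 2) / f
    ys = (h / 2 - np.arange(h)) / f
    x, y = np.meshgrid(xs, ys)
    z = np.ones_like(x)
    dirs = np.stack([x, y, z], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate rays by pitch (about x-axis), then yaw (about y-axis).
    pitch, yaw = np.radians(pitch_deg), np.radians(yaw_deg)
    rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    dirs = dirs @ (ry @ rx).T

    # Convert to spherical coordinates, then to panorama pixel indices.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])  # [-pi, pi]
    lat = np.arcsin(dirs[..., 1])                 # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * w_p).astype(int) % w_p
    v = np.clip(((0.5 - lat / np.pi) * h_p).astype(int), 0, h_p - 1)
    return pano[v, u]  # nearest-neighbour sampling

# Example: extract a view 120 degrees clockwise from the pano's centre,
# tilted 10 degrees upward, from a synthetic panorama.
pano = np.random.randint(0, 255, (1024, 2048, 3), dtype=np.uint8)
view = pano_to_perspective(pano, yaw_deg=120, pitch_deg=10)
```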

Contact