Left: Using a reference facade image (bottom row, left column)
and relative camera position information (top row), our method generates novel facade views from varied viewpoints while
preserving the reference image's structure and style (center columns). Our approach also faithfully reconstructs
the reference facade (right column). Right: Zoomed facade regions highlight our method's ability to modify
critical facade elements, such as windows, across diverse viewpoints (red and orange regions), while
accurately reconstructing (green region) the reference facade (blue region).
Abstract
We introduce FacadeNet, a deep learning approach for synthesizing building facade images from diverse viewpoints.
Our method employs a conditional GAN that takes a single view of a facade along with the desired viewpoint
information and generates an image of the facade from that viewpoint. To precisely modify view-dependent
elements like windows and doors while preserving the structure of view-independent components such as walls,
we introduce a selective editing module. This module leverages image embeddings extracted from a pretrained
vision transformer. Our experiments demonstrate state-of-the-art performance on building facade generation,
surpassing alternative methods.
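To make the idea concrete, below is a minimal PyTorch sketch of a viewpoint-conditioned generator with a selective editing module of the kind the abstract describes. This is an illustrative sketch under stated assumptions, not FacadeNet's actual implementation: the class names (FacadeGenerator, SelectiveEditingModule), the mask-prediction head, and the dimensions (view_dim, vit_dim) are hypothetical, and the ViT patch embeddings are assumed to come from a frozen pretrained model, upsampled to the feature resolution.

```python
import torch
import torch.nn as nn


class SelectiveEditingModule(nn.Module):
    """Hypothetical sketch: blends generated features with reference features
    using a per-pixel soft mask predicted from pretrained ViT patch embeddings,
    so view-dependent regions (e.g. windows, doors) are resynthesized while
    view-independent regions (e.g. walls) are kept from the reference."""

    def __init__(self, vit_dim=768):
        super().__init__()
        # Assumption: higher mask values mark view-dependent pixels to edit.
        self.mask_head = nn.Sequential(
            nn.Conv2d(vit_dim, 64, kernel_size=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, gen_feat, ref_feat, vit_emb):
        # vit_emb: (B, vit_dim, H, W) patch embeddings upsampled to feature size.
        mask = self.mask_head(vit_emb)
        return mask * gen_feat + (1 - mask) * ref_feat


class FacadeGenerator(nn.Module):
    """Toy conditional generator: encodes the reference view, injects the
    target viewpoint as extra channels, and decodes the novel view."""

    def __init__(self, view_dim=3, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + view_dim, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.selective_edit = SelectiveEditingModule()
        self.decoder = nn.Conv2d(feat_dim, 3, 3, padding=1)

    def forward(self, ref_img, view, vit_emb):
        # Broadcast the relative-viewpoint vector over the image plane.
        b, _, h, w = ref_img.shape
        view_map = view.view(b, -1, 1, 1).expand(b, view.shape[1], h, w)
        feat = self.encoder(torch.cat([ref_img, view_map], dim=1))
        # Assumption: a zero offset encodes the identity (reference) viewpoint.
        ref_feat = self.encoder(
            torch.cat([ref_img, torch.zeros_like(view_map)], dim=1)
        )
        feat = self.selective_edit(feat, ref_feat, vit_emb)
        return torch.tanh(self.decoder(feat))


# Usage (shapes only): in practice vit_emb would come from a frozen
# pretrained vision transformer applied to the reference image.
g = FacadeGenerator()
out = g(torch.randn(1, 3, 64, 64), torch.randn(1, 3), torch.randn(1, 768, 64, 64))
```

The soft mask makes the view-dependent/view-independent split differentiable, which is one plausible way to realize the selective editing the abstract describes; the paper itself should be consulted for the actual architecture and losses.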
Qualitative Results
View Interpolation
Problematic View Improvement
Evaluation
Paper
FacadeNet: Conditional Facade Synthesis via Selective Editing
Yiangos Georgiou, Marios Loizou, Tom Kelly, Melinos Averkiou