NEWS

_______

2024.02.27 Our paper was accepted to CVPR 2024


Title: Causal Mode Multiplexer: A Novel Framework for Unbiased Multispectral Pedestrian Detection

Authors: Taeheon Kim (KAIST), Sebin Shin (KAIST), Youngjoon Yu (KAIST), Hak Gu Kim, and Yong Man Ro (KAIST)

Abstract: RGBT multispectral pedestrian detection has emerged as a promising solution for safety-critical applications that require day/night operations. However, the modality bias problem remains unsolved, as multispectral pedestrian detectors learn the statistical bias in datasets. Specifically, datasets in multispectral pedestrian detection mainly distribute between ROTO (day) and RXTO (night) data; the majority of the pedestrian labels statistically co-occur with their thermal features. As a result, multispectral pedestrian detectors show poor generalization ability on examples beyond this statistical correlation, such as ROTX data. To address this problem, we propose a novel Causal Mode Multiplexer (CMM) framework that effectively learns the causalities between multispectral inputs and predictions. Moreover, we construct a new dataset (ROTX-MP) to evaluate modality bias in multispectral pedestrian detection. ROTX-MP mainly includes ROTX examples not present in previous datasets. Extensive experiments demonstrate that our proposed CMM framework generalizes well on existing datasets (KAIST, CVC-14, FLIR) and the new ROTX-MP. We will release our new dataset to the public for future research.



2024.02.27 Two students joined our lab.

San Ah Jeong and Jung Jae Yu joined our lab as Master's students.

San Ah Jeong received a B.S. degree from Korea National University of Transportation (KNUT) in Feb. 2024.

Jung Jae Yu received a B.S. degree from Sun Moon Univ. in Feb. 2024.

Welcome to IRIS@CAU!


2024.01.15 Our paper was accepted to IEEE Access


Title: Photometric Stereo Super Resolution via Complex Surface Structure Estimation

Authors: Han-nyoung Lee and Hak Gu Kim

Abstract: Photometric stereo, which derives per-pixel surface normals from shading cues, faces challenges in capturing high-resolution (HR) images in linear response systems. We address the representation of HR surface normals from low-resolution (LR) photometric stereo images. To represent fine details of the surface normal in the HR domain, we propose a novel plug-in high-frequency representation module named the Complex Surface Structure (CSS) estimator. When combined with a conventional photometric stereo model, CSS is capable of representing intricate surface structures in 2D Fourier space. We show that photometric stereo super-resolution (SR) with our CSS estimator provides high-fidelity surface normal representations in higher resolution from the LR inputs. Experiments demonstrate that our results are quantitatively and qualitatively better than those of the existing deep learning-based SR work.


2024.01.13 Our paper was accepted to IEEE SPL


Title: Super-Resolution Neural Radiance Field via Learning High Frequency Details for High-Fidelity Novel View Synthesis

Authors: Han-nyoung Lee and Hak Gu Kim

Abstract: While neural rendering approaches facilitate photorealistic rendering in novel view synthesis tasks, the challenge of high-resolution rendering persists due to the substantial costs associated with acquiring and training data. Recently, several studies have been proposed that render high-resolution scenes by either super-sampling points or using reference images, aiming to restore details in low-resolution (LR) images. However, supersampling is computationally expensive, and methods with reference images require high-resolution (HR) images for inference. In this paper, we propose a novel super-resolution (SR) neural radiance field (NeRF) framework for high-fidelity novel view synthesis. To address the representation of high-fidelity HR images from the captured LR images, we learn a mapping function that maps LR rendering images to the Fourier space to restore insufficient high frequency details and render HR images at higher resolution. Experiments demonstrate that our results are quantitatively and qualitatively better than those of the existing SR methods in novel view synthesis. By visualizing the estimated dominant frequency components, we provide visual interpretations of the performance improvement.
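For readers unfamiliar with the Fourier-space view used in the two papers above, here is a minimal NumPy sketch (illustrative only, not the papers' models; the function name and mask radius are our own choices) showing how an image decomposes into low- and high-frequency bands in 2D Fourier space. The learned mapping in the paper restores the high-frequency band that is missing from low-resolution inputs.

```python
import numpy as np

def split_frequencies(img, radius=8):
    """Split a grayscale image into low- and high-frequency parts
    using a centered circular mask in 2D Fourier space."""
    f = np.fft.fftshift(np.fft.fft2(img))        # spectrum, DC at center
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real    # coarse structure
    high = np.fft.ifft2(np.fft.ifftshift(f * ~mask)).real  # fine details
    return low, high

# Because the FFT is linear, the two bands sum back to the original image.
img = np.random.rand(32, 32)
low, high = split_frequencies(img)
assert np.allclose(low + high, img)
```

Low-resolution capture effectively discards most of the `high` band; learning to predict it from `low` is the general idea behind restoring fine details in the Fourier domain.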


2023.01.01 Two students joined our lab.

Hyung Kyu Kim and Kyo-Seok Lee joined our lab as Master's students.

Hyung Kyu Kim is expected to graduate from the School of Computer Science & Engineering (CSE), Konkuk Univ. in Feb. 2023.

Kyo-Seok Lee is expected to graduate from the Department of Medical IT, Eulji Univ. in Feb. 2023.

Welcome to IRIS@CAU!

2022.07.01 Two students joined our lab.

Ho Jun Kim and Jiwoo Hwang joined our lab as a Master's student and a research intern, respectively.

Ho Jun Kim received a B.S. degree from Dankook Univ. in Feb. 2022. 

Jiwoo Hwang is double majoring in the School of Computer Arts and Computer Science and Engineering (CSE), Chung-Ang Univ.

Welcome to IRIS@CAU!

2022.01.22 Our paper was accepted to IEEE ICASSP 2022

Title: Natural-Looking Adversarial Examples from Freehand Sketches

Authors: Hak Gu Kim, Davide Nanni (EPFL), and Sabine Süsstrunk (EPFL)

Abstract: Deep neural networks (DNNs) have achieved great success in image classification and recognition compared to previous methods. However, recent works have reported that DNNs are very vulnerable to adversarial examples that are intentionally generated to mislead the predictions of the DNNs. Here, we present a novel freehand sketch-based natural-looking adversarial example generator that we call SketchAdv. To generate a natural-looking adversarial example from a sketch, we force the encoded edge information (i.e., the visual attributes) to be close to the latent random vector fed to the edge generator and adversarial example generator. This helps preserve the spatial consistency of the adversarial example generated from the random vector with the edge information. In addition, through the sketch-edge encoder with a novel sketch-edge matching loss, we reduce the gap between edges and sketches. We evaluate the proposed method on several dominant classes of SketchyCOCO, the benchmark dataset for sketch-to-image translation. Our experiments show that our SketchAdv produces visually plausible adversarial examples while remaining competitive with other adversarial attack methods.

2021.12.01 Two students joined our lab.

Han-nyoung Lee and Seon Ho Park joined our lab as a Master's student and a research intern, respectively.

Han-nyoung Lee is expected to graduate from the School of Integrative Engineering (Digital Imaging), Chung-Ang Univ. in Feb. 2022. 

Seon Ho Park is double majoring in the Department of Brain & Cognitive Sciences and Statistics, Ewha Womans Univ.

Welcome to IRIS@CAU again!

2021.10.01 CAU IRIS Lab is now open.

Welcome to the Immersive Reality and Intelligent Systems Lab (IRIS Lab) at Chung-Ang Univ. (CAU)!

The main goal of IRIS Lab is to develop state-of-the-art machine learning/deep learning-based intelligent systems that create the future of immersive reality (e.g., AR/VR/Metaverse), i.e., the convergence of AI and reality.

For more information, please visit our research & publications pages.

2021.09.01 I started working at CAU as an Assistant Professor.