Sara Rojas

I'm a PhD Candidate at KAUST in Saudi Arabia, under the supervision of Professor Bernard Ghanem.

My experience spans 3D computer vision: neural rendering, 3D reconstruction, and 3D-based recognition tasks.

I recently completed a research internship at Naver Labs Europe, where I worked on extending MASt3R to better understand humans in the wild. I was fortunate to be advised by Gregory Rogez, Matthieu Armando, and Vincent Leroy.

Prior to that, I interned at Adobe Research, where I worked under the guidance of Kalyan Sunkavalli. I also collaborated with Reality Labs at Meta in Zurich, mentored by Albert Pumarola and Ali Thabet. Earlier, I conducted research at the University of Southern California with Autumn Kulaga.

I expect to graduate in December 2025 and am currently on the job market. If you have any opportunities, I would greatly appreciate it if you could drop me an email. Thank you!

Email  /  CV  /  Scholar  /  X  /  GitHub

Last updated: Sep 2025


Research

I’m interested in 3D computer vision, deep learning, generative AI, and image processing. Most of my research focuses on NeRF and its applications, including scene editing and efficiency. Lately, I’ve been working on 3D reconstruction.

HAMSt3R: Human-Aware Multi-view Stereo 3D Reconstruction
Sara Rojas, Matthieu Armando, Bernard Ghanem, Philippe Weinzaepfel, Vincent Leroy, Gregory Rogez
ICCV, 2025
Project Page / arXiv

HAMSt3R is a feed-forward method for joint human and scene 3D reconstruction from sparse images, using a strong encoder and specialized heads. It handles human-centric scenarios effectively while preserving strong general 3D performance.

UnMix-NeRF: Spectral Unmixing Meets Neural Radiance Fields
Fabian Perez, Sara Rojas, Carlos Hinojosa, Hoover Rueda-Chacón, Bernard Ghanem
ICCV, 2025
Project Page / arXiv / Code

We propose UnMix-NeRF, the first method integrating spectral unmixing into NeRF, enabling hyperspectral view synthesis, accurate unsupervised material segmentation, and intuitive material-based scene editing, significantly outperforming existing methods.

4D-Bench: Benchmarking Multi-modal Large Language Models for 4D Object Understanding
Wenxuan Zhu, Bing Li, Cheng Zheng, Jinjie Mai, Jun Chen, Letian Jiang, Abdullah Hamdi, Sara Rojas, Chia-Wen Lin, Mohamed Elhoseiny, Bernard Ghanem
ICCV, 2025
Project Page / arXiv / Code

4D-Bench introduces the first large-scale benchmark for evaluating multi-modal large language models on 4D object understanding, providing diverse tasks and datasets to advance reasoning about dynamic, temporally-evolving 3D data.

DATENeRF: Depth-Aware Text-based Editing of NeRFs
Sara Rojas, Julien Philip, Kai Zhang, Sai Bi, Fujun Luan, Bernard Ghanem, Kalyan Sunkavalli
ECCV, 2024
Project Page / arXiv

We introduce an inpainting approach that leverages the depth information of NeRF scenes to propagate 2D edits consistently across different images, ensuring robustness to inpainting errors and resampling artifacts.

TrackNeRF: Bundle Adjusting NeRF from Sparse and Noisy Views via Feature Tracks
Jinjie Mai, Wenxuan Zhu, Sara Rojas, Jesus Zarzar, Abdullah Hamdi, Guocheng Qian, Bing Li, Silvio Giancola, Bernard Ghanem
ECCV, 2024
Project Page / arXiv

TrackNeRF enhances NeRF reconstruction under sparse and noisy poses by enforcing global 3D consistency via feature tracks across views, inspired by bundle adjustment. It outperforms prior methods like BARF and SPARF, setting a new benchmark in challenging scenarios.

Re-ReND: Real-time Rendering of NeRFs across Devices
Sara Rojas, Jesus Zarzar, Juan C. Perez, Artsiom Sanakoyeu, Ali Thabet, Albert Pumarola, Bernard Ghanem
ICCV, 2023
GitHub Repo / arXiv

Re-ReND distills a trained NeRF by extracting the learned density into a mesh, while the learned color information is factorized into a set of matrices that represent the scene's light field. Re-ReND achieves over a 2.6-fold increase in rendering speed compared to the state of the art.

SegNeRF: 3D Part Segmentation with Neural Radiance Fields
Sara Rojas*, Jesus Zarzar*, Silvio Giancola, Bernard Ghanem
arXiv, 2022
arXiv

SegNeRF is a neural field representation that integrates a semantic field alongside the usual radiance field. It inherits from previous works the ability to perform novel view synthesis and 3D reconstruction, and additionally enables 3D part segmentation from a few images.

AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds
Abdullah Hamdi, Sara Rojas, Ali Thabet, Bernard Ghanem
ECCV, 2020
GitHub Repo / Video / arXiv

We perform transferable adversarial attacks on 3D point clouds by utilizing a point cloud autoencoder. Our attacks exceed the state of the art by up to 40% in transferability and by 38% in breaking SOTA 3D defenses on ModelNet40.


Kudos to Dr. Jon Barron for sharing his website template.