Sara Rojas

I'm a PhD Candidate at KAUST in Saudi Arabia, under the supervision of Professor Bernard Ghanem.

My experience spans 3D computer vision: neural rendering, 3D reconstruction, 3D-based recognition tasks, diffusion models, and robustness.

I recently completed a research internship at Naver Labs Europe, where I worked on extending MASt3R to better understand humans in the wild. I was fortunate to be advised by Gregory Rogez, Matthieu Armando, and Vincent Leroy.

Prior to that, I interned at Adobe Research, where I worked under the guidance of Kalyan Sunkavalli. I also collaborated with Reality Labs at Meta in Zurich, mentored by Albert Pumarola and Ali Thabet. Earlier, I conducted research at the University of Southern California with Autumn Kulaga.

Email  /  CV  /  Scholar  /  X  /  Github

Last updated: April 2025


Research

I’m interested in 3D computer vision, deep learning, generative AI, and image processing. Most of my research focuses on NeRF and its applications, including scene editing and efficiency. Lately, I’ve been working on 3D reconstruction.

DATENeRF: Depth-Aware Text-based Editing of NeRFs
Sara Rojas, Julien Philip, Kai Zhang, Sai Bi, Fujun Luan, Bernard Ghanem, Kalyan Sunkavalli
ECCV, 2024
Project Page / arXiv

We introduce an inpainting approach that leverages the depth information of NeRF scenes to propagate 2D edits consistently across images, making the edits robust to errors and resampling artifacts.

TrackNeRF: Bundle Adjusting NeRF from Sparse and Noisy Views via Feature Tracks
Jinjie Mai, Wenxuan Zhu, Sara Rojas, Jesus Zarzar, Abdullah Hamdi, Guocheng Qian, Bing Li, Silvio Giancola, Bernard Ghanem
ECCV, 2024
Project Page / arXiv

TrackNeRF enhances NeRF reconstruction under sparse and noisy poses by enforcing global 3D consistency via feature tracks across views, inspired by bundle adjustment. It outperforms prior methods like BARF and SPARF, setting a new benchmark in challenging scenarios.

Re-ReND: Real-time Rendering of NeRFs across Devices
Sara Rojas, Jesus Zarzar, Juan C. Perez, Artsiom Sanakoyeu, Ali Thabet, Albert Pumarola, Bernard Ghanem
ICCV, 2023
Github Repo / arXiv

Re-ReND distills a trained NeRF by extracting the learned density into a mesh, while the learned color information is factorized into a set of matrices representing the scene's light field. Re-ReND achieves over a 2.6-fold increase in rendering speed compared to the state of the art.

SegNeRF: 3D Part Segmentation with Neural Radiance Fields
Jesus Zarzar*, Sara Rojas*, Silvio Giancola, Bernard Ghanem
arXiv, 2022
arXiv

A neural field representation that integrates a semantic field alongside the usual radiance field. SegNeRF retains the ability of prior work to perform novel view synthesis and 3D reconstruction, while additionally enabling 3D part segmentation from only a few images.

AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds
Abdullah Hamdi, Sara Rojas, Ali Thabet, Bernard Ghanem
ECCV, 2020
Github Repo / video / arXiv

We perform transferable adversarial attacks on 3D point clouds by utilizing a point cloud autoencoder. Our attacks exceed the state of the art by up to 40% in transferability and by 38% in breaking state-of-the-art 3D defenses on ModelNet40.


Kudos to Dr. Jon Barron for sharing his website template.