Research Focus
Develop computer vision techniques to understand and monitor forest ecosystems from the ground to the canopy. This project will explore how self-supervised and multi-modal learning can unlock new insights from rich remote sensing data, focusing primarily on LiDAR and multispectral imagery to reveal patterns of forest structure and ecological attributes. The research sits at the intersection of computer vision, remote sensing, and ecology, and emphasizes both methodological innovation and real-world impact; it will be conducted in collaboration with domain experts. Targeted outputs include a public software repository, a presentation, and a manuscript for academic publication.
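To give a flavor of the kind of geospatial data processing involved, here is a minimal sketch of deriving two common forest-monitoring products: a vegetation index (NDVI) from multispectral bands and a canopy height model (CHM) from LiDAR-derived elevation surfaces. The band ordering, array shapes, and random inputs below are illustrative assumptions, not project specifics; NumPy is assumed available.

```python
import numpy as np

# Hypothetical multispectral tile stacked as (band, row, col).
# Band order is an assumption here: index 2 = red, index 3 = near-infrared.
tile = np.random.default_rng(0).uniform(0.0, 1.0, size=(4, 64, 64))
red, nir = tile[2], tile[3]

# NDVI: a standard vegetation index computed per pixel from red and
# near-infrared reflectance; values fall in [-1, 1].
ndvi = (nir - red) / (nir + red + 1e-8)

# Hypothetical LiDAR-derived surfaces: a digital surface model (DSM, top of
# canopy) and a digital terrain model (DTM, bare earth). Their difference is
# a canopy height model (CHM) in meters.
dsm = np.random.default_rng(1).uniform(20.0, 40.0, size=(64, 64))
dtm = dsm - np.random.default_rng(2).uniform(0.0, 15.0, size=(64, 64))
chm = dsm - dtm  # per-pixel canopy height, always >= 0 by construction

print(ndvi.shape, chm.shape)
```

In practice these rasters would be read from georeferenced files (e.g., GeoTIFFs) rather than generated randomly, and the resulting NDVI and CHM layers could serve as inputs or targets for the machine learning models described above.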
Skills, Techniques, Methods
- Computer Vision
- Machine Learning
- Geospatial Data Processing
Research Conditions
The research will be conducted primarily in person in the Multimodal Vision Research Laboratory (MVRL) in McKelvey Hall. It will involve software development, dataset curation, training machine learning models, and model performance analysis.
Team Structure and Opportunities
The undergraduate fellow will work closely with a Ph.D. student mentor from MVRL, with weekly meetings with Nathan Jacobs and meetings every few weeks with our ecology collaborator. The fellow will be exposed to other computer vision research taking place in the lab through a weekly journal club and work-in-progress meeting.
Requirements
Python programming, machine learning, and prior experience working with remotely sensed data are helpful

Nathan Jacobs
jacobsn@wustl.edu