Invited talks

Title: AI-Enhanced Volumetric Optical Microscopy for Faster Imaging in Living Tissues

Fei Xia (CNRS)

Abstract:  Volumetric optical microscopy is essential for visualizing living biological systems at high resolution. Acquisition with non-diffracting beams enables rapid 3D imaging by projecting volumes onto 2D images, but the resulting projections lack depth information. I will discuss our recent work on MicroDiffusion, a new reconstruction model for high-quality, depth-resolved 3D reconstruction from limited 2D projections. MicroDiffusion pretrains an implicit neural representation (INR) model to create a preliminary 3D volume that guides the generative process of a denoising diffusion probabilistic model, leading to enhanced 3D reconstruction quality. Our work highlights the interesting intersection between deep learning and optical microscopy for faster biomedical optical imaging, with the potential to enable new scientific discoveries.
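
To make the workflow concrete, here is a minimal sketch of the general pattern the abstract describes: an INR fit to the 2D projections yields a rough 3D volume, which then conditions the denoiser of a diffusion model. This is an illustration under assumed module names, shapes, and a toy noise schedule, not the MicroDiffusion implementation.

```python
# Minimal sketch (not the authors' code): an INR is first fit to the 2D projections
# to produce a rough 3D volume; that volume then conditions a denoising diffusion
# model. All names, shapes, and the noise schedule are illustrative assumptions.
import torch
import torch.nn as nn

class INR(nn.Module):
    """Maps (x, y, z) coordinates to intensity; pretrained against the 2D projections."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):              # coords: (N, 3)
        return self.net(coords)             # (N, 1) intensities

class ConditionalDenoiser(nn.Module):
    """Predicts the noise in a noisy volume, conditioned on the INR's rough volume."""
    def __init__(self, ch=16):
        super().__init__()
        # input: noisy volume and INR volume concatenated along the channel axis
        self.net = nn.Sequential(
            nn.Conv3d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, noisy_vol, inr_vol, t):
        # a full DDPM would also embed the timestep t; omitted here for brevity
        return self.net(torch.cat([noisy_vol, inr_vol], dim=1))

# One illustrative DDPM training step on a 32^3 toy volume.
D = 32
coords = torch.stack(torch.meshgrid(
    *[torch.linspace(-1, 1, D)] * 3, indexing="ij"), dim=-1).reshape(-1, 3)
inr = INR()
inr_vol = inr(coords).reshape(1, 1, D, D, D).detach()   # rough volume from the (pre)trained INR

target_vol = torch.rand(1, 1, D, D, D)                  # stand-in for a ground-truth volume
t = torch.randint(0, 1000, (1,))
alpha_bar = torch.cos(t / 1000 * torch.pi / 2) ** 2     # toy cosine noise schedule
noise = torch.randn_like(target_vol)
noisy_vol = alpha_bar.sqrt() * target_vol + (1 - alpha_bar).sqrt() * noise

denoiser = ConditionalDenoiser()
loss = nn.functional.mse_loss(denoiser(noisy_vol, inr_vol, t), noise)
loss.backward()                                         # standard epsilon-prediction loss
```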

Biography: Fei Xia is an incoming assistant professor of electrical engineering and computer science at the University of California, Irvine (UC Irvine). She is currently a postdoctoral researcher at the French National Centre for Scientific Research (CNRS) in Paris, France. She received her Ph.D. from Cornell University. Her primary research interest lies in developing solutions that extend the capabilities of optical imaging, sensing, and information processing, particularly for biology and medicine.

Title: 7 Foundational Principles to Learn Representations

Alexandre Alahi (EPFL)

Abstract:  In this talk, I will share seven principles that serve as the backbone for mastering the field of representation learning. I will draw upon a rich tapestry of interdisciplinary research, showcasing practical applications in the context of autonomous mobility.

Biography: Alexandre Alahi is an associate professor at EPFL leading the Visual Intelligence for Transportation laboratory (VITA). Before joining EPFL in 2017, he was a Post-doc and Research Scientist at Stanford University. His research lies at the intersection of Computer Vision, Machine Learning, and Robotics applied to transportation & mobility. He works on the theoretical challenges and practical applications of socially-aware Artificial Intelligence, i.e., systems equipped with perception and social intelligence. His research enables systems to detect, track, and predict human motion dynamics at all scales. 

In 2022 and 2023, Alexandre was recognized as one of the top 100 most influential scholars in Computer Vision over the past 10 years. His work on human motion prediction received the editor’s choice award from the journal Image and Vision Computing (2021). His work on human pose estimation received an honourable mention at an ICCV workshop (2019). Finally, his research was transferred to a startup that has detected and tracked more than 100 million pedestrians in public spaces (including train stations).

Title:  Nature-inspired Full-Stokes Polarimetric Camera

Luat Vuong (UC Riverside)

Abstract:  Typically, sensitivity to polarization accompanies a loss in image resolution. How do insects achieve high angular acuity with polarization sensitivity? We bio-speculatively design a solution-processed polarimetric encoder composed of conducting-polymer nanofibers. By imaging the highly corrugated speckle patterns with a polarization-agnostic CCD sensor and training a shallow neural network, we achieve full-Stokes computational imaging from a single image capture. Our results present new guidelines and impressive possibilities for compressed polarimetric sensing with meso-ordered, multiscale, self-assembled materials in future hybrid computing systems.
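
As a rough sketch of the computational side, the following shallow network regresses the four Stokes parameters (S0, S1, S2, S3) from a single speckle image and would be trained on speckle images paired with reference Stokes measurements. The architecture, image size, and data here are illustrative assumptions, not the system described in the talk.

```python
# Minimal sketch (an assumption, not the system from the talk): a shallow CNN maps
# one speckle image, captured through the nanofiber encoder, to the four Stokes
# parameters. Architecture, image size, and data below are illustrative.
import torch
import torch.nn as nn

class ShallowStokesNet(nn.Module):
    def __init__(self, img_size=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(16 * (img_size // 4) ** 2, 4)   # -> (S0, S1, S2, S3)

    def forward(self, speckle):                               # speckle: (B, 1, H, W)
        return self.head(self.features(speckle))

# Training pairs: speckle images with reference Stokes vectors measured by a
# conventional polarimeter (random tensors stand in for real data here).
model = ShallowStokesNet()
speckle = torch.rand(8, 1, 64, 64)
stokes_ref = torch.rand(8, 4)
loss = nn.functional.mse_loss(model(speckle), stokes_ref)
loss.backward()
```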

Biography:  Dr. Vuong is an applied optical physicist in the Mechanical Engineering Department at the University of California at Riverside. She admires the feed-forward decision-making exhibited by many creatures in nature.

Title:  Computational Imaging across the Scales

Ivo Ihrke (University of Siegen)

Abstract:  The Computational Imaging methodology and community bridge gaps across application domains, making it possible to draw interesting connections between seemingly disparate problems.

Traditionally, imaging modalities have been developed by separate communities, often reinventing similar concepts, but also developing different ideas when faced with common problems. The underlying physical concepts and mathematical models are, however, sufficiently similar to enable a transfer of insights from one domain to the next. In my talk I will therefore discuss imaging modalities at different scales, from synthetic aperture radar through optical imaging to electron microscopy, and point out similarities and differences in the underlying acquisition problems. In doing so, I aim to reinforce the spirit of our community that, essentially, most imaging problems are similar and can be understood and dealt with on a common theoretical basis. This, in turn, enables us to serve as “translators” between different application domains and aids knowledge transfer and homogenization.

Biography:  Ivo Ihrke is a professor of Computational Sensing at the University of Siegen, Germany, a member of the university’s Center for Sensor Systems (ZESS), and affiliated with the Fraunhofer Institute for High Frequency Physics and Radar Techniques. Prior to joining Siegen, he was a staff scientist in the Carl Zeiss research department, which he joined on leave from Inria Bordeaux Sud-Ouest, where he was a permanent researcher. At Inria he led the research project “Generalized Image Acquisition and Analysis”, which was supported by an Emmy Noether fellowship of the German Research Foundation (DFG). Prior to that, he headed a research group within the Cluster of Excellence “Multimodal Computing and Interaction” at Saarland University. He was an Associate Senior Researcher at the MPI Informatik and was associated with the Max Planck Center for Visual Computing and Communications. Before joining Saarland University, he was a postdoctoral research fellow at the University of British Columbia, Vancouver, Canada, supported by the Alexander von Humboldt Foundation. He received an MS degree in Scientific Computing from the Royal Institute of Technology (KTH), Stockholm, Sweden, and a PhD (summa cum laude) in Computer Science from Saarland University.

He is interested in all aspects of Computational Imaging, including theory, mathematical modeling, algorithm design and their efficient implementation, as well as hardware concepts and their experimental realization and characterization.

Title: Reverse-Engineering Evolved Cameras

Emma Alexander (Northwestern University)

Abstract:  Biological vision systems are efficient, robust, and well-adapted to specific tasks in specific environments. By many metrics, they continue to outperform the state of the art in artificial vision. Lessons from natural vision are particularly valuable for computational imaging systems, because eyes and brains evolve together to reveal scene information. By analyzing animals’ behaviors, brains, and environments, we can uncover computational models that enable better camera designs. This talk will describe cameras developed with a blend of techniques from the physical, computational, and biological sciences, and discuss tools for finding useful inspiration in the natural world.

Biography:  Emma Alexander is an assistant professor of computer science at Northwestern University, where she leads the Bio Inspired Vision Lab. Her training in physics (BS Yale), computer science (BS Yale, MS and PhD Harvard), and vision science (postdoc, UC Berkeley) supports her interest in low-level, physics-based, bio-inspired artificial vision. Her work won best demo at ICCP 2018.

Title: Computational light-field imaging for mesoscale intravital microscopy

Jiamin Wu (Tsinghua University)

Abstract:  Long-term subcellular intravital 3D imaging in mammals is vital for studying diverse intercellular behaviors and organelle functions during native physiological processes. However, optical heterogeneity, tissue opacity, and phototoxicity pose great challenges, forcing a tradeoff between field of view, resolution, speed, and sample health. In this talk, I will discuss our recent work on mesoscale intravital fluorescence microscopy based on computational imaging methods. Various large-scale, fast subcellular processes are observed, including brain-wide neural recording in mice at single-neuron resolution, 3D calcium propagation in cardiac cells and human cerebral organoids, high-speed 3D voltage imaging across the whole Drosophila brain, membrane dynamics during embryo development, and large-scale cell migrations during immune response and tumor metastasis, enabling simultaneous in vivo studies of morphological and functional dynamics in 3D.

Biography:  Jiamin Wu is an associate professor in the Department of Automation at Tsinghua University and a PI at the IDG/McGovern Institute for Brain Research, Tsinghua University. His current research interests focus on computational imaging and systems biology, with a particular emphasis on developing mesoscale fluorescence microscopy for observing large-scale biological dynamics in vivo. He has proposed the frameworks of scanning light-field imaging, for ultrafine measurement and synthesis of the light field, and digital adaptive optics, for high-speed multi-site aberration correction. In the past five years, he has published more than 40 journal papers in venues including Nature, Cell, Nature Photonics, Nature Biotechnology, and Nature Methods. He has served as an Associate Editor of PhotoniX and of IEEE Transactions on Circuits and Systems for Video Technology.

Title: Nothing Lasts Forever: Harnessing Ephemeral Signals for Dynamic Scene Perception

Mark Sheinin (Weizmann Institute of Science)

Abstract:  Most computer vision algorithms rely on inference from static scene features such as object textures and edges. However, some scene information can be uniquely retrieved by sensing transient signals that exist for only a short duration. For example, a person’s footsteps in a hallway generate minute surface vibrations that may reveal the person’s walking path if sensed on the floor. Moreover, each footstep can generate a thermal footprint, which can also reveal the same information if imaged in the infrared domain. In this talk, I will cover two projects focusing on inference from such ephemeral signals. First, I will discuss how sensing vibrations at multiple surface points enables the localization of transient disturbance sources, such as the impact of a ping-pong ball on a table or a person’s footsteps on the floor. Then, I will describe a novel active vision system that combines thermal imaging with laser illumination to enable dynamic vision tasks like object tracking, structure from motion, and optical flow by painting and tracking transient heat patterns on completely textureless objects.
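
The abstract does not spell out the localization algorithm; as one classical point of reference, the sketch below localizes an impact from the arrival-time differences of a wave at several sensing points (a time-difference-of-arrival grid search), assuming a known, constant wave speed. All values are toy numbers, not taken from the talk.

```python
# Illustrative sketch, not the speaker's method: a classical way to localize an
# impact from vibrations sensed at several surface points is a time-difference-of-
# arrival (TDOA) grid search, assuming a known, constant wave speed in the surface.
import numpy as np

def localize_tdoa(sensor_xy, arrival_times, wave_speed, grid_res=200, extent=1.0):
    """sensor_xy: (M, 2) sensor positions [m]; arrival_times: (M,) seconds."""
    xs = np.linspace(-extent, extent, grid_res)
    ys = np.linspace(-extent, extent, grid_res)
    best, best_err = None, np.inf
    for x in xs:
        for y in ys:
            dists = np.linalg.norm(sensor_xy - np.array([x, y]), axis=1)
            residual = arrival_times - dists / wave_speed
            # only time *differences* matter: the unknown emission time is a
            # constant offset in the residual and cancels in the variance
            err = np.var(residual)
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Toy example: 4 sensors at the corners of a 2 m x 2 m table, impact at (0.3, -0.2).
sensors = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
source, c = np.array([0.3, -0.2]), 500.0                     # assumed wave speed [m/s]
times = np.linalg.norm(sensors - source, axis=1) / c + 1e-3  # + unknown emission time
print(localize_tdoa(sensors, times, c))                      # ~ (0.3, -0.2)
```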

Biography:  Mark Sheinin is an assistant professor in the Computer Science Department at the Weizmann Institute of Science. Before that, he was a postdoctoral research associate at Carnegie Mellon University’s Robotics Institute. He received his Ph.D. in Electrical Engineering from the Technion—Israel Institute of Technology in 2019. His work received the Best Student Paper Award at CVPR 2017 and a Best Paper Honorable Mention Award at CVPR 2022. His research focuses on expanding the capabilities of computer vision beyond conventional imaging.

Title: Multimodal Foundation Models

Amir Zamir (EPFL)

Abstract:  I will discuss the role of multimodality in learning – specifically, how to learn a single “foundation” model that can predict an arbitrary set of modalities given another arbitrary set of modalities, and how multimodality can be leveraged to learn a better single-modal representation. I will give an overview of our past work on this topic, e.g., 4M (https://4m.epfl.ch/) and MultiMAE (https://multimae.epfl.ch/), follow-up works, and future explorations.
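
As a toy illustration of the masked multimodal modeling idea behind such models, the sketch below embeds tokens from several modalities into a single sequence, replaces a random subset with a learned mask token, and trains a small transformer to reconstruct the held-out tokens. The modalities, sizes, and architecture are illustrative assumptions, not the 4M or MultiMAE code.

```python
# Toy sketch of masked multimodal modeling (an illustration, not the 4M/MultiMAE
# code): tokens from several modalities share one sequence, a random subset is
# replaced by a mask token, and the model learns to reconstruct the held-out tokens.
import torch
import torch.nn as nn

D, T = 64, 16                                   # embedding dim, tokens per modality
modalities = ["rgb", "depth", "caption"]        # hypothetical modality set

embed = nn.ModuleDict({m: nn.Linear(8, D) for m in modalities})    # per-modality input proj
unembed = nn.ModuleDict({m: nn.Linear(D, 8) for m in modalities})  # per-modality output proj
mod_emb = nn.Embedding(len(modalities), D)      # tells the model which modality a token is
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True), num_layers=2)
mask_token = nn.Parameter(torch.zeros(1, 1, D))

# Fake tokenized inputs: (batch, tokens, feature) per modality.
batch = {m: torch.rand(2, T, 8) for m in modalities}

# Embed, tag with a modality embedding, and replace ~75% of tokens by the mask token.
seq, targets, masks = [], [], []
for i, m in enumerate(modalities):
    x = embed[m](batch[m]) + mod_emb(torch.tensor(i))
    mask = torch.rand(x.shape[:2]) < 0.75
    seq.append(torch.where(mask.unsqueeze(-1), mask_token.expand_as(x), x))
    targets.append(batch[m])
    masks.append(mask)

out = backbone(torch.cat(seq, dim=1))           # joint sequence over all modalities
loss = 0.0
for i, m in enumerate(modalities):
    pred = unembed[m](out[:, i * T:(i + 1) * T])
    loss = loss + nn.functional.mse_loss(pred[masks[i]], targets[i][masks[i]])
loss.backward()                                 # reconstruct only the masked tokens
```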

Biography:  Amir Zamir is an Assistant Professor of computer science at the Swiss Federal Institute of Technology Lausanne (EPFL). His research is in computer vision and machine learning. Before joining EPFL in 2020, he was with UC Berkeley, Stanford, and UCF. He has received paper awards at SIGGRAPH 2022, CVPR 2020, CVPR 2018, and CVPR 2016, as well as the NVIDIA Pioneering Research Award 2018, the PAMI Everingham Prize 2022, and the ECCV/ECVA Young Researcher Award 2022. His research has been covered by press outlets such as The New York Times and Forbes. He was also the computer vision and machine learning chief scientist of Aurora Solar, a Forbes AI 50 company, from 2015 to 2022.