Publications

You can also find my articles on my Google Scholar profile.

SDformer-Flow: Spiking Neural Network Transformer for Event-based optical flow estimation

Published on arXiv, 2024

Event cameras generate asynchronous and sparse event streams capturing changes in light intensity. They offer significant advantages over conventional frame-based cameras, such as a higher dynamic range and a much higher data rate, making them particularly useful in scenarios involving fast motion or challenging lighting conditions. Spiking neural networks (SNNs) share similar asynchronous and sparse characteristics and are well-suited for processing data from event cameras. Inspired by the potential of transformers and spike-driven transformers (spikeformers) in other computer vision tasks, we propose two solutions for fast and robust optical flow estimation for event cameras: STTFlowNet and SDformerFlow. STTFlowNet adopts a U-shaped artificial neural network (ANN) architecture with spatiotemporal shifted-window self-attention (swin) transformer encoders, while SDformerFlow presents its fully spiking counterpart, incorporating swin spikeformer encoders. Furthermore, we present two variants of the spiking version with different neuron models. Our work is the first to make use of spikeformers for dense optical flow estimation. We conduct end-to-end training for all models using supervised learning. Our results yield state-of-the-art performance among SNN-based event optical flow methods on both the DSEC and MVSEC datasets, and show a significant reduction in power consumption compared to the equivalent ANNs.
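The sketch below is a minimal, illustrative PyTorch example of a spike-driven self-attention block of the kind spikeformers build on: a simple leaky integrate-and-fire (LIF) neuron binarizes the query, key, and value projections before attention is computed without softmax. The neuron model, layer sizes, and module structure are assumptions for illustration only, not the SDformerFlow implementation; surrogate gradients and shifted-window partitioning are omitted.

```python
# Illustrative sketch of a spike-driven self-attention block (assumed design,
# not the SDformerFlow code). Forward pass only; surrogate gradients omitted.
import torch
import torch.nn as nn


class LIFNeuron(nn.Module):
    """Simple leaky integrate-and-fire neuron with a hard threshold and soft reset."""

    def __init__(self, tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        self.tau = tau
        self.v_threshold = v_threshold

    def forward(self, x):                          # x: (T, B, N, C) input current per time step
        v = torch.zeros_like(x[0])
        spikes = []
        for t in range(x.shape[0]):
            v = v + (x[t] - v) / self.tau          # leaky integration of the membrane potential
            s = (v >= self.v_threshold).float()    # emit a spike when the threshold is crossed
            v = v - s * self.v_threshold           # soft reset after spiking
            spikes.append(s)
        return torch.stack(spikes)                 # binary spike tensor, same shape as x


class SpikingSelfAttention(nn.Module):
    """Self-attention where Q, K, V are binary spike maps (spike-driven attention)."""

    def __init__(self, dim: int = 96, heads: int = 3):
        super().__init__()
        self.heads, self.dim_head = heads, dim // heads
        self.q_proj = nn.Sequential(nn.Linear(dim, dim), LIFNeuron())
        self.k_proj = nn.Sequential(nn.Linear(dim, dim), LIFNeuron())
        self.v_proj = nn.Sequential(nn.Linear(dim, dim), LIFNeuron())
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (T, B, N, C) tokens over T time steps
        T, B, N, C = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        def split(z):                              # -> (T, B, heads, N, dim_head)
            return z.view(T, B, N, self.heads, self.dim_head).transpose(2, 3)

        q, k, v = split(q), split(k), split(v)
        # Attention on {0, 1} spikes: scaled dot product, no softmax normalization.
        attn = (q @ k.transpose(-2, -1)) * (self.dim_head ** -0.5)
        out = (attn @ v).transpose(2, 3).reshape(T, B, N, C)
        return self.out(out)


if __name__ == "__main__":
    tokens = torch.rand(4, 2, 49, 96)              # (time steps, batch, tokens, channels)
    print(SpikingSelfAttention()(tokens).shape)    # torch.Size([4, 2, 49, 96])
```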

Recommended citation: Y. Tian and J. Andrade-Cetto. SDformer-Flow: Spiking Neural Network Transformer for Event-based optical flow estimation, 2024, arXiv preprint arXiv:2409.04082. https://arxiv.org/abs/2409.04082

Egomotion from event-based SNN optical flow

Published in 2023 ACM International Conference on Neuromorphic Systems, 2023

We present a method for computing egomotion using event cameras with a pre-trained optical flow spiking neural network (SNN). To address the aperture problem encountered in the sparse and noisy normal flow of the initial SNN layers, our method includes a sliding-window bin-based pooling layer that computes a fused full flow estimate. To add robustness to noisy flow estimates, instead of computing the egomotion from vector averages, our method optimizes the intersection of constraints. The method also includes a RANSAC step to robustly deal with outlier flow estimates in the pooling layer. We validate our approach on both simulated and real scenes and compare our results favorably to state-of-the-art methods. However, our method may be sensitive to datasets and motion speeds that differ from those used for training, which limits its generalizability.
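As an illustration of the intersection-of-constraints idea, the NumPy sketch below fuses noisy normal flow observations within one pooling window into a single full flow vector, with a small RANSAC loop to reject outliers before the final least-squares fit. The window contents, thresholds, and iteration counts are assumptions for illustration, not the paper's parameters.

```python
# Illustrative sketch: fusing normal flow constraints n_i . u = m_i into a full
# flow vector u via intersection of constraints, with RANSAC outlier rejection.
# Thresholds and iteration counts are assumed values, not the paper's settings.
import numpy as np


def intersection_of_constraints(normals, magnitudes):
    """Least-squares full flow u such that n_i . u ~= m_i for every constraint."""
    A = np.asarray(normals, dtype=float)         # (K, 2) unit normal directions
    b = np.asarray(magnitudes, dtype=float)      # (K,) normal flow magnitudes
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u                                     # (2,) fused full flow estimate


def ransac_full_flow(normals, magnitudes, iters=100, thresh=0.5, seed=0):
    """RANSAC over pairs of constraints; keeps the model with the most inliers."""
    rng = np.random.default_rng(seed)
    normals = np.asarray(normals, dtype=float)
    magnitudes = np.asarray(magnitudes, dtype=float)
    best_u, best_inliers = None, np.zeros(len(magnitudes), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(magnitudes), size=2, replace=False)
        try:
            u = np.linalg.solve(normals[idx], magnitudes[idx])   # two constraints intersect
        except np.linalg.LinAlgError:
            continue                                             # near-parallel normals, skip
        residual = np.abs(normals @ u - magnitudes)
        inliers = residual < thresh
        if inliers.sum() > best_inliers.sum():
            best_u, best_inliers = u, inliers
    if best_u is None:
        return intersection_of_constraints(normals, magnitudes)
    # Refit on all inliers for the final estimate.
    return intersection_of_constraints(normals[best_inliers], magnitudes[best_inliers])


if __name__ == "__main__":
    true_u = np.array([2.0, -1.0])
    angles = np.linspace(0, np.pi, 20, endpoint=False)
    n = np.stack([np.cos(angles), np.sin(angles)], axis=1)       # normal directions
    m = n @ true_u + 0.05 * np.random.default_rng(1).standard_normal(20)
    m[:3] += 3.0                                                 # inject a few outliers
    print(ransac_full_flow(n, m))                                # approximately [2, -1]
```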

Recommended citation: Y. Tian and J. Andrade-Cetto. Egomotion from event-based SNN optical flow, 2023 ACM International Conference on Neuromorphic Systems, 2023, Santa Fe, NM, USA, pp. 8:1-8. http://www.iri.upc.edu/files/scidoc/2747-Egomotion-from-event-based-SNN-optical-flow.pdf

Event transformer FlowNet for optical flow estimation

Published in 2022 British Machine Vision Conference, 2022

Event cameras are bioinspired sensors that produce asynchronous and sparse streams of events at image locations where an intensity change is detected. They can detect fast motion with low latency, high dynamic range, and low power consumption. Over the past decade, considerable effort has gone into developing event-camera solutions for robotics applications. In this work, we address their use for fast and robust computation of optical flow. We present ET-FlowNet, a hybrid RNN-ViT architecture for optical flow estimation. Visual transformers (ViTs) are ideal candidates for learning global context in visual tasks, and we argue that rigid body motion is a prime case for the use of ViTs, since long-range dependencies in the image hold during rigid body motion. We perform end-to-end training with a self-supervised learning method. Our results show performance comparable to, and in some cases exceeding, state-of-the-art coarse-to-fine event-based optical flow estimation.
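The sketch below illustrates the general shape of a hybrid recurrent-plus-transformer encoder: a convolutional GRU aggregates local spatiotemporal features over binned event frames, and a standard transformer encoder then models long-range dependencies across the resulting patch tokens. The ConvGRU cell, layer sizes, and token layout are assumptions for illustration, not the ET-FlowNet architecture.

```python
# Illustrative sketch of a hybrid RNN + ViT encoder (assumed design, not the
# ET-FlowNet code): ConvGRU for local spatiotemporal features, transformer for
# global context over patch tokens.
import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    """Convolutional GRU cell that accumulates features over an event-frame sequence."""

    def __init__(self, channels: int):
        super().__init__()
        self.gates = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)
        self.cand = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde


class HybridRNNViTEncoder(nn.Module):
    """ConvGRU extracts local spatiotemporal features; a transformer adds global context."""

    def __init__(self, in_ch: int = 2, dim: int = 64, heads: int = 4, depth: int = 2):
        super().__init__()
        self.dim = dim
        self.stem = nn.Conv2d(in_ch, dim, kernel_size=4, stride=4)   # patchify event frames
        self.gru = ConvGRUCell(dim)
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, depth)

    def forward(self, event_frames):                  # (B, T, 2, H, W) polarity-binned frames
        B, T, _, H, W = event_frames.shape
        h = torch.zeros(B, self.dim, H // 4, W // 4, device=event_frames.device)
        for t in range(T):                            # recurrent aggregation over time
            h = self.gru(self.stem(event_frames[:, t]), h)
        tokens = h.flatten(2).transpose(1, 2)         # (B, N, dim) patch tokens
        return self.transformer(tokens)               # long-range (global) context


if __name__ == "__main__":
    frames = torch.rand(1, 5, 2, 64, 64)              # 5 binned two-channel event frames
    print(HybridRNNViTEncoder()(frames).shape)        # torch.Size([1, 256, 64])
```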

Recommended citation: Y. Tian and J. Andrade-Cetto. Event transformer FlowNet for optical flow estimation, 2022 British Machine Vision Conference, 2022, London. http://www.iri.upc.edu/files/scidoc/2645-Event-transformer-FlowNet-for-optical-flow-estimation.pdf