Portfolio

Real-time Human Action Recognition

Developed a human action recognition method based on 3D skeleton data for the Temi assistive robot. Created an Android app that performs human pose estimation, recognizes basic human actions, and logs the time spent on each action in real time.

Assistive Robotics Human Action Recognition Human Pose Estimation Real-time Processing Temi Robot Android
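
As an illustration of the timing component, here is a minimal Python sketch of per-action duration tracking from a stream of predicted labels. The class and method names are hypothetical, and the production version runs inside the Android app rather than in Python.

```python
import time
from collections import defaultdict

class ActionTimer:
    """Accumulates the time spent on each recognized action (illustrative)."""

    def __init__(self):
        self.totals = defaultdict(float)   # seconds spent per action label
        self._label = None                 # action currently in progress
        self._since = None                 # timestamp of the last update

    def update(self, label, now=None):
        """Feed the latest predicted action label, e.g. once per frame."""
        now = time.monotonic() if now is None else now
        if self._label is not None:
            self.totals[self._label] += now - self._since
        self._label, self._since = label, now

# Example: feed per-frame predictions, then read the accumulated times.
timer = ActionTimer()
for label in ["standing", "walking", "walking", "sitting"]:
    timer.update(label)
print(dict(timer.totals))
```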

FRAILWATCH

Developed a monocular 3D human action recognition benchmark for frailty assessment. Explored multiple approaches, including direct estimation, 2D-to-3D lifting, and SMPL-based methods. Recorded a test dataset with an Optitrack motion capture system and integrated algorithms for frailty tests, including balance and gait speed.

3D Pose Estimation Optitrack Deep Learning Benchmark Development Gait Analysis Python
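
By way of example, gait speed can be read off a 3D pose sequence roughly as follows. This is a simplified sketch, assuming metric-scale poses with a vertical y-axis and a straight walking path; it is not the benchmark's actual scoring code.

```python
import numpy as np

def gait_speed(pelvis_xyz: np.ndarray, fps: float) -> float:
    """Estimate walking speed in m/s from a (T, 3) pelvis trajectory,
    e.g. from the monocular 3D pose estimate or the Optitrack ground truth."""
    horizontal = pelvis_xyz[:, [0, 2]]                  # drop the vertical (y) axis
    distance = np.linalg.norm(horizontal[-1] - horizontal[0])
    duration = (len(pelvis_xyz) - 1) / fps              # elapsed time in seconds
    return distance / duration
```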

SDformerFlow

The first fully spiking transformer-based architecture for optical flow estimation with event cameras. The method achieves state-of-the-art performance among SNN-based approaches while significantly reducing power consumption.

Deep Learning Surrogate Gradient Event Cameras Transformers Optical Flow Python PyTorch
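
The surrogate-gradient mechanism named in the tags can be sketched in PyTorch as follows: the forward pass emits binary spikes, while the backward pass substitutes a smooth derivative so the network remains trainable. This is a generic leaky integrate-and-fire layer, not the published SDformerFlow code.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()             # binary spike

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Smooth surrogate derivative around the firing threshold.
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

class LIF(torch.nn.Module):
    """Leaky integrate-and-fire layer, usable in place of a ReLU."""

    def __init__(self, beta: float = 0.9, threshold: float = 1.0):
        super().__init__()
        self.beta, self.threshold = beta, threshold

    def forward(self, x):                  # x: (time, batch, features)
        v, spikes = torch.zeros_like(x[0]), []
        for xt in x:                       # unroll over time steps
            v = self.beta * v + xt         # leaky membrane integration
            s = SpikeFn.apply(v - self.threshold)
            v = v - s * self.threshold     # soft reset after a spike
            spikes.append(s)
        return torch.stack(spikes)
```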

Egomotion from SNN Optical Flow

A method for computing egomotion using event cameras and a pre-trained optical flow SNN. Our approach applies sliding-window pooling to the estimated flow field and uses RANSAC for robust egomotion estimation.

Neuromorphic Computing Unsupervised Learning Robotics Egomotion RANSAC C++ CUDA
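
The pooling-plus-RANSAC step might look like the sketch below, which pools a dense flow field and fits a dominant 2D translation with a one-point RANSAC. The actual project runs in C++/CUDA and estimates full egomotion, but the inlier/outlier logic is the same.

```python
import numpy as np

def pool_flow(flow: np.ndarray, win: int = 8) -> np.ndarray:
    """Average-pool a dense (H, W, 2) flow field over win x win windows."""
    H, W, _ = flow.shape
    h, w = H // win, W // win
    return flow[: h * win, : w * win].reshape(h, win, w, win, 2).mean(axis=(1, 3))

def ransac_translation(vecs: np.ndarray, iters: int = 200, tol: float = 0.5):
    """Fit a dominant translation to (N, 2) pooled flow vectors with RANSAC."""
    rng = np.random.default_rng(0)
    best, best_count = vecs.mean(axis=0), 0
    for _ in range(iters):
        hypothesis = vecs[rng.integers(len(vecs))]          # one-point hypothesis
        inliers = np.linalg.norm(vecs - hypothesis, axis=1) < tol
        if inliers.sum() > best_count:                      # keep the consensus set
            best, best_count = vecs[inliers].mean(axis=0), int(inliers.sum())
    return best

# Usage: t = ransac_translation(pool_flow(flow).reshape(-1, 2))
```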

ET-FlowNet

ET-FlowNet is a hybrid RNN-ViT architecture for optical flow estimation using event cameras. We use vision transformers to capture global context in rigid-body motion scenarios.

Deep Learning Event Cameras Transformers RNN Self-Supervised Learning Python PyTorch
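
A toy version of the hybrid idea, assuming patch tokens as input: a GRU cell carries temporal state per token while a transformer layer mixes information globally across patches. Dimensions and layer choices here are placeholders, not the published architecture.

```python
import torch
import torch.nn as nn

class HybridRNNViTBlock(nn.Module):
    """Recurrent state per patch token plus global self-attention."""

    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.rnn = nn.GRUCell(dim, dim)                      # temporal memory per token
        self.attn = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)      # global spatial mixing

    def forward(self, tokens: torch.Tensor, state: torch.Tensor):
        # tokens, state: (batch, patches, dim); one time step of the event stream
        b, p, d = tokens.shape
        state = self.rnn(tokens.reshape(b * p, d),
                         state.reshape(b * p, d)).reshape(b, p, d)
        return self.attn(state), state                       # attend across all patches
```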

Indoor Mapping Payload System

A payload system for indoor mapping designed for disaster rescue applications. The system integrates a depth camera, an IMU, and a rover platform, using RTMBSLAM with an EKF for sensor fusion.

SLAM Sensor Fusion VIO Robotics Extended Kalman Filter Indoor Mapping ROS C++
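
The fusion step follows the standard EKF predict/update cycle, sketched below for a single axis with a [position, velocity] state: IMU acceleration drives the prediction and camera/SLAM position fixes drive the correction. With this linear motion model the filter reduces to a plain Kalman update; the payload itself fuses the full 6-DoF pose.

```python
import numpy as np

class PoseEKF:
    """Minimal 1-axis predict/update filter (illustrative, not the payload code)."""

    def __init__(self, q: float = 1e-3, r: float = 1e-2):
        self.x = np.zeros(2)                   # state: [position, velocity]
        self.P = np.eye(2)                     # state covariance
        self.Q = q * np.eye(2)                 # process noise
        self.R = np.array([[r]])               # measurement noise

    def predict(self, accel: float, dt: float):
        """Propagate the state with an IMU acceleration sample."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x + np.array([0.5 * dt**2, dt]) * accel
        self.P = F @ self.P @ F.T + self.Q

    def update(self, pos_meas: float):
        """Correct the state with a camera/SLAM position fix."""
        H = np.array([[1.0, 0.0]])
        y = pos_meas - H @ self.x              # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ H) @ self.P
```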