Research
I'm interested in computer vision problems related to egocentric vision, video understanding, 3D reconstruction, and scene understanding. My research so far has focused primarily on meta-learning, incremental learning, and data-efficient learning. Selected works are highlighted below.
|
|
Few‑Shot Class Incremental Point Cloud Segmentation
Tanuj Sur,
Samrat Mukherjee,
Kaizer Rahman,
Dr. Subhasis Chaudhury,
Dr. Biplab Banerjee
Under review at IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025
3D point cloud recognition and segmentation are vital for scene understanding and motion planning in autonomous driving and robotics.
Traditional methods struggle with dynamic environments and novel categories, motivating the combination of few-shot learning and class-incremental learning for 3D point cloud segmentation.
Despite some existing approaches, Few-Shot Class Incremental Learning (FSCIL) for point cloud segmentation remains largely unexplored.
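To make the FSCIL setting concrete, the sketch below is a generic, hypothetical illustration (not the method under review): a growing bank of per-class prototypes is built from abundant base-session data, later sessions register novel classes from only a few labeled point-cloud features, and points are labeled by nearest-prototype matching so earlier classes are never overwritten. The class counts, feature dimensions, and the PrototypeBank helper are all illustrative assumptions.

```python
# Generic prototype-bank illustration of the FSCIL setting for point cloud
# segmentation; hypothetical sketch, not the approach proposed in the paper.
import torch
import torch.nn.functional as F


class PrototypeBank:
    """Stores one prototype (mean feature) per class, added session by session."""

    def __init__(self, feat_dim: int):
        self.prototypes = torch.empty(0, feat_dim)  # (num_classes, feat_dim)

    def register_classes(self, feats: torch.Tensor, labels: torch.Tensor):
        """feats: (N, feat_dim) per-point features; labels are local to this
        session, and each unique value becomes a new class appended to the bank."""
        new_protos = [feats[labels == c].mean(dim=0) for c in labels.unique(sorted=True)]
        self.prototypes = torch.cat([self.prototypes, torch.stack(new_protos)], dim=0)

    def segment(self, feats: torch.Tensor) -> torch.Tensor:
        """Label each point with its nearest prototype (cosine similarity)."""
        sims = F.normalize(feats, dim=-1) @ F.normalize(self.prototypes, dim=-1).T
        return sims.argmax(dim=-1)  # (N,) predicted class ids


# Base session: many points covering classes 0-2.
bank = PrototypeBank(feat_dim=64)
bank.register_classes(torch.randn(3000, 64), torch.randint(0, 3, (3000,)))
# Incremental session: 5 support "shots" of 2048 points for one novel class.
bank.register_classes(torch.randn(5 * 2048, 64), torch.full((5 * 2048,), 0))
print(bank.segment(torch.randn(10, 64)))  # ids now range over 4 classes
```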
|
|
UIDAPLE: Unsupervised Incremental Domain Adaptation through Adaptive Prompt Learning
Samrat Mukherjee,
Tanuj Sur,
Saurish Seksaria,
Prof. Dr. Gemma Roig,
Dr. Subhasis Chaudhury,
Dr. Biplab Banerjee
Under review at International Conference on Acoustics, Speech, and Signal Processing, 2025
Continual learning in deep neural networks faces challenges such as catastrophic forgetting and shifting data distributions.
This paper addresses these challenges in the Unsupervised Incremental Domain Adaptation (UIDA) setting, where only the initial source domain is labeled and subsequent domains are not.
Existing methods often struggle to generalize and adapt across domains.
We propose UIDAPLE, a new approach that learns a single unified prompt shared across all domains, leveraging the CLIP foundation model to avoid treating each domain separately.
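As a rough illustration of shared prompt learning (a minimal sketch, not the UIDAPLE architecture), the snippet below prepends one set of learnable context vectors to every class-name embedding and keeps the encoders frozen, so the same prompt serves every incoming domain. The SharedPromptClassifier name, the stand-in transformer text encoder, and all dimensions are assumptions; a real system would reuse CLIP's image and text towers.

```python
# Minimal sketch of a single learnable prompt shared across domains.
# The text encoder here is a frozen stand-in, not the real CLIP tower,
# and this is not the UIDAPLE implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedPromptClassifier(nn.Module):
    def __init__(self, class_name_embeds: torch.Tensor, ctx_len: int = 4, dim: int = 512):
        super().__init__()
        # One prompt (context vectors) reused for every class and every domain.
        self.ctx = nn.Parameter(torch.randn(ctx_len, dim) * 0.02)
        self.register_buffer("class_embeds", class_name_embeds)  # (C, L, dim), frozen
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.text_encoder = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.text_encoder.parameters():
            p.requires_grad_(False)  # frozen stand-in for the text tower
        self.logit_scale = nn.Parameter(torch.tensor(100.0), requires_grad=False)

    def class_features(self) -> torch.Tensor:
        C = self.class_embeds.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(C, -1, -1)        # (C, ctx_len, dim)
        tokens = torch.cat([ctx, self.class_embeds], dim=1)  # prompt + class name
        return self.text_encoder(tokens).mean(dim=1)         # (C, dim)

    def forward(self, image_feats: torch.Tensor) -> torch.Tensor:
        img = F.normalize(image_feats, dim=-1)
        txt = F.normalize(self.class_features(), dim=-1)
        return self.logit_scale * img @ txt.T                # (B, C) logits


# Toy usage: 10 classes, class-name token embeddings of length 8.
model = SharedPromptClassifier(torch.randn(10, 8, 512))
logits = model(torch.randn(4, 512))  # image features from a frozen image encoder
print(logits.shape)  # torch.Size([4, 10])
```

During training only the shared context vectors would be updated, e.g. with torch.optim.Adam([model.ctx], lr=2e-3), which is what makes the prompt reusable across domains.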
|
|
Robust Prototypical Few‑Shot Organ Segmentation with Regularized Neural‑ODEs
Prashant Pandey*,
Mustafa Chasmai*,
Tanuj Sur,
Dr. Brejesh Lall
IEEE Transactions on Medical Imaging, 2023 (Impact Factor: 10.6)
Paper link | Code
Deep learning has greatly advanced semantic segmentation, but these methods typically require large annotated datasets. This has driven interest in Few-Shot Learning (FSL), particularly in medical imaging, where pixel-level annotations are costly.
We propose Regularized Prototypical Neural Ordinary Differential Equation (R-PNODE), which combines Neural-ODEs with cluster and consistency losses to improve Few-Shot Segmentation (FSS) of organs.
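The prototypical FSS recipe behind this line of work can be sketched compactly. The code below is a hedged illustration, not the R-PNODE implementation: the ODE block uses fixed-step Euler integration as a stand-in for a real solver, the cluster and consistency losses are omitted, and the tiny encoder and image sizes are assumptions. Features are refined by the ODE block, a foreground prototype is built by masked average pooling over the support, and the query is segmented by cosine similarity to that prototype.

```python
# Prototypical few-shot segmentation with a Neural-ODE-style feature refiner.
# Hedged sketch: fixed-step Euler integration stands in for an ODE solver,
# and R-PNODE's cluster/consistency losses are not shown.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ODEBlock(nn.Module):
    """Integrates dz/dt = f(z) with a few Euler steps (stand-in for odeint)."""

    def __init__(self, channels: int, steps: int = 4, dt: float = 0.25):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.Tanh(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.steps, self.dt = steps, dt

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        for _ in range(self.steps):
            z = z + self.dt * self.f(z)
        return z


def masked_prototype(feats: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Masked average pooling: feats (B, C, H, W), mask (B, 1, H, W) in {0, 1}."""
    mask = F.interpolate(mask, size=feats.shape[-2:], mode="nearest")
    return (feats * mask).sum(dim=(0, 2, 3)) / mask.sum().clamp(min=1.0)  # (C,)


encoder = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), ODEBlock(32))

support_img = torch.randn(1, 1, 64, 64)
support_mask = torch.randint(0, 2, (1, 1, 64, 64)).float()
query_img = torch.randn(1, 1, 64, 64)

proto = masked_prototype(encoder(support_img), support_mask)        # (32,)
query_feats = F.normalize(encoder(query_img), dim=1)                # (1, 32, H, W)
similarity = (query_feats * F.normalize(proto, dim=0)[None, :, None, None]).sum(dim=1)
pred_fg = (similarity > 0.5).float()                                # coarse foreground mask
print(pred_fg.shape)  # torch.Size([1, 64, 64])
```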
|
|
Adversarially Robust Prototypical Few‑shot Segmentation with Neural‑ODEs
Prashant Pandey*,
Aleti Vardhan*,
Mustafa Chasmai,
Tanuj Sur,
Dr. Brejesh Lall
International Conference on Medical Image Computing and Computer Assisted Intervention, 2022
Paper link | Code
Few-Shot Learning (FSL) methods are increasingly used in data-scarce settings, particularly in the medical field, where obtaining annotations is costly.
However, deep neural networks, especially in the few-shot regime, are vulnerable to adversarial attacks that can critically affect clinical decisions.
This paper presents a framework for enhancing the adversarial robustness of few-shot segmentation models in the medical domain.
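To make the threat model concrete, the snippet below shows a generic FGSM-style perturbation of a query image: the input is nudged in the direction that increases the segmentation loss, bounded by a small epsilon. This is a standard attack used purely for illustration, not the defense framework proposed in the paper; the stand-in segmentation network and epsilon value are assumptions.

```python
# Generic FGSM perturbation of a query image for a segmentation model.
# Illustrates the adversarial threat model only; not the paper's defense.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_attack(model: nn.Module, query: torch.Tensor, target_mask: torch.Tensor,
                epsilon: float = 8.0 / 255.0) -> torch.Tensor:
    """Returns an adversarial copy of `query` within an L-infinity ball."""
    query = query.clone().detach().requires_grad_(True)
    logits = model(query)                            # (B, num_classes, H, W)
    loss = F.cross_entropy(logits, target_mask)      # target_mask: (B, H, W) long
    loss.backward()
    adv = query + epsilon * query.grad.sign()        # step that increases the loss
    return adv.clamp(0.0, 1.0).detach()


# Toy usage with a stand-in segmentation network (2 classes: background/organ).
seg_net = nn.Conv2d(1, 2, kernel_size=3, padding=1)
clean = torch.rand(1, 1, 64, 64)
mask = torch.randint(0, 2, (1, 64, 64))
adversarial = fgsm_attack(seg_net, clean, mask)
print((adversarial - clean).abs().max())  # bounded by epsilon
```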
|
|
Mathematical Expressions in Software Engineering Artifacts
Tanuj Sur,
Aaditree Jaisswal,
Dr. Venkatesh Vinayakarao
CODS-COMAD: International Conference on Data Science and Management of Data, 2023
Paper link
Mathematical expressions are essential not only for numerical calculations but also for enhancing clarity in discussions and documentation.
They are present in various software engineering artifacts like source code, documentation, and bug reports.
Research shows that these expressions impact the accuracy of commit message generation tools.
To support future research, a dataset of bug reports with annotated mathematical expressions has been created and shared, along with a tool called MEDSEA to identify these expressions.
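As a toy illustration of the detection task, the snippet below flags LaTeX-delimited spans and simple operator-bearing expressions in bug-report text. It is a hypothetical regex-based sketch, not the MEDSEA tool; the pattern set and function name are assumptions.

```python
# Toy detector for mathematical expressions in bug-report text.
# Hypothetical regex sketch for illustration only; not the MEDSEA tool.
import re

MATH_PATTERNS = [
    re.compile(r"\$[^$]+\$"),                        # inline LaTeX: $O(n \log n)$
    re.compile(r"\\\([^)]*\\\)"),                    # \( ... \) delimiters
    re.compile(r"\b\w+(\s*[\^+\-*/=<>]\s*\w+)+\b"),  # crude infix expressions
]


def find_math_expressions(text: str) -> list[str]:
    """Return candidate mathematical expressions found in `text`."""
    matches = []
    for pattern in MATH_PATTERNS:
        matches.extend(m.group(0) for m in pattern.finditer(text))
    return matches


bug_report = "Sorting takes $O(n \\log n)$ but the loop runs n^2 + 3n times."
print(find_math_expressions(bug_report))
```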
|
Projects
Some of the projects I have been a part of are highlighted below.
|
|