Agnibh Dasgupta
AI Researcher · PhD
Representation Learning · Watermarking · Multimodality

Building robust, invariant representations for AI models

View Publications GitHub CV

At a glance

  • PhD candidate at the University of Nebraska Omaha.
  • Research interests:
    • Representation learning
    • Watermarking (vision & language)
    • Multimodality
  • Other Interests:
    • Explainability
    • LLM forensics

About

I'm a doctoral researcher in Information Science & Technology at the University of Nebraska Omaha. My research centers on robust representation learning. I study how models encode semantic structure that remains stable under perturbations.

My work operationalizes these representations for robust watermarking and for LLM provenance and forensics.

Selected Research

Invariant Representation Learning in LLMs for Model Attribution

Under review

A layer-wise analysis framework for identifying paraphrase-stable latent representations in LLMs, supporting semantic clustering and model attribution tasks.
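The core measurement behind such a layer-wise analysis can be sketched as follows. This is an illustrative toy, not the paper's method: the hidden states here are synthetic stand-ins for what a real LLM would return, and `layerwise_stability` simply scores per-layer cosine similarity between a sentence and its paraphrase.

```python
import numpy as np

def layerwise_stability(h_orig, h_para):
    """Per-layer cosine similarity between the hidden states of a sentence
    and its paraphrase. Inputs have shape [num_layers, hidden_dim]."""
    num = (h_orig * h_para).sum(axis=1)
    denom = np.linalg.norm(h_orig, axis=1) * np.linalg.norm(h_para, axis=1)
    return num / denom

# Synthetic hidden states (12 layers, 768 dims) standing in for a real
# model's per-layer outputs; the paraphrase is a small perturbation.
rng = np.random.default_rng(0)
base = rng.normal(size=(12, 768))
paraphrase = base + 0.1 * rng.normal(size=(12, 768))

scores = layerwise_stability(base, paraphrase)
stable_layer = int(np.argmax(scores))  # most paraphrase-stable layer
```

In a real pipeline the two inputs would come from a model's per-layer hidden states for an original and a paraphrased prompt; layers whose scores stay high across many paraphrase pairs are candidates for attribution features.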

TIACam: Text-Anchored Invariant Feature Learning with Auto-Augmentation for Camera-Robust Zero-Watermarking

IEEE/CVF Conference on Computer Vision and Pattern Recognition (in press)

TIACam is a text-anchored invariant feature learning framework for camera-robust zero-watermarking that embeds messages in a distortion-invariant feature space. Using a learnable auto-augmentor and cross-modal adversarial training, it achieves state-of-the-art watermark recovery under synthetic and real camera captures.
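The invariance objective behind this kind of training can be illustrated with a minimal sketch. Everything here is a toy stand-in, not the TIACam implementation: `augment` mimics a (non-learnable) camera-style distortion, `features` is a crude block-averaging encoder in place of a trained network, and the loss is simply the feature gap that an invariant-feature objective drives toward zero.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img, strength):
    """Toy camera-capture distortion: additive noise plus a global
    brightness shift, standing in for a learnable auto-augmentor."""
    noise = strength * rng.normal(size=img.shape)
    brightness = strength * rng.uniform(-0.5, 0.5)
    return np.clip(img + noise + brightness, 0.0, 1.0)

def features(img):
    """Placeholder encoder: 8x8 block-averaged intensities as a crude
    distortion-robust feature (a real system would use a trained CNN/ViT)."""
    h, w = img.shape
    return img.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3)).ravel()

def invariance_loss(img, strength):
    """L2 gap between clean and augmented features -- the quantity an
    invariant-feature objective minimizes during training."""
    return float(np.linalg.norm(features(img) - features(augment(img, strength))))

img = rng.uniform(size=(64, 64))
loss_weak = invariance_loss(img, 0.01)    # mild distortion
loss_strong = invariance_loss(img, 0.3)   # harsh distortion
```

The point of a *learnable* augmentor, as opposed to this fixed one, is to adversarially search for the distortions that maximize this gap, so the encoder is trained against the hardest camera-like perturbations.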

Paper

Robust Image Watermarking via Cross-Attention & Invariant Domain Learning

International Conference on Computational Science & Computational Intelligence, 2023

A watermark embedding and extraction method resilient to geometric and photometric attacks. It uses ViT-based cross-attention to align invariant-domain features for robust watermark decoding. The figure above shows an overview of our proposed framework.
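The decoding step can be sketched as plain scaled dot-product cross-attention, with watermark-bit queries attending over image patch features. This is a generic illustration of the mechanism, not the paper's architecture; the shapes (196 ViT-style patch tokens, 32 watermark bits, 64-dim features) are illustrative assumptions.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query row produces a
    convex combination of value rows, weighted by query-key similarity."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over patches
    return weights @ values

rng = np.random.default_rng(0)
patch_feats = rng.normal(size=(196, 64))  # 14x14 grid of patch features
bit_queries = rng.normal(size=(32, 64))   # one learned query per watermark bit
attended = cross_attention(bit_queries, patch_feats, patch_feats)
```

In a full decoder, each attended row would be passed through a small head to predict one watermark bit; because attention pools over all patches, the readout tolerates geometric misalignment better than fixed-position extraction.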

Paper · Code

Publications

Links to papers in press will be updated as they become available.

Contact

Email adg002@gmail.com

GitHub cent664

LinkedIn linkedin.com/in/cent664

Google Scholar Profile