Agnibh Dasgupta
AI Researcher · PhD
Representation Learning · Watermarking · Multimodality

Building robust, invariant representations for AI models

View Publications · GitHub · CV

At a glance

  • PhD in Computing & Information Science, University of Nebraska Omaha.
  • Research interests:
    • Representation learning
    • Watermarking (vision & language)
    • Multimodality
  • Other interests:
    • Explainability

About

I'm an AI researcher with a PhD in Computing & Information Science from the University of Nebraska Omaha. My research centers on robust representation learning: how models encode semantic structure that remains stable under perturbations.

My work operationalizes these representations for robust watermarking and LLM provenance.

Selected Research

Invariant Features in Language Models: Geometric Characterization and Model Attribution

Under review

We propose a local geometric framework for identifying invariant semantic subspaces in transformer-based language models. Using a contrastive generalized eigenvalue decomposition over semantic-preserving and semantic-changing perturbations, we localize layers where semantic meaning concentrates and validate these representations causally through hidden-state interventions. Invariant representations are further applied to zero-shot model attribution, achieving over 92% accuracy across base, fine-tuned, and distilled variants of 9 open-source LLMs spanning diverse architectures and parameter scales.
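As a rough illustration of the contrastive step (a minimal sketch on toy data; the covariance construction, the scipy solver, and the subspace size are my assumptions, not the paper's code), directions that vary strongly under meaning-changing perturbations but stay stable under paraphrase can be found with a generalized eigendecomposition:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
d = 64                                              # toy hidden-state dimension
H = rng.standard_normal((500, d))                   # hidden states of originals
H_pres = H + 0.05 * rng.standard_normal(H.shape)    # semantic-preserving pairs
H_chg = rng.standard_normal(H.shape)                # semantic-changing pairs

# Covariances of the perturbation displacements (ridge keeps them invertible).
C_pres = np.cov((H_pres - H).T) + 1e-6 * np.eye(d)
C_chg = np.cov((H_chg - H).T) + 1e-6 * np.eye(d)

# Generalized eigenproblem C_chg v = lambda C_pres v: a large lambda marks a
# direction that moves with meaning but not with paraphrase, so the top
# eigenvectors span a candidate invariant semantic subspace.
evals, evecs = eigh(C_chg, C_pres)
subspace = evecs[:, np.argsort(evals)[::-1][:8]]    # keep the top-8 directions
print(subspace.shape)                               # (64, 8)
```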

Paper · Code

TIACam: Text-Anchored Invariant Feature Learning with Auto-Augmentation for Camera-Robust Zero-Watermarking

IEEE/CVF Computer Vision and Pattern Recognition Conference (in press)

TIACam is a text-anchored invariant feature learning framework for camera-robust zero-watermarking that embeds messages in a distortion-invariant feature space. Using a learnable auto-augmentor and cross-modal adversarial training, it achieves state-of-the-art watermark recovery under synthetic and real camera captures.
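To make the zero-watermarking idea concrete (a hedged sketch under my own assumptions: TIACam's encoder is a learned, text-anchored network, which a fixed random projection stands in for here), the message is bound to invariant features rather than embedded in the pixels:

```python
import numpy as np

rng = np.random.default_rng(0)

def signature(image, proj):
    """Stand-in for a learned distortion-invariant encoder: project + binarize."""
    v = proj @ image.ravel()
    return (v > np.median(v)).astype(np.uint8)

proj = rng.standard_normal((256, 32 * 32))        # hypothetical 256-bit code
image = rng.random((32, 32))
message = rng.integers(0, 2, 256, dtype=np.uint8)

# Registration: XOR the feature signature with the message to get an
# ownership share; the image itself is never modified ("zero"-watermarking).
share = signature(image, proj) ^ message

# Verification: re-extract features from a (possibly distorted) capture and
# XOR with the stored share; invariant features return the original message.
captured = image + 0.01 * rng.standard_normal(image.shape)
recovered = signature(captured, proj) ^ share
print((recovered == message).mean())              # bit accuracy, ideally ~1.0
```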

Paper

Watermarking Language Models through Language Models

IEEE Transactions on Artificial Intelligence 2025

A prompt-based LLM watermarking framework that embeds detectable signals in model responses without modifying weights or training data. Watermark generation and detection are evaluated with instruction-tuned LLMs.
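One way to picture the prompt-based paradigm (a generic sketch, not the paper's protocol; the keyed token list and z-score test are illustrative stand-ins): a prompt asks the model to favor tokens from a secret list, and detection then needs only the key.

```python
import hashlib
import math

def green_list(key: str, vocab: list[str], frac: float = 0.5) -> set[str]:
    """Hypothetical keyed token list: a keyed hash marks ~frac of the vocab."""
    return {t for t in vocab
            if hashlib.sha256((key + t).encode()).digest()[0] < 256 * frac}

def z_score(text: str, green: set[str], frac: float = 0.5) -> float:
    """Enrichment test: a large z suggests the text carries the watermark."""
    toks = text.lower().split()
    hits = sum(t in green for t in toks)
    return (hits - frac * len(toks)) / math.sqrt(len(toks) * frac * (1 - frac))

vocab = ["the", "swift", "river", "quiet", "stone", "light", "wind", "dark"]
green = green_list("secret-key", vocab)
# The watermarking prompt would instruct the model to favor green words;
# detection then needs only the key, not the model or its weights.
print(z_score("the swift river carries light over quiet stone", green))
```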

Paper · Code

Robust Image Watermarking via Cross-Attention & Invariant Domain Learning

International Conf. on Computational Science & Computational Intelligence 2023

A watermark embedding and extraction method resilient to geometric and photometric attacks. ViT-based cross-attention aligns invariant-domain features for robust watermark decoding.
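A minimal sketch of the cross-attention idea (the toy sizes and decoding head are my assumptions): learned watermark-bit queries attend over image-patch features, so each bit can be decoded from whichever spatial features survive an attack.

```python
import torch
import torch.nn as nn

# Toy sizes; the real model's dimensions and head count are unknown to me.
d_model, n_bits, n_patches = 64, 32, 196

attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)
decode = nn.Linear(d_model, 1)

bit_queries = torch.randn(1, n_bits, d_model)     # one learned query per bit
patch_feats = torch.randn(1, n_patches, d_model)  # ViT patch features (stand-in)

# Cross-attention: bit queries gather evidence from all spatial positions,
# so decoding does not depend on any single (possibly attacked) location.
aligned, _ = attn(bit_queries, patch_feats, patch_feats)
bits = (decode(aligned).squeeze(-1) > 0).int()
print(bits.shape)                                 # torch.Size([1, 32])
```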

Paper · Code

Perspective Transformation Layer

International Conf. on Computational Science & Computational Intelligence 2022

A lightweight differentiable layer that learns perspective transformations within the network forward pass, improving robustness to viewpoint changes without hand-crafted augmentation pipelines. Evaluated on SVHN, Imagenette, and MNIST, outperforming prior state-of-the-art with minimal added computation.
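A minimal sketch of such a layer (assuming one learnable homography shared across the batch; the paper's parameterization may differ): a 3×3 perspective matrix warps the sampling grid, and `grid_sample` keeps the whole operation differentiable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerspectiveLayer(nn.Module):
    """Learnable 3x3 homography applied via a warped sampling grid."""
    def __init__(self):
        super().__init__()
        self.theta = nn.Parameter(torch.zeros(8))  # 8 free parameters

    def forward(self, x):
        n, _, h, w = x.shape
        # Identity at init; the last homography entry stays fixed at 1.
        M = torch.eye(3, device=x.device) + torch.cat(
            [self.theta, x.new_zeros(1)]).view(3, 3)
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=x.device),
            torch.linspace(-1, 1, w, device=x.device),
            indexing="ij",
        )
        pts = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(-1, 3)
        warped = pts @ M.T
        warped = warped[:, :2] / warped[:, 2:].clamp(min=1e-6)  # perspective divide
        grid = warped.view(1, h, w, 2).expand(n, h, w, 2)
        return F.grid_sample(x, grid, align_corners=True)

out = PerspectiveLayer()(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 3, 32, 32])
```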

Paper · Code

Publications

Links to papers in press will be updated as they become available.

Contact

Email adg002@gmail.com

GitHub cent664

LinkedIn linkedin.com/in/cent664

Google Scholar Profile