Agnibh Dasgupta
AI Researcher · PhD
Representation Learning · Watermarking · Multimodality

Building robust, invariant representations for AI models

View Publications GitHub CV

At a glance

  • Ph.D., Computing & Information Science, University of Nebraska Omaha (2026)
  • Research interests:
    • Representation learning
    • Watermarking
    • Multimodality
    • Provenance
  • Other interests:
    • Explainable AI

About

I am an AI researcher with a Ph.D. in Computing & Information Science from the University of Nebraska Omaha. My research investigates how neural networks encode semantic structure that remains stable under perturbation, spanning both the vision and language domains. I study how these invariant representations emerge, how they can be systematically learned or identified, and how they can be operationalized for robustness-critical applications.

My work has focused on robust watermarking and model provenance, both as downstream applications and as quantitative probes of representational stability. On the vision side, I have developed frameworks for semantically grounded invariant feature learning and camera-robust zero-watermarking, with work accepted at CVPR 2026. On the language side, I have designed black-box LLM watermarking systems published in IEEE Transactions on Artificial Intelligence, and investigated the geometric structure of invariant subspaces in pretrained LLMs for model attribution.

Selected Research

Invariant Features in Language Models: Geometric Characterization and Model Attribution

Under review

We propose a local geometric framework for identifying invariant semantic subspaces in transformer-based language models. Using a contrastive generalized eigenvalue decomposition over semantic-preserving and semantic-changing perturbations, we localize layers where semantic meaning concentrates and validate these representations causally through hidden-state interventions. Invariant representations are further applied to zero-shot model attribution, achieving over 92% accuracy across base, fine-tuned, and distilled variants of 9 open-source LLMs spanning diverse architectures and parameter scales.
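
As a rough illustration of the contrastive generalized eigenvalue step, here is a minimal numpy/scipy sketch; the covariance construction, regularization, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.linalg import eigh

def semantic_subspace(diff_pres, diff_chg, k=10, eps=1e-6):
    """Contrastive generalized eigendecomposition (illustrative sketch).

    diff_pres: (n, d) hidden-state differences under semantic-PRESERVING
               perturbations (e.g. paraphrases) at one layer.
    diff_chg:  (m, d) hidden-state differences under semantic-CHANGING
               perturbations at the same layer.
    Returns the top-k generalized eigenpairs: directions whose variance
    under meaning changes is large relative to paraphrase noise.
    """
    d = diff_pres.shape[1]
    C_pres = diff_pres.T @ diff_pres / len(diff_pres) + eps * np.eye(d)
    C_chg = diff_chg.T @ diff_chg / len(diff_chg)
    # Solve C_chg v = lambda * C_pres v; eigh returns ascending eigenvalues.
    w, V = eigh(C_chg, C_pres)
    return w[::-1][:k], V[:, ::-1][:, :k]

# Toy usage with random data standing in for real hidden states.
rng = np.random.default_rng(0)
evals, basis = semantic_subspace(rng.normal(size=(200, 64)),
                                 rng.normal(size=(200, 64)))
print(evals.shape, basis.shape)  # (10,) (64, 10)
```

Directions with large generalized eigenvalues vary under meaning changes but stay stable under paraphrase, which is the sense in which they are "semantic."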

Paper · Code

TIACam: Text-Anchored Invariant Feature Learning with Auto-Augmentation for Camera-Robust Zero-Watermarking

IEEE/CVF Computer Vision and Pattern Recognition Conference (in press)

TIACam is a text-anchored invariant feature learning framework for camera-robust zero-watermarking that embeds messages in a distortion-invariant feature space. Using a learnable auto-augmentor and cross-modal adversarial training, it achieves state-of-the-art watermark recovery under synthetic and real camera captures.
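
For readers unfamiliar with zero-watermarking, the sketch below shows the generic paradigm the paper builds on, not TIACam's actual pipeline: nothing is embedded in the image itself; binarized invariant features are XORed with the message to form a registered share, and the feature extractor here is stubbed with random data.

```python
import numpy as np

def make_master_share(feat, message_bits):
    """Zero-watermarking registration step (generic sketch): XOR
    binarized invariant features with the message. The image is
    never modified; the share is lodged with a trusted party."""
    feat_bits = (feat > np.median(feat)).astype(np.uint8)
    return feat_bits ^ message_bits

def recover_message(feat_suspect, master_share):
    """Verification: re-extract features from a suspect (e.g.
    camera-captured) image and XOR with the registered share."""
    feat_bits = (feat_suspect > np.median(feat_suspect)).astype(np.uint8)
    return feat_bits ^ master_share

rng = np.random.default_rng(1)
feat = rng.normal(size=256)                  # stand-in for a learned invariant feature
msg = rng.integers(0, 2, 256, dtype=np.uint8)
share = make_master_share(feat, msg)
noisy = feat + 0.05 * rng.normal(size=256)   # mild distortion, e.g. recapture
rec = recover_message(noisy, share)
print("bit accuracy:", (rec == msg).mean())
```

The more distortion-invariant the feature space, the higher the recovered bit accuracy under recapture, which is what the learned auto-augmentor targets.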

Paper

Watermarking Language Models through Language Models

IEEE Transactions on Artificial Intelligence 2025

A prompt-based LLM watermarking framework that embeds detectable signals in model responses without modifying weights or training data. Watermark generation and detection are evaluated using instruction-tuned LLMs.
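
A toy sketch of the black-box idea follows, assuming a keyed-lexicon signal purely for illustration; the paper's actual prompting strategy differs, and every name here is hypothetical.

```python
import hashlib
import re

def green_lexicon(secret, vocab, frac=0.5):
    """Key a 'green list' of words off a secret (illustrative only)."""
    def keyed(w):
        h = hashlib.sha256((secret + w).encode()).digest()
        return h[0] / 255.0 < frac
    return {w for w in vocab if keyed(w)}

def watermark_prompt(user_prompt, lexicon):
    """Prepend a hidden steering instruction; weights and data untouched."""
    hint = ", ".join(sorted(lexicon)[:20])
    return (f"When answering, naturally favor words such as: {hint}.\n\n"
            f"{user_prompt}")

def detect(text, lexicon):
    """Fraction of tokens drawn from the keyed lexicon; compared against
    the chance rate, this decides whether the text is watermarked."""
    toks = re.findall(r"[a-z']+", text.lower())
    return sum(t in lexicon for t in toks) / max(len(toks), 1)

vocab = ["luminous", "robust", "notably", "intricate", "vivid", "precise"]
lex = green_lexicon("my-secret", vocab)
print(detect("A robust and notably precise method.", lex))
```

Because both embedding and detection operate on text alone, the scheme applies to closed, API-only models.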

Paper · Code

Robust Image Watermarking via Cross-Attention & Invariant Domain Learning

International Conf. on Computational Science & Computational Intelligence 2023

A watermark embedding and extraction method resilient to geometric and photometric attacks. It uses ViT-based cross-attention to align invariant-domain features for robust watermark decoding.

Paper · Code

Perspective Transformation Layer

International Conf. on Computational Science & Computational Intelligence 2022

A lightweight differentiable layer that learns perspective transformations within the network forward pass, improving robustness to viewpoint changes without hand-crafted augmentation pipelines. Evaluated on SVHN, Imagenette, and MNIST, outperforming prior state-of-the-art with minimal added computation.
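
A minimal PyTorch sketch of the idea (dimensions, initialization, and parameterization are illustrative assumptions): an 8-parameter homography, initialized to the identity, warps the input through a differentiable sampling grid so the transform is learned by backpropagation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerspectiveLayer(nn.Module):
    """Toy differentiable perspective (homography) layer: 8 learnable
    parameters, identity at initialization, applied via grid_sample."""
    def __init__(self):
        super().__init__()
        # Offsets from the identity homography [1,0,0, 0,1,0, 0,0].
        self.theta = nn.Parameter(torch.zeros(8))

    def forward(self, x):                       # x: (B, C, H, W)
        B, _, H, W = x.shape
        h = self.theta + torch.tensor([1., 0, 0, 0, 1., 0, 0, 0])
        H_mat = torch.cat([h, torch.ones(1)]).view(3, 3)
        # Normalized sampling grid in [-1, 1].
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing="ij")
        pts = torch.stack([xs, ys, torch.ones_like(xs)], -1).reshape(-1, 3)
        warped = pts @ H_mat.T
        # Assumes the denominator stays positive near the identity.
        warped = warped[:, :2] / warped[:, 2:].clamp(min=1e-6)
        grid = warped.view(1, H, W, 2).expand(B, -1, -1, -1)
        return F.grid_sample(x, grid, align_corners=True)

x = torch.randn(2, 3, 32, 32)
print(PerspectiveLayer()(x).shape)  # torch.Size([2, 3, 32, 32])
```

Because the warp is part of the forward pass, the network can undo viewpoint distortion on its own rather than relying on hand-crafted augmentation.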

Paper · Code

Publications

Links to papers in press will be updated as they become available.

Contact

Email adg002@gmail.com

GitHub cent664

LinkedIn linkedin.com/in/cent664

Google Scholar Profile