Agnibh Dasgupta
AI Researcher · PhD Candidate (Artificial Intelligence)
Representation Learning · Watermarking · LLM Robustness

Building robust, invariant representations for AI models

I study how models encode semantic meaning that remains stable under perturbations. My work spans robust representation learning for image watermarking and auto-augmentation, as well as identifying invariant latent features in LLMs for attribution and forensics.

View Publications GitHub CV

At a glance

  • PhD candidate at the University of Nebraska Omaha, focusing on Artificial Intelligence
  • Research interests:
    • Invariant representation learning
    • Robust watermarking (vision & language)
    • LLM forensics
    • Multimodal deep learning
  • Other interests: reinforcement learning, multimodality in deep learning

About

I'm a doctoral researcher in Information Science & Technology at the University of Nebraska Omaha. My dissertation centers on invariant representation learning and its applications to robust image watermarking and LLM robustness.

Broadly, I design systems that remain stable under content-preserving transformations: geometric/photometric augmentations for images and lexical/structural paraphrases for text. I care about what an AI model knows versus how it encodes it.

Selected Research

Watermarking Language Models through Language Models

IEEE Transactions on Artificial Intelligence 2025

A prompt-based LLM watermarking framework that embeds detectable signals in model responses without modifying weights or training data. We evaluate watermark generation and detection using instruction-tuned LLMs. The figure above gives an overview of our prompting strategy.

PDF · GitHub

Robust Image Watermarking via Cross-Attention & Invariant Domain Learning

International Conference on Computational Science & Computational Intelligence 2023

A watermark embedding and extraction method resilient to geometric and photometric attacks. It uses ViT-based cross-attention to align invariant-domain features for robust watermark decoding. The figure above shows an overview of our proposed framework.

PDF · GitHub

Publications

Links to papers in press will be updated as they become available.

Contact

Email adg002@gmail.com

GitHub cent664

LinkedIn linkedin.com/in/cent664

Google Scholar profile