
FOA-Attack, a Targeted Transfer-Based Adversarial Attack Framework

Date

5 days ago

Organization

Mohamed bin Zayed University of Artificial Intelligence

Paper URL

2505.21494

Feature Optimal Alignment Attack (FOA-Attack) was jointly proposed in May 2025 by a research team from Nanyang Technological University, Mohamed bin Zayed University of Artificial Intelligence, and other institutions. The results were published in the paper "Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment", which has been accepted by NeurIPS 2025.

FOA-Attack is a targeted, transferable adversarial attack method based on optimal feature alignment. At the global level, it introduces a cosine-similarity loss that aligns the coarse-grained features of the adversarial example with those of the target sample. At the local level, exploiting the rich local representations in Transformer encoders, it uses clustering to extract compact local patterns and reduce redundant local features. Extensive experiments show that FOA-Attack outperforms state-of-the-art targeted adversarial attack methods and transfers substantially better to both open-source and closed-source MLLMs.
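The two alignment components described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the feature dimensions, the use of mean-pooled tokens as the "global" feature, the naive k-means routine, and the cluster count are all placeholders for whatever encoder and settings the paper actually uses.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def global_alignment_loss(adv_feat, target_feat):
    """Global loss: push the adversarial example's coarse-grained
    feature toward the target sample's feature (1 - cosine sim)."""
    return 1.0 - cosine_similarity(adv_feat, target_feat)

def cluster_local_tokens(tokens, k, iters=10, seed=0):
    """Naive k-means over local (patch) token features, used here to
    compress redundant local tokens into k compact cluster centroids."""
    rng = np.random.default_rng(seed)
    centroids = tokens[rng.choice(len(tokens), size=k, replace=False)]
    for _ in range(iters):
        # assign each token to its nearest centroid
        dists = np.linalg.norm(tokens[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # recompute each centroid from its members (skip empty clusters)
        for j in range(k):
            members = tokens[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

# Toy demo: 16 local tokens of dimension 8 for an adversarial image
# and a target image (random stand-ins for real encoder outputs).
rng = np.random.default_rng(1)
adv_tokens = rng.normal(size=(16, 8))
tgt_tokens = rng.normal(size=(16, 8))

# Global alignment on mean-pooled tokens (assumed global feature).
g_loss = global_alignment_loss(adv_tokens.mean(axis=0), tgt_tokens.mean(axis=0))

# Local compression: 16 redundant tokens -> 4 compact patterns.
adv_centroids = cluster_local_tokens(adv_tokens, k=4)
print(round(g_loss, 4), adv_centroids.shape)
```

In the actual method, the compact local patterns from both images would then be aligned with a dedicated local loss and combined with the global term; only the global cosine term and the clustering step are shown here.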

