﻿<?xml version="1.0" encoding="UTF-8"?>
<b:Sources SelectedStyle="" xmlns:b="http://schemas.openxmlformats.org/officeDocument/2006/bibliography"  xmlns="http://schemas.openxmlformats.org/officeDocument/2006/bibliography" >
<b:Source>
<b:Tag>sun.ea:scala:2025</b:Tag>
<b:SourceType>ArticleInAPeriodical</b:SourceType>
<b:Year>2025</b:Year>
<b:PeriodicalTitle>IEEE Transactions on Information Forensics and Security</b:PeriodicalTitle>
<b:Url>https://doi.org/10.1109/TIFS.2025.3629604</b:Url>
<b:Pages>1-1</b:Pages>
<b:Author>
<b:Author><b:NameList>
<b:Person><b:Last>Sun</b:Last><b:First>Siqi</b:First></b:Person>
<b:Person><b:Last>Brucker</b:Last><b:First>Achim</b:First><b:Middle>D</b:Middle></b:Person>
<b:Person><b:Last>Hu</b:Last><b:First>Jia</b:First></b:Person>
<b:Person><b:Last>Huang</b:Last><b:First>Xiaowei</b:First></b:Person>
<b:Person><b:Last>Ruan</b:Last><b:First>Wenjie</b:First></b:Person>
</b:NameList></b:Author>
</b:Author>
<b:Title>SCALA: Towards Imperceptible and Efficient Black-box Textual Adversarial Perturbations</b:Title>
<b:Comments>Deep learning models are intrinsically susceptible to textual adversarial attacks on social media, where perturbed text can trigger aberrant behaviours in victim models and threaten security and privacy. In this paper, we present a novel word-level attack called SCALA: a Synonym-based desCending And repLace-back Ascending mechanism. Our focus is on the efficient production of adversarial examples, with particular emphasis on minimizing human perceptibility while ensuring visual resemblance and semantic correctness. The merits of our attacking solution lie in being: (i) imperceptible, as it keeps a very low word perturbation rate based on the Hamming (L0-norm) distance, achieving heightened deceptiveness validated through human evaluations; (ii) efficient, as our tensor-based parallelization strategy ensures attacking efficiency compared with baselines; (iii) effective, as it surpasses seven state-of-the-art attacks on five target models in terms of reducing after-attack accuracy; (iv) practical, as the black-box score-based setting ensures that the adversary only needs to query target models for confidence scores; and (v) transferable, as our attack shows competitive transferability of the generated adversarial examples. We release our code for SCALA at https://github.com/TrustAI/SCALA.</b:Comments>
</b:Source>
</b:Sources>
