arxiv_id (string, length 10) | published (string, length 20) | titles (string, length 9-243) | authors (sequence, length 1-389) | abstract (string, length 96-3.09k) | categories (sequence, length 1-10) | selected (bool, 2 classes) |
---|---|---|---|---|---|---|
2402.09957 | 2024-02-15T14:08:08Z | On Designing Features for Condition Monitoring of Rotating Machines | [
"Seetaram Maurya",
"Nishchal K. Verma"
] | Various methods for designing input features have been proposed for fault
recognition in rotating machines using one-dimensional raw sensor data. The
available methods are complex, rely on empirical approaches, and may differ
depending on the condition monitoring data used. Therefore, this article
proposes a novel algorithm to design input features that unifies the feature
extraction process for different time-series sensor data. This new insight for
designing/extracting input features is obtained through the lens of histogram
theory. The proposed algorithm extracts discriminative input features, which
are suitable for classifiers ranging from simple classifiers to deep neural
network-based classifiers. The designed input features are given as input to
the classifier with end-to-end training in a single framework for machine condition recognition.
The proposed scheme has been validated through three real-time datasets: a)
acoustic dataset, b) CWRU vibration dataset, and c) IMS vibration dataset. The
real-time results and comparative study show the effectiveness of the proposed
scheme for the prediction of the machine's health states. | [
"cs.LG",
"eess.SP"
] | false |
2402.09970 | 2024-02-15T14:27:58Z | Accelerating Parallel Sampling of Diffusion Models | [
"Zhiwei Tang",
"Jiasheng Tang",
"Hao Luo",
"Fan Wang",
"Tsung-Hui Chang"
] | Diffusion models have emerged as state-of-the-art generative models for image
generation. However, sampling from diffusion models is usually time-consuming
due to the inherent autoregressive nature of their sampling process. In this
work, we propose a novel approach that accelerates the sampling of diffusion
models by parallelizing the autoregressive process. Specifically, we
reformulate the sampling process as solving a system of triangular nonlinear
equations through fixed-point iteration. With this innovative formulation, we
explore several systematic techniques to further reduce the iteration steps
required by the solving process. Applying these techniques, we introduce
ParaTAA, a universal and training-free parallel sampling algorithm that can
leverage extra computational and memory resources to increase the sampling
speed. Our experiments demonstrate that ParaTAA can decrease the inference
steps required by common sequential sampling algorithms such as DDIM and DDPM
by a factor of 4~14. Notably, when applying ParaTAA with 100-step DDIM
for Stable Diffusion, a widely used text-to-image diffusion model, it can
produce the same images as sequential sampling in only 7 inference steps. | [
"cs.LG",
"stat.ML"
] | false |
2402.09978 | 2024-02-15T14:41:55Z | Deep learning for the design of non-Hermitian topolectrical circuits | [
"Xi Chen",
"Jinyang Sun",
"Xiumei Wang",
"Hengxuan Jiang",
"Dandan Zhu",
"Xingping Zhou"
] | Non-Hermitian topological phases can produce some remarkable properties,
compared with their Hermitian counterparts, such as the breakdown of
conventional bulk-boundary correspondence and the non-Hermitian topological
edge mode. Here, we introduce several deep learning algorithms, based on the
multi-layer perceptron (MLP) and the convolutional neural network (CNN), to
predict the winding of eigenvalues of non-Hermitian Hamiltonians. Subsequently, we
use the smallest module of the periodic circuit as one unit to construct
high-dimensional circuit data features. Further, we use the Dense Convolutional
Network (DenseNet), a type of convolutional neural network that utilizes dense
connections between layers, to design a non-Hermitian topolectrical Chern
circuit, as the DenseNet algorithm is more suitable for processing
high-dimensional data. Our results demonstrate the effectiveness of the deep
learning network in capturing the global topological characteristics of a
non-Hermitian system based on training data. | [
"physics.app-ph",
"cs.LG"
] | false |
2402.09984 | 2024-02-15T14:49:28Z | Symmetry-Breaking Augmentations for Ad Hoc Teamwork | [
"Ravi Hammond",
"Dustin Craggs",
"Mingyu Guo",
"Jakob Foerster",
"Ian Reid"
] | In many collaborative settings, artificial intelligence (AI) agents must be
able to adapt to new teammates that use unknown or previously unobserved
strategies. While often simple for humans, this can be challenging for AI
agents. For example, if an AI agent learns to drive alongside others (a
training set) that only drive on one side of the road, it may struggle to adapt
this experience to coordinate with drivers on the opposite side, even if their
behaviours are simply flipped along the left-right symmetry. To address this, we
introduce symmetry-breaking augmentations (SBA), which increase diversity in
the behaviour of training teammates by applying a symmetry-flipping operation.
By learning a best-response to the augmented set of teammates, our agent is
exposed to a wider range of behavioural conventions, improving performance when
deployed with novel teammates. We demonstrate this experimentally in two
settings, and show that our approach improves upon previous ad hoc teamwork
results in the challenging card game Hanabi. We also propose a general metric
for estimating symmetry-dependency amongst a given set of policies. | [
"cs.LG",
"cs.AI"
] | false |
2402.10001 | 2024-02-15T15:06:33Z | Privacy Attacks in Decentralized Learning | [
"Abdellah El Mrini",
"Edwige Cyffers",
"Aurélien Bellet"
] | Decentralized Gradient Descent (D-GD) allows a set of users to perform
collaborative learning without sharing their data by iteratively averaging
local model updates with their neighbors in a network graph. The absence of
direct communication between non-neighbor nodes might lead to the belief that
users cannot infer precise information about the data of others. In this work,
we demonstrate the opposite, by proposing the first attack against D-GD that
enables a user (or set of users) to reconstruct the private data of other users
outside their immediate neighborhood. Our approach is based on a reconstruction
attack against the gossip averaging protocol, which we then extend to handle
the additional challenges raised by D-GD. We validate the effectiveness of our
attack on real graphs and datasets, showing that the number of users
compromised by a single or a handful of attackers is often surprisingly large.
We empirically investigate some of the factors that affect the performance of
the attack, namely the graph topology, the number of attackers, and their
position in the graph. | [
"cs.LG",
"cs.CR"
] | false |
2402.10046 | 2024-02-15T16:07:56Z | How Flawed is ECE? An Analysis via Logit Smoothing | [
"Muthu Chidambaram",
"Holden Lee",
"Colin McSwiggen",
"Semon Rezchikov"
] | Informally, a model is calibrated if its predictions are correct with a
probability that matches the confidence of the prediction. By far the most
common method in the literature for measuring calibration is the expected
calibration error (ECE). Recent work, however, has pointed out drawbacks of
ECE, such as the fact that it is discontinuous in the space of predictors. In
this work, we ask: how fundamental are these issues, and what are their impacts
on existing results? Towards this end, we completely characterize the
discontinuities of ECE with respect to general probability measures on Polish
spaces. We then use the nature of these discontinuities to motivate a novel
continuous, easily estimated miscalibration metric, which we term
Logit-Smoothed ECE (LS-ECE). By comparing the ECE and LS-ECE of pre-trained
image classification models, we show in initial experiments that binned ECE
closely tracks LS-ECE, indicating that the theoretical pathologies of ECE may
be avoidable in practice. | [
"cs.LG",
"math.PR",
"68T37 (Primary) 62-08, 60E05 (Secondary)"
] | false |
2402.10082 | 2024-02-15T16:42:04Z | FedRDF: A Robust and Dynamic Aggregation Function against Poisoning
Attacks in Federated Learning | [
"Enrique Mármol Campos",
"Aurora González Vidal",
"José Luis Hernández Ramos",
"Antonio Skarmeta"
] | Federated Learning (FL) represents a promising approach to typical privacy
concerns associated with centralized Machine Learning (ML) deployments. Despite
its well-known advantages, FL is vulnerable to security attacks such as
Byzantine behaviors and poisoning attacks, which can significantly degrade
model performance and hinder convergence. The effectiveness of existing
approaches to mitigate complex attacks, such as median, trimmed mean, or Krum
aggregation functions, has been only partially demonstrated in the case of
specific attacks. Our study introduces a novel robust aggregation mechanism
utilizing the Fourier Transform (FT), which is able to effectively handle
sophisticated attacks without prior knowledge of the number of attackers.
Employing this technique, weights generated by FL clients are projected
into the frequency domain to ascertain their density function, selecting the
one exhibiting the highest frequency. Consequently, malicious clients' weights
are excluded. Our proposed approach was tested against various model poisoning
attacks, demonstrating superior performance over state-of-the-art aggregation
methods. | [
"cs.LG",
"cs.CR"
] | false |
2402.10142 | 2024-02-15T17:48:58Z | Tracking Changing Probabilities via Dynamic Learners | [
"Omid Madani"
] | Consider a predictor, a learner, whose input is a stream of discrete items.
The predictor's task, at every time point, is probabilistic multiclass
prediction, i.e., to predict which item may occur next by outputting zero or
more candidate items, each with a probability, after which the actual item is
revealed and the predictor learns from this observation. To output
probabilities, the predictor keeps track of the proportions of the items it has
seen. The predictor has constant (limited) space and we seek efficient
prediction and update techniques: The stream is unbounded, the set of items is
unknown to the predictor and their totality can also grow unbounded. Moreover,
there is non-stationarity: the underlying frequencies of items may change,
substantially, from time to time. For instance, new items may start appearing
and a few currently frequent items may cease to occur again. The predictor,
being space-bounded, need only provide probabilities for those items with
(currently) sufficiently high frequency, i.e., the salient items. This problem
is motivated in the setting of prediction games, a self-supervised learning
regime where concepts serve as both the predictors and the predictands, and the
set of concepts grows over time, resulting in non-stationarities as new
concepts are generated and used. We develop moving average techniques designed
to respond to such non-stationarities in a timely manner, and explore their
properties. One is a simple technique based on queuing of count snapshots, and
another is a combination of queuing together with an extended version of sparse
EMA. The latter combination supports predictand-specific dynamic learning
rates. We find that this flexibility allows for a more accurate and timely
convergence. | [
"cs.LG",
"cs.AI",
"68T05",
"I.2.6"
] | false |
2402.10164 | 2024-02-15T18:09:41Z | Random features and polynomial rules | [
"Fabián Aguirre-López",
"Silvio Franz",
"Mauro Pastore"
] | Random features models play a distinguished role in the theory of deep
learning, describing the behavior of neural networks close to their
infinite-width limit. In this work, we present a thorough analysis of the
generalization performance of random features models for generic supervised
learning problems with Gaussian data. Our approach, built with tools from the
statistical mechanics of disordered systems, maps the random features model to
an equivalent polynomial model, and allows us to plot average generalization
curves as functions of the two main control parameters of the problem: the
number of random features $N$ and the size $P$ of the training set, both
assumed to scale as powers in the input dimension $D$. Our results extend the
case of proportional scaling between $N$, $P$ and $D$. They are in accordance
with rigorous bounds known for certain particular learning tasks and are in
quantitative agreement with numerical experiments performed over many orders of
magnitude of $N$ and $P$. We find good agreement also far from the asymptotic
limits where $D\to \infty$ and at least one of $P/D^K$ and $N/D^L$ remains
finite. | [
"cond-mat.dis-nn",
"cs.LG"
] | false |
2402.10177 | 2024-02-15T18:27:18Z | Large Scale Constrained Clustering With Reinforcement Learning | [
"Benedikt Schesch",
"Marco Caserta"
] | Given a network, allocating resources at the cluster level, rather than at each
node, enhances efficiency in resource allocation and usage. In this paper, we
study the problem of finding fully connected disjoint clusters to minimize the
intra-cluster distances and maximize the number of nodes assigned to the
clusters, while also ensuring that no two nodes within a cluster exceed a
threshold distance. While the problem can easily be formulated using a binary
linear model, traditional combinatorial optimization solvers struggle when
dealing with large-scale instances. We propose an approach to solve this
constrained clustering problem via reinforcement learning. Our method involves
training an agent to generate both feasible and (near) optimal solutions. The
agent learns problem-specific heuristics, tailored to the instances encountered
in this task. In the results section, we show that our algorithm finds
near-optimal solutions, even for large-scale instances. | [
"cs.LG",
"cs.AI"
] | false |
2402.10206 | 2024-02-15T18:58:18Z | Ising on the Graph: Task-specific Graph Subsampling via the Ising Model | [
"Maria Bånkestad",
"Jennifer Andersson",
"Sebastian Mair",
"Jens Sjölund"
] | Reducing a graph while preserving its overall structure is an important
problem with many applications. Typically, the reduction approaches either
remove edges (sparsification) or merge nodes (coarsening) in an unsupervised
way with no specific downstream task in mind. In this paper, we present an
approach for subsampling graph structures using an Ising model defined on
either the nodes or edges and learning the external magnetic field of the Ising
model using a graph neural network. Our approach is task-specific as it can
learn how to reduce a graph for a specific downstream task in an end-to-end
fashion. The utilized loss function of the task does not even have to be
differentiable. We showcase the versatility of our approach on three distinct
applications: image segmentation, 3D shape sparsification, and sparse
approximate matrix inverse determination. | [
"cs.LG",
"cs.AI"
] | false |
2402.10248 | 2024-02-15T11:09:22Z | A Data-Driven Supervised Machine Learning Approach to Estimating Global
Ambient Air Pollution Concentrations With Associated Prediction Intervals | [
"Liam J Berrisford",
"Hugo Barbosa",
"Ronaldo Menezes"
] | Global ambient air pollution, a transboundary challenge, is typically
addressed through interventions relying on data from spatially sparse and
heterogeneously placed monitoring stations. These stations often encounter
temporal data gaps due to issues such as power outages. In response, we have
developed a scalable, data-driven, supervised machine learning framework. This
model is designed to impute missing temporal and spatial measurements, thereby
generating a comprehensive dataset for pollutants including NO$_2$, O$_3$,
PM$_{10}$, PM$_{2.5}$, and SO$_2$. The dataset, with a fine granularity of
0.25$^{\circ}$ at hourly intervals and accompanied by prediction intervals for
each estimate, caters to a wide range of stakeholders relying on outdoor air
pollution data for downstream assessments. This enables more detailed studies.
Additionally, the model's performance across various geographical locations is
examined, providing insights and recommendations for strategic placement of
future monitoring stations to further enhance the model's accuracy. | [
"cs.LG",
"cs.AI"
] | false |
2402.10282 | 2024-02-15T19:18:47Z | Information Capacity Regret Bounds for Bandits with Mediator Feedback | [
"Khaled Eldowa",
"Nicolò Cesa-Bianchi",
"Alberto Maria Metelli",
"Marcello Restelli"
] | This work addresses the mediator feedback problem, a bandit game where the
decision set consists of a number of policies, each associated with a
probability distribution over a common space of outcomes. Upon choosing a
policy, the learner observes an outcome sampled from its distribution and
incurs the loss assigned to this outcome in the present round. We introduce the
policy set capacity as an information-theoretic measure for the complexity of
the policy set. Adopting the classical EXP4 algorithm, we provide new regret
bounds depending on the policy set capacity in both the adversarial and the
stochastic settings. For a selection of policy set families, we prove
nearly-matching lower bounds, scaling similarly with the capacity. We also
consider the case when the policies' distributions can vary between rounds,
thus addressing the related bandits with expert advice problem, for which we
improve upon prior results. Additionally, we prove a lower bound showing
that exploiting the similarity between the policies is not possible in general
under linear bandit feedback. Finally, for a full-information variant, we
provide a regret bound scaling with the information radius of the policy set. | [
"cs.LG",
"stat.ML"
] | false |
2402.10289 | 2024-02-15T19:37:39Z | Thompson Sampling in Partially Observable Contextual Bandits | [
"Hongju Park",
"Mohamad Kazem Shirani Faradonbeh"
] | Contextual bandits constitute a classical framework for decision-making under
uncertainty. In this setting, the goal is to learn the arms of highest reward
subject to contextual information, while the unknown reward parameters of each
arm need to be learned by experimenting with that specific arm. Accordingly, a
fundamental problem is that of balancing exploration (i.e., pulling different
arms to learn their parameters), versus exploitation (i.e., pulling the best
arms to gain reward). To study this problem, the existing literature mostly
considers perfectly observed contexts. However, the setting of partial context
observations remains unexplored to date, despite being theoretically more
general and practically more versatile. We study bandit policies for learning
to select optimal arms based on the data of observations, which are noisy
linear functions of the unobserved context vectors. Our theoretical analysis
shows that the Thompson sampling policy successfully balances exploration and
exploitation. Specifically, we establish the following: (i) regret bounds that
grow poly-logarithmically with time, (ii) square-root consistency of parameter
estimation, and (iii) scaling of the regret with other quantities including
dimensions and number of arms. Extensive numerical experiments with both real
and synthetic data are presented as well, corroborating the efficacy of
Thompson sampling. To establish the results, we introduce novel martingale
techniques and concentration inequalities to address partially observed
dependent random variables generated from unspecified distributions, and also
leverage problem-dependent information to sharpen probabilistic bounds for
time-varying suboptimality gaps. These techniques pave the road towards
studying other decision-making problems with contextual information as well as
partial observations. | [
"stat.ML",
"cs.LG"
] | false |
2402.10350 | 2024-02-15T22:43:02Z | Large Language Models for Forecasting and Anomaly Detection: A
Systematic Literature Review | [
"Jing Su",
"Chufeng Jiang",
"Xin Jin",
"Yuxin Qiao",
"Tingsong Xiao",
"Hongda Ma",
"Rong Wei",
"Zhi Jing",
"Jiajun Xu",
"Junhong Lin"
] | This systematic literature review comprehensively examines the application of
Large Language Models (LLMs) in forecasting and anomaly detection, highlighting
the current state of research, inherent challenges, and prospective future
directions. LLMs have demonstrated significant potential in parsing and
analyzing extensive datasets to identify patterns, predict future events, and
detect anomalous behavior across various domains. However, this review
identifies several critical challenges that impede their broader adoption and
effectiveness, including the reliance on vast historical datasets, issues with
generalizability across different contexts, the phenomenon of model
hallucinations, limitations within the models' knowledge boundaries, and the
substantial computational resources required. Through detailed analysis, this
review discusses potential solutions and strategies to overcome these
obstacles, such as integrating multimodal data, advancements in learning
methodologies, and emphasizing model explainability and computational
efficiency. Moreover, this review outlines critical trends that are likely to
shape the evolution of LLMs in these fields, including the push toward
real-time processing, the importance of sustainable modeling practices, and the
value of interdisciplinary collaboration. Conclusively, this review underscores
the transformative impact LLMs could have on forecasting and anomaly detection
while emphasizing the need for continuous innovation, ethical considerations,
and practical solutions to realize their full potential. | [
"cs.LG",
"cs.AI"
] | false |
2402.10972 | 2024-02-15T08:30:50Z | Modeling methodology for the accurate and prompt prediction of
symptomatic events in chronic diseases | [
"Josué Pagán",
"José L. Risco-Martín",
"José M. Moya",
"José L. Ayala"
] | Prediction of symptomatic crises in chronic diseases allows decisions to be taken
before the symptoms occur, such as the intake of drugs to avoid the symptoms or
the activation of medical alarms. The prediction horizon is in this case an
important parameter in order to fulfill the pharmacokinetics of medications, or
the time response of medical services. This paper presents a study about the
prediction limits of a chronic disease with symptomatic crises: the migraine.
For that purpose, this work develops a methodology to build predictive migraine
models and to improve these predictions beyond the limits of the initial
models. The maximum prediction horizon is analyzed, and its dependency on the
selected features is studied. A strategy for model selection is proposed to
tackle the trade-off between conservative but robust predictive models and
less accurate predictions with longer horizons. The obtained results
show a prediction horizon close to 40 minutes, which is in the time range of
the drug pharmacokinetics. Experiments have been performed in a realistic
scenario where input data have been acquired in an ambulatory clinical study by
the deployment of a non-intrusive Wireless Body Sensor Network. Our results
provide an effective methodology for the selection of the future horizon in the
development of prediction algorithms for diseases experiencing symptomatic
crises. | [
"q-bio.QM",
"cs.LG"
] | false |
2402.15521 | 2024-02-15T18:13:41Z | HKD-SHO: A hybrid smart home system based on knowledge-based and
data-driven services | [
"Mingming Qiu",
"Elie Najm",
"Rémi Sharrock",
"Bruno Traverson"
] | A smart home is realized by setting up various services. Several methods have
been proposed to create smart home services, which can be divided into
knowledge-based and data-driven approaches. However, knowledge-based approaches
usually require manual input from the inhabitant, which can be complicated if
the physical phenomena of the concerned environment states are complex, and the
inhabitant does not know how to adjust related actuators to achieve the target
values of the states monitored by services. Moreover, machine learning-based
data-driven approaches that we are interested in are like black boxes and
cannot show the inhabitant in which situations certain services proposed
certain actuators' states. To solve these problems, we propose a hybrid system
called HKD-SHO (Hybrid Knowledge-based and Data-driven services based Smart
HOme system), where knowledge-based and machine learning-based data-driven
services are profitably integrated. The principal advantage is that it inherits
the explicability of knowledge-based services and the dynamism of data-driven
services. We compare HKD-SHO with several systems for creating dynamic smart
home services, and the results show the better performance of HKD-SHO. | [
"cs.AI",
"cs.LG"
] | false |
2402.17771 | 2024-02-15T18:49:05Z | Utilizing Machine Learning for Signal Classification and Noise Reduction
in Amateur Radio | [
"Jimi Sanchez"
] | In the realm of amateur radio, the effective classification of signals and
the mitigation of noise play crucial roles in ensuring reliable communication.
Traditional methods for signal classification and noise reduction often rely on
manual intervention and predefined thresholds, which can be labor-intensive and
less adaptable to dynamic radio environments. In this paper, we explore the
application of machine learning techniques for signal classification and noise
reduction in amateur radio operations. We investigate the feasibility and
effectiveness of employing supervised and unsupervised learning algorithms to
automatically differentiate between desired signals and unwanted interference,
as well as to reduce the impact of noise on received transmissions.
Experimental results demonstrate the potential of machine learning approaches
to enhance the efficiency and robustness of amateur radio communication
systems, paving the way for more intelligent and adaptive radio solutions in
the amateur radio community. | [
"eess.SP",
"cs.LG"
] | false |
2403.05559 | 2024-02-15T14:12:38Z | Improving Cognitive Diagnosis Models with Adaptive Relational Graph
Neural Networks | [
"Pengyang Shao",
"Chen Gao",
"Lei Chen",
"Yonghui Yang",
"Kun Zhang",
"Meng Wang"
] | Cognitive Diagnosis (CD) algorithms receive growing research interest in
intelligent education. Typically, these CD algorithms assist students by
inferring their abilities (i.e., their proficiency levels on various knowledge
concepts). The proficiency levels can enable further targeted skill training
and personalized exercise recommendations, thereby promoting students' learning
efficiency in online education. Recently, researchers have found that building
and incorporating a student-exercise bipartite graph is beneficial for
enhancing diagnostic performance. However, there are still limitations in their
studies. On one hand, researchers overlook the heterogeneity within edges,
where there can be both correct and incorrect answers. On the other hand, they
disregard the uncertainty within edges, e.g., a correct answer can indicate
true mastery or fortunate guessing. To address the limitations, we propose
Adaptive Semantic-aware Graph-based Cognitive Diagnosis model (ASG-CD), which
introduces a novel and effective way to leverage bipartite graph information in
CD. Specifically, we first map students, exercises, and knowledge concepts into
a latent representation space and combine these latent representations to
obtain student abilities and exercise difficulties. After that, we propose a
Semantic-aware Graph Neural Network Layer to address edge heterogeneity. This
layer splits the original bipartite graph into two subgraphs according to edge
semantics, and aggregates information based on these two subgraphs separately.
To mitigate the impact of edge uncertainties, we propose an Adaptive Edge
Differentiation Layer that dynamically differentiates edges, followed by
keeping reliable edges and filtering out uncertain edges. Extensive experiments
on three real-world datasets have demonstrated the effectiveness of ASG-CD. | [
"cs.CY",
"cs.LG"
] | false |
2403.15394 | 2024-02-15T14:56:00Z | "Model Cards for Model Reporting" in 2024: Reclassifying Category of
Ethical Considerations in Terms of Trustworthiness and Risk Management | [
"DeBrae Kennedy-Mayo",
"Jake Gord"
] | In 2019, the paper entitled "Model Cards for Model Reporting" introduced a
new tool for documenting model performance and encouraged the practice of
transparent reporting for a defined list of categories. One of the categories
detailed in that paper is ethical considerations, which includes the
subcategories of data, human life, mitigations, risks and harms, and use cases.
We propose to reclassify this category in the original model card due to the
recent maturing of the field known as trustworthy AI, a term which analyzes
whether the algorithmic properties of the model indicate that the AI system is
deserving of trust from its stakeholders. In our examination of trustworthy AI,
we highlight three respected organizations - the European Commission's
High-Level Expert Group on AI, the OECD, and the U.S.-based NIST - that have
written guidelines on various aspects of trustworthy AI. These recent
publications converge on numerous characteristics of the term, including
accountability, explainability, fairness, privacy, reliability, robustness,
safety, security, and transparency, while recognizing that the implementation
of trustworthy AI varies by context. Our reclassification of the original
model-card category known as ethical considerations involves a two-step
process: 1) adding a new category known as trustworthiness, where the
subcategories will be derived from the discussion of trustworthy AI in our
paper, and 2) maintaining the subcategories of ethical considerations under a
renamed category known as risk environment and risk management, a title which
we believe better captures today's understanding of the essence of these
topics. We hope that this reclassification will further the goals of the
original paper and continue to prompt those releasing trained models to
accompany these models with documentation that will assist in the evaluation of
their algorithmic properties. | [
"cs.CY",
"cs.LG"
] | false |
2402.09657 | 2024-02-15T01:50:46Z | Digital versus Analog Transmissions for Federated Learning over Wireless
Networks | [
"Jiacheng Yao",
"Wei Xu",
"Zhaohui Yang",
"Xiaohu You",
"Mehdi Bennis",
"H. Vincent Poor"
] | In this paper, we quantitatively compare these two effective communication
schemes, i.e., digital and analog ones, for wireless federated learning (FL)
over resource-constrained networks, highlighting their essential differences as
well as their respective application scenarios. We first examine both digital
and analog transmission methods, together with a unified and fair comparison
scheme under practical constraints. A universal convergence analysis under
various imperfections is established for FL performance evaluation in wireless
networks. These analytical results reveal that the fundamental difference
between the two paradigms lies in whether communication and computation are
jointly designed or not. The digital schemes decouple the communication design
from specific FL tasks, making it difficult to support simultaneous uplink
transmission of massive devices with limited bandwidth. In contrast, the analog
communication allows over-the-air computation (AirComp), thus achieving
efficient spectrum utilization. However, computation-oriented analog
transmission reduces power efficiency, and its performance is sensitive to
computational errors. Finally, numerical simulations are conducted to verify
these theoretical observations. | [
"cs.IT",
"cs.LG",
"cs.NI",
"math.IT"
] | false |
2402.09695 | 2024-02-15T04:08:49Z | Reward Poisoning Attack Against Offline Reinforcement Learning | [
"Yinglun Xu",
"Rohan Gumaste",
"Gagandeep Singh"
] | We study the problem of reward poisoning attacks against general offline
reinforcement learning with deep neural networks for function approximation. We
consider a black-box threat model where the attacker is completely oblivious to
the learning algorithm and its budget is limited by constraining both the
amount of corruption at each data point, and the total perturbation. We propose
an attack strategy called `policy contrast attack'. The high-level idea is to
make some low-performing policies appear as high-performing while making
high-performing policies appear as low-performing. To the best of our
knowledge, we propose the first black-box reward poisoning attack in the
general offline RL setting. We provide theoretical insights on the attack
design and empirically show that our attack is efficient against current
state-of-the-art offline RL algorithms in different kinds of learning datasets. | [
"cs.LG",
"cs.AI",
"cs.CR"
] | false |
2402.09710 | 2024-02-15T05:06:53Z | Preserving Data Privacy for ML-driven Applications in Open Radio Access
Networks | [
"Pranshav Gajjar",
"Azuka Chiejina",
"Vijay K. Shah"
] | Deep learning offers a promising solution to improve spectrum access
techniques by utilizing data-driven approaches to manage and share limited
spectrum resources for emerging applications. For several of these
applications, the sensitive wireless data (such as spectrograms) are stored in
a shared database or multistakeholder cloud environment and are therefore prone
to privacy leaks. This paper aims to address such privacy concerns by examining
the representative case study of shared database scenarios in 5G Open Radio
Access Network (O-RAN) networks where we have a shared database within the
near-real-time (near-RT) RAN intelligent controller. We focus on securing the
data that can be used by machine learning (ML) models for spectrum sharing and
interference mitigation applications without compromising the model and network
performances. The underlying idea is to (i) leverage a shuffling-based
learnable encryption technique to encrypt the data, and then (ii)
employ a custom Vision Transformer (ViT) as the trained ML model that is
capable of performing accurate inferences on such encrypted data. The paper
offers a thorough analysis and comparisons with analogous convolutional neural
networks (CNN) as well as deeper architectures (such as ResNet-50) as
baselines. Our experiments showcase that the proposed approach significantly
outperforms the baseline CNN with an improvement of 24.5% and 23.9% for the
percent accuracy and F1-Score respectively when operated on encrypted data.
Though the deeper ResNet-50 architecture is slightly more accurate, with an
increase of 4.4%, the proposed approach reduces the number of parameters by
99.32% and thus improves the prediction time by nearly 60%. | [
"cs.CR",
"cs.LG",
"cs.NI"
] | false |
2402.09715 | 2024-02-15T05:19:53Z | DPBalance: Efficient and Fair Privacy Budget Scheduling for Federated
Learning as a Service | [
"Yu Liu",
"Zibo Wang",
"Yifei Zhu",
"Chen Chen"
] | Federated learning (FL) has emerged as a prevalent distributed machine
learning scheme that enables collaborative model training without aggregating
raw data. Cloud service providers further embrace Federated Learning as a
Service (FLaaS), allowing data analysts to execute their FL training pipelines
over differentially-protected data. Due to the intrinsic properties of
differential privacy, the enforced privacy level on data blocks can be viewed
as a privacy budget that requires careful scheduling to cater to diverse
training pipelines. Existing privacy budget scheduling studies prioritize
either efficiency or fairness individually. In this paper, we propose
DPBalance, a novel privacy budget scheduling mechanism that jointly optimizes
both efficiency and fairness. We first develop a comprehensive utility function
incorporating data analyst-level dominant shares and FL-specific performance
metrics. A sequential allocation mechanism is then designed using the Lagrange
multiplier method and effective greedy heuristics. We theoretically prove that
DPBalance satisfies Pareto Efficiency, Sharing Incentive, Envy-Freeness, and
Weak Strategy Proofness. We also theoretically prove the existence of a
fairness-efficiency tradeoff in privacy budgeting. Extensive experiments
demonstrate that DPBalance outperforms state-of-the-art solutions, achieving an
average efficiency improvement of $1.44\times \sim 3.49 \times$, and an average
fairness improvement of $1.37\times \sim 24.32 \times$. | [
"cs.DC",
"cs.CR",
"cs.LG"
] | false |
2402.09735 | 2024-02-15T06:22:50Z | DFORM: Diffeomorphic vector field alignment for assessing dynamics
across learned models | [
"Ruiqi Chen",
"Giacomo Vedovati",
"Todd Braver",
"ShiNung Ching"
] | Dynamical system models such as Recurrent Neural Networks (RNNs) have become
increasingly popular as hypothesis-generating tools in scientific research.
Evaluating the dynamics in such networks is key to understanding their learned
generative mechanisms. However, comparison of learned dynamics across models is
challenging due to their inherent nonlinearity and because a priori there is no
enforced equivalence of their coordinate systems. Here, we propose the DFORM
(Diffeomorphic vector field alignment for comparing dynamics across learned
models) framework. DFORM learns a nonlinear coordinate transformation which
provides a continuous, maximally one-to-one mapping between the trajectories of
learned models, thus approximating a diffeomorphism between them. The mismatch
between DFORM-transformed vector fields defines the orbital similarity between
two models, thus providing a generalization of the concepts of smooth orbital
and topological equivalence. As an example, we apply DFORM to models trained on
a canonical neuroscience task, showing that learned dynamics may be
functionally similar, despite overt differences in attractor landscapes. | [
"cs.LG",
"cs.SY",
"eess.SY",
"q-bio.NC"
] | false |
2402.09754 | 2024-02-15T07:08:11Z | Robust SVD Made Easy: A fast and reliable algorithm for large-scale data
analysis | [
"Sangil Han",
"Kyoowon Kim",
"Sungkyu Jung"
] | The singular value decomposition (SVD) is a crucial tool in machine learning
and statistical data analysis. However, it is highly susceptible to outliers in
the data matrix. Existing robust SVD algorithms often sacrifice speed for
robustness or fail in the presence of only a few outliers. This study
introduces an efficient algorithm, called Spherically Normalized SVD, for
robust SVD approximation that is highly insensitive to outliers,
computationally scalable, and provides accurate approximations of singular
vectors. The proposed algorithm achieves remarkable speed by utilizing only two
applications of a standard reduced-rank SVD algorithm to appropriately scaled
data, significantly outperforming competing algorithms in computation times. To
assess the robustness of the approximated singular vectors and their subspaces
against data contamination, we introduce new notions of breakdown points for
matrix-valued input, including row-wise, column-wise, and block-wise breakdown
points. Theoretical and empirical analyses demonstrate that our algorithm
exhibits higher breakdown points compared to standard SVD and its
modifications. We empirically validate the effectiveness of our approach in
applications such as robust low-rank approximation and robust principal
component analysis of high-dimensional microarray datasets. Overall, our study
presents a highly efficient and robust solution for SVD approximation that
overcomes the limitations of existing algorithms in the presence of outliers. | [
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
] | false |
2402.09761 | 2024-02-15T07:23:34Z | A Framework For Gait-Based User Demography Estimation Using Inertial
Sensors | [
"Chinmay Prakash Swami"
] | Human gait has been shown to provide crucial motion cues for various
applications. Recognizing patterns in human gait has been widely adopted in
various application areas such as security, virtual reality gaming, medical
rehabilitation, and ailment identification. Furthermore, wearable inertial
sensors have been widely used for not only recording gait but also to predict
users' demography. Machine Learning techniques such as deep learning, combined
with inertial sensor signals, have shown promising results in recognizing
patterns in human gait and estimating users' demography. However, the black-box
nature of such deep learning models hinders the researchers from uncovering the
reasons behind the model's predictions. Therefore, we propose leveraging deep
learning and Layer-Wise Relevance Propagation (LRP) to identify the important
variables that play a vital role in identifying the users' demography such as
age and gender. To assess the efficacy of this approach we train a deep neural
network model on a large sensor-based gait dataset consisting of 745 subjects
to identify users' age and gender. Using LRP we identify the variables relevant
for characterizing the gait patterns. Thus, we enable interpretation of
non-linear ML models which are experts in identifying the users' demography
based on inertial signals. We believe this approach can not only provide
clinicians information about the gait parameters relevant to age and gender but
also can be expanded to analyze and diagnose gait disorders. | [
"cs.HC",
"cs.LG",
"eess.SP"
] | false |
2402.09766 | 2024-02-15T07:35:52Z | From Variability to Stability: Advancing RecSys Benchmarking Practices | [
"Valeriy Shevchenko",
"Nikita Belousov",
"Alexey Vasilev",
"Vladimir Zholobov",
"Artyom Sosedka",
"Natalia Semenova",
"Anna Volodkevich",
"Andrey Savchenko",
"Alexey Zaytsev"
] | In the rapidly evolving domain of Recommender Systems (RecSys), new
algorithms frequently claim state-of-the-art performance based on evaluations
over a limited set of arbitrarily selected datasets. However, this approach may
fail to holistically reflect their effectiveness due to the significant impact
of dataset characteristics on algorithm performance. Addressing this
deficiency, this paper introduces a novel benchmarking methodology to
facilitate a fair and robust comparison of RecSys algorithms, thereby advancing
evaluation practices. By utilizing a diverse set of $30$ open datasets,
including two introduced in this work, and evaluating $11$ collaborative
filtering algorithms across $9$ metrics, we critically examine the influence of
dataset characteristics on algorithm performance. We further investigate the
feasibility of aggregating outcomes from multiple datasets into a unified
ranking. Through rigorous experimental analysis, we validate the reliability of
our methodology under the variability of datasets, offering a benchmarking
strategy that balances quality and computational demands. This methodology
enables a fair yet effective means of evaluating RecSys algorithms, providing
valuable guidance for future research endeavors. | [
"cs.IR",
"cs.AI",
"cs.LG"
] | false |
2402.09796 | 2024-02-15T08:51:49Z | Closed-form Filtering for Non-linear Systems | [
"Théophile Cantelobre",
"Carlo Ciliberto",
"Benjamin Guedj",
"Alessandro Rudi"
] | Sequential Bayesian Filtering aims to estimate the current state distribution
of a Hidden Markov Model, given the past observations. The problem is
well-known to be intractable for most application domains, except in notable
cases such as the tabular setting or for linear dynamical systems with Gaussian
noise. In this work, we propose a new class of filters based on Gaussian PSD
Models, which offer several advantages in terms of density approximation and
computational efficiency. We show that filtering can be efficiently performed
in closed form when transitions and observations are Gaussian PSD Models. When
the transition and observations are approximated by Gaussian PSD Models, we
show that our proposed estimator enjoys strong theoretical guarantees, with
estimation error that depends on the quality of the approximation and is
adaptive to the regularity of the transition probabilities. In particular, we
identify regimes in which our proposed filter attains a TV $\epsilon$-error
with memory and computational complexity of $O(\epsilon^{-1})$ and
$O(\epsilon^{-3/2})$ respectively, including the offline learning step, in
contrast to the $O(\epsilon^{-2})$ complexity of sampling methods such as
particle filtering. | [
"stat.ML",
"cs.LG",
"cs.RO"
] | false |
2402.09807 | 2024-02-15T09:13:59Z | Two trust region type algorithms for solving nonconvex-strongly concave
minimax problems | [
"Tongliang Yao",
"Zi Xu"
] | In this paper, we propose a Minimax Trust Region (MINIMAX-TR) algorithm and a
Minimax Trust Region Algorithm with Contractions and Expansions (MINIMAX-TRACE)
algorithm for solving nonconvex-strongly concave minimax problems. Both
algorithms can find an $(\epsilon, \sqrt{\epsilon})$-second order stationary
point (SSP) within $\mathcal{O}(\epsilon^{-1.5})$ iterations, which matches the
best known iteration complexity. | [
"math.OC",
"cs.LG",
"stat.ML",
"90C47, 90C26, 90C30"
] | false |
2402.09821 | 2024-02-15T09:36:36Z | Diffusion Models for Audio Restoration | [
"Jean-Marie Lemercier",
"Julius Richter",
"Simon Welker",
"Eloi Moliner",
"Vesa Välimäki",
"Timo Gerkmann"
] | With the development of audio playback devices and fast data transmission,
the demand for high sound quality is rising, for both entertainment and
communications. In this quest for better sound quality, challenges emerge from
distortions and interferences originating at the recording side or caused by an
imperfect transmission pipeline. To address this problem, audio restoration
methods aim to recover clean sound signals from the corrupted input data. We
present here audio restoration algorithms based on diffusion models, with a
focus on speech enhancement and music restoration tasks. Traditional
approaches, often grounded in handcrafted rules and statistical heuristics,
have shaped our understanding of audio signals. In the past decades, there has
been a notable shift towards data-driven methods that exploit the modeling
capabilities of deep neural networks (DNNs). Deep generative models, and among
them diffusion models, have emerged as powerful techniques for learning complex
data distributions. However, relying solely on DNN-based learning approaches
carries the risk of reducing interpretability, particularly when employing
end-to-end models. Nonetheless, data-driven approaches allow more flexibility
in comparison to statistical model-based frameworks whose performance depends
on distributional and statistical assumptions that can be difficult to
guarantee. Here, we aim to show that diffusion models can combine the best of
both worlds and offer the opportunity to design audio restoration algorithms
with a good degree of interpretability and a remarkable performance in terms of
sound quality. | [
"eess.AS",
"cs.LG",
"cs.SD"
] | false |
2402.09830 | 2024-02-15T09:48:20Z | Utilizing GANs for Fraud Detection: Model Training with Synthetic
Transaction Data | [
"Mengran Zhu",
"Yulu Gong",
"Yafei Xiang",
"Hanyi Yu",
"Shuning Huo"
] | Anomaly detection is a critical challenge across various research domains,
aiming to identify instances that deviate from normal data distributions. This
paper explores the application of Generative Adversarial Networks (GANs) in
fraud detection, comparing their advantages with traditional methods. GANs, a
type of Artificial Neural Network (ANN), have shown promise in modeling complex
data distributions, making them effective tools for anomaly detection. The
paper systematically describes the principles of GANs and their derivative
models, emphasizing their application in fraud detection across different
datasets. By building a collection of adversarial verification graphs, we
will effectively prevent fraud caused by bots or automated systems and ensure
that the users in the transaction are real. The objective of the experiment is
to design and implement a fake face verification code and fraud detection
system based on the Generative Adversarial Network (GAN) algorithm to enhance the
security of the transaction process. The study demonstrates the potential of
GANs in enhancing transaction security through deep learning techniques. | [
"cs.LG",
"cs.AI",
"cs.CE"
] | false |
2402.09846 | 2024-02-15T10:05:18Z | A Deep Learning Approach to Radar-based QPE | [
"Ting-Shuo Yo",
"Shih-Hao Su",
"Jung-Lien Chu",
"Chiao-Wei Chang",
"Hung-Chi Kuo"
] | In this study, we propose a volume-to-point framework for quantitative
precipitation estimation (QPE) based on the Quantitative Precipitation
Estimation and Segregation Using Multiple Sensor (QPESUMS) Mosaic Radar data
set. With a data volume consisting of the time series of gridded radar
reflectivities over the Taiwan area, we used machine learning algorithms to
establish a statistical model for QPE in weather stations. The model extracts
spatial and temporal features from the input data volume and then associates
these features with the location-specific precipitations. In contrast to QPE
methods based on the Z-R relation, we leverage the machine learning algorithms
to automatically detect the evolution and movement of weather systems and
associate these patterns to a location with specific topographic attributes.
Specifically, we evaluated this framework with the hourly precipitation data of
45 weather stations in Taipei during 2013-2016. In comparison to the
operational QPE scheme used by the Central Weather Bureau, the volume-to-point
framework performed comparably well in general cases and excelled in detecting
heavy-rainfall events. By using the current results as the reference benchmark,
the proposed method can integrate the heterogeneous data sources and
potentially improve the forecast in extreme precipitation scenarios. | [
"physics.ao-ph",
"cs.LG",
"eess.SP"
] | false |
2402.09941 | 2024-02-15T13:41:23Z | FedLion: Faster Adaptive Federated Optimization with Fewer Communication | [
"Zhiwei Tang",
"Tsung-Hui Chang"
] | In Federated Learning (FL), a framework to train machine learning models
across distributed data, well-known algorithms like FedAvg tend to have slow
convergence rates, resulting in high communication costs during training. To
address this challenge, we introduce FedLion, an adaptive federated
optimization algorithm that seamlessly incorporates key elements from the
recently proposed centralized adaptive algorithm, Lion (Chen et al. 2023), into
the FL framework. Through comprehensive evaluations on two widely adopted FL
benchmarks, we demonstrate that FedLion outperforms previous state-of-the-art
adaptive algorithms, including FAFED (Wu et al. 2023) and FedDA. Moreover,
thanks to the use of signed gradients in local training, FedLion substantially
reduces data transmission requirements during uplink communication when
compared to existing adaptive algorithms, further reducing communication costs.
Last but not least, this work also includes a novel theoretical analysis,
showcasing that FedLion attains a faster convergence rate than established FL
algorithms like FedAvg. | [
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
2402.09992 | 2024-02-15T14:55:38Z | Risk-Sensitive Soft Actor-Critic for Robust Deep Reinforcement Learning
under Distribution Shifts | [
"Tobias Enders",
"James Harrison",
"Maximilian Schiffer"
] | We study the robustness of deep reinforcement learning algorithms against
distribution shifts within contextual multi-stage stochastic combinatorial
optimization problems from the operations research domain. In this context,
risk-sensitive algorithms promise to learn robust policies. While this field is
of general interest to the reinforcement learning community, most studies
up-to-date focus on theoretical results rather than real-world performance.
With this work, we aim to bridge this gap by formally deriving a novel
risk-sensitive deep reinforcement learning algorithm while providing numerical
evidence for its efficacy. Specifically, we introduce discrete Soft
Actor-Critic for the entropic risk measure by deriving a version of the Bellman
equation for the respective Q-values. We establish a corresponding policy
improvement result and infer a practical algorithm. We introduce an environment
that represents typical contextual multi-stage stochastic combinatorial
optimization problems and perform numerical experiments to empirically validate
our algorithm's robustness against realistic distribution shifts, without
compromising performance on the training distribution. We show that our
algorithm is superior to risk-neutral Soft Actor-Critic as well as to two
benchmark approaches for robust deep reinforcement learning. Thereby, we
provide the first structured analysis on the robustness of reinforcement
learning under distribution shifts in the realm of contextual multi-stage
stochastic combinatorial optimization problems. | [
"cs.LG",
"cs.SY",
"eess.SY"
] | false |
2402.10028 | 2024-02-15T15:48:55Z | Diffusion Models Meet Contextual Bandits with Large Action Spaces | [
"Imad Aouali"
] | Efficient exploration is a key challenge in contextual bandits due to the
large size of their action space, where uninformed exploration can result in
computational and statistical inefficiencies. Fortunately, the rewards of
actions are often correlated and this can be leveraged to explore them
efficiently. In this work, we capture such correlations using pre-trained
diffusion models; upon which we design diffusion Thompson sampling (dTS). Both
theoretical and algorithmic foundations are developed for dTS, and empirical
evaluation also shows its favorable performance. | [
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
2402.10036 | 2024-02-15T15:59:59Z | Predictive Linear Online Tracking for Unknown Targets | [
"Anastasios Tsiamis",
"Aren Karapetyan",
"Yueshan Li",
"Efe C. Balta",
"John Lygeros"
] | In this paper, we study the problem of online tracking in linear control
systems, where the objective is to follow a moving target. Unlike classical
tracking control, the target is unknown, non-stationary, and its state is
revealed sequentially, thus, fitting the framework of online non-stochastic
control. We consider the case of quadratic costs and propose a new algorithm,
called predictive linear online tracking (PLOT). The algorithm uses recursive
least squares with exponential forgetting to learn a time-varying dynamic model
of the target. The learned model is used in the optimal policy under the
framework of receding horizon control. We show the dynamic regret of PLOT
scales with $\mathcal{O}(\sqrt{TV_T})$, where $V_T$ is the total variation of
the target dynamics and $T$ is the time horizon. Unlike prior work, our
theoretical results hold for non-stationary targets. We implement PLOT on a
real quadrotor and provide open-source software, thus, showcasing one of the
first successful applications of online control methods on real hardware. | [
"eess.SY",
"cs.LG",
"cs.SY",
"math.OC"
] | false |
2402.10115 | 2024-02-15T17:10:27Z | Generating Visual Stimuli from EEG Recordings using Transformer-encoder
based EEG encoder and GAN | [
"Rahul Mishra",
"Arnav Bhavsar"
] | In this study, we tackle a modern research challenge within the field of
perceptual brain decoding, which revolves around synthesizing images from EEG
signals using an adversarial deep learning framework. The specific objective is
to recreate images belonging to various object categories by leveraging EEG
recordings obtained while subjects view those images. To achieve this, we
employ a Transformer-encoder based EEG encoder to produce EEG encodings, which
serve as inputs to the generator component of the GAN network. Alongside the
adversarial loss, we also incorporate perceptual loss to enhance the quality of
the generated images. | [
"cs.AI",
"cs.LG",
"eess.SP",
"q-bio.NC"
] | false |
2402.10135 | 2024-02-15T17:38:32Z | Benchmarking federated strategies in Peer-to-Peer Federated learning for
biomedical data | [
"Jose L. Salmeron",
"Irina Arévalo",
"Antonio Ruiz-Celma"
] | The increasing requirements for data protection and privacy have attracted a
huge research interest on distributed artificial intelligence and specifically
on federated learning, an emerging machine learning approach that allows the
construction of a model between several participants who hold their own private
data. In the initial proposal of federated learning the architecture was
centralised and the aggregation was done with federated averaging, meaning that
a central server will orchestrate the federation using the most straightforward
averaging strategy. This research is focused on testing different federated
strategies in a peer-to-peer environment. The authors propose various
aggregation strategies for federated learning, including weighted averaging
aggregation, using different factors and strategies based on participant
contribution. The strategies are tested with varying data sizes to identify the
most robust ones. This research tests the strategies with several biomedical
datasets and the results of the experiments show that the accuracy-based
weighted average outperforms the classical federated averaging method. | [
"cs.LG",
"cs.AI",
"cs.DC"
] | false |
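
The aggregation strategies compared in the abstract above include accuracy-weighted averaging; the snippet below is a generic sketch of that kind of weighted parameter averaging, with made-up layer names and scores, and is not the authors' implementation.

```python
import numpy as np

def weighted_federated_average(client_weights, client_scores):
    """Aggregate client model parameters with weights proportional to a
    per-client score (e.g. local validation accuracy or dataset size).
    `client_weights` is a list of dicts mapping layer name -> numpy array."""
    scores = np.asarray(client_scores, dtype=float)
    coeffs = scores / scores.sum()                      # normalise to a convex combination
    aggregated = {}
    for name in client_weights[0]:
        aggregated[name] = sum(c * w[name] for c, w in zip(coeffs, client_weights))
    return aggregated

# Toy usage: three peers sharing a single-layer model, accuracy-based weighting
clients = [{"w": np.ones(4) * i} for i in (1.0, 2.0, 3.0)]
accuracies = [0.6, 0.7, 0.9]
print(weighted_federated_average(clients, accuracies)["w"])
```
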
2402.10145 | 2024-02-15T17:49:50Z | A chaotic maps-based privacy-preserving distributed deep learning for
incomplete and Non-IID datasets | [
"Irina Arévalo",
"Jose L. Salmeron"
] | Federated Learning is a machine learning approach that enables the training
of a deep learning model among several participants with sensitive data that
wish to share their own knowledge without compromising the privacy of their
data. In this research, the authors employ a secured Federated Learning method
with an additional layer of privacy and propose a method for addressing the
non-IID challenge. Moreover, differential privacy is compared with
chaotic-based encryption as a layer of privacy. The experimental approach
assesses the performance of the federated deep learning model with differential
privacy using both IID and non-IID data. In each experiment, the Federated
Learning process improves the average performance metrics of the deep neural
network, even in the case of non-IID data. | [
"cs.LG",
"cs.CR",
"cs.DC"
] | false |
2402.10186 | 2024-02-15T18:41:35Z | Self-consistent Validation for Machine Learning Electronic Structure | [
"Gengyuan Hu",
"Gengchen Wei",
"Zekun Lou",
"Philip H. S. Torr",
"Wanli Ouyang",
"Han-sen Zhong",
"Chen Lin"
] | Machine learning has emerged as a significant approach to efficiently tackle
electronic structure problems. Despite its potential, there is little guarantee
that the model will generalize to unseen data, which hinders its application in
real-world scenarios. To address this issue, a technique has been proposed to
estimate the accuracy of the predictions. This method integrates machine
learning with self-consistent field methods to achieve both low validation cost
and interpretability. This, in turn, enables exploration of the model's
ability with active learning and instills confidence in its integration into
real-world studies. | [
"cs.LG",
"physics.chem-ph",
"physics.comp-ph"
] | false |
2402.10211 | 2024-02-15T18:59:43Z | Hierarchical State Space Models for Continuous Sequence-to-Sequence
Modeling | [
"Raunaq Bhirangi",
"Chenyu Wang",
"Venkatesh Pattabiraman",
"Carmel Majidi",
"Abhinav Gupta",
"Tess Hellebrekers",
"Lerrel Pinto"
] | Reasoning from sequences of raw sensory data is a ubiquitous problem across
fields ranging from medical devices to robotics. These problems often involve
using long sequences of raw sensor data (e.g. magnetometers, piezoresistors) to
predict sequences of desirable physical quantities (e.g. force, inertial
measurements). While classical approaches are powerful for locally-linear
prediction problems, they often fall short when using real-world sensors. These
sensors are typically non-linear, are affected by extraneous variables (e.g.
vibration), and exhibit data-dependent drift. For many problems, the prediction
task is exacerbated by small labeled datasets since obtaining ground-truth
labels requires expensive equipment. In this work, we present Hierarchical
State-Space Models (HiSS), a conceptually simple, new technique for continuous
sequential prediction. HiSS stacks structured state-space models on top of each
other to create a temporal hierarchy. Across six real-world sensor datasets,
from tactile-based state prediction to accelerometer-based inertial
measurement, HiSS outperforms state-of-the-art sequence models such as causal
Transformers, LSTMs, S4, and Mamba by at least 23% on MSE. Our experiments
further indicate that HiSS demonstrates efficient scaling to smaller datasets
and is compatible with existing data-filtering techniques. Code, datasets and
videos can be found on https://hiss-csp.github.io. | [
"cs.LG",
"cs.RO",
"eess.SP"
] | true |
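
To make the hierarchy described in the HiSS abstract concrete, the sketch below chunks a raw sensor stream, summarises each chunk with a low-level sequence model, and predicts from the chunk summaries with a high-level model. GRUs stand in for the structured state-space blocks used in the paper, and all sizes are placeholders; this is an illustrative sketch, not the authors' architecture.

```python
import torch
import torch.nn as nn

class HierarchicalSeqModel(nn.Module):
    """Two-level temporal hierarchy: a low-level model summarises fixed-size
    chunks of the raw stream; a high-level model maps chunk summaries to
    per-chunk predictions."""
    def __init__(self, in_dim, hid_dim, out_dim, chunk=10):
        super().__init__()
        self.chunk = chunk
        self.low = nn.GRU(in_dim, hid_dim, batch_first=True)
        self.high = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, out_dim)

    def forward(self, x):                        # x: (batch, time, in_dim)
        b, t, d = x.shape
        x = x[:, : t - t % self.chunk]           # drop the ragged tail
        x = x.reshape(b * (x.shape[1] // self.chunk), self.chunk, d)
        _, h_low = self.low(x)                   # one summary per chunk
        summaries = h_low.squeeze(0).reshape(b, -1, h_low.shape[-1])
        out, _ = self.high(summaries)            # one prediction per chunk
        return self.head(out)

model = HierarchicalSeqModel(in_dim=6, hid_dim=32, out_dim=3)
print(model(torch.randn(2, 200, 6)).shape)       # -> torch.Size([2, 20, 3])
```
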
2402.10310 | 2024-02-15T20:21:40Z | Interpretable Generative Adversarial Imitation Learning | [
"Wenliang Liu",
"Danyang Li",
"Erfan Aasi",
"Roberto Tron",
"Calin Belta"
] | Imitation learning methods have demonstrated considerable success in teaching
autonomous systems complex tasks through expert demonstrations. However, a
limitation of these methods is their lack of interpretability, particularly in
understanding the specific task the learning agent aims to accomplish. In this
paper, we propose a novel imitation learning method that combines Signal
Temporal Logic (STL) inference and control synthesis, enabling the explicit
representation of the task as an STL formula. This approach not only provides a
clear understanding of the task but also allows for the incorporation of human
knowledge and adaptation to new scenarios through manual adjustments of the STL
formulae. Additionally, we employ a Generative Adversarial Network
(GAN)-inspired training approach for both the inference and the control policy,
effectively narrowing the gap between the expert and learned policies. The
effectiveness of our algorithm is demonstrated through two case studies,
showcasing its practical applicability and adaptability. | [
"cs.LG",
"cs.SY",
"eess.SY"
] | false |
2402.10974 | 2024-02-15T14:39:58Z | On the Cross-Dataset Generalization of Machine Learning for Network
Intrusion Detection | [
"Marco Cantone",
"Claudio Marrocco",
"Alessandro Bria"
] | Network Intrusion Detection Systems (NIDS) are a fundamental tool in
cybersecurity. Their ability to generalize across diverse networks is a
critical factor in their effectiveness and a prerequisite for real-world
applications. In this study, we conduct a comprehensive analysis on the
generalization of machine-learning-based NIDS through an extensive
experimentation in a cross-dataset framework. We employ four machine learning
classifiers and utilize four datasets acquired from different networks:
CIC-IDS-2017, CSE-CIC-IDS2018, LycoS-IDS2017, and LycoS-Unicas-IDS2018.
Notably, the last dataset is a novel contribution, where we apply corrections
based on LycoS-IDS2017 to the well-known CSE-CIC-IDS2018 dataset. The results
show nearly perfect classification performance when the models are trained and
tested on the same dataset. However, when training and testing the models in a
cross-dataset fashion, the classification accuracy is largely commensurate with
random chance except for a few combinations of attacks and datasets. We employ
data visualization techniques in order to provide valuable insights on the
patterns in the data. Our analysis unveils the presence of anomalies in the
data that directly hinder the classifiers' capability to generalize the learned
knowledge to new scenarios. This study enhances our comprehension of the
generalization capabilities of machine-learning-based NIDS, highlighting the
significance of acknowledging data heterogeneity. | [
"cs.CR",
"cs.LG",
"cs.NI"
] | false |
2402.10981 | 2024-02-15T22:51:27Z | Stuck-at Faults in ReRAM Neuromorphic Circuit Array and their Correction
through Machine Learning | [
"Vedant Sawal",
"Hiu Yung Wong"
] | In this paper, we study the inference accuracy of the Resistive Random Access
Memory (ReRAM) neuromorphic circuit due to stuck-at faults (stuck-on,
stuck-off, and stuck at a certain resistive value). A simulation framework
using Python is used to perform supervised machine learning (neural network
with 3 hidden layers, 1 input layer, and 1 output layer) of handwritten digits
and construct a corresponding fully analog neuromorphic circuit (4 synaptic
arrays) simulated by Spectre. A generic 45nm Process Development Kit (PDK) was
used. We study the difference in the inference accuracy degradation due to
stuck-on and stuck-off defects. Various defect patterns are studied including
circular, ring, row, column, and circular-complement defects. It is found that
stuck-on and stuck-off defects have a similar effect on inference accuracy.
However, it is also found that if there is a spatial defect variation across
the columns, the inference accuracy may be degraded significantly. We also
propose a machine learning (ML) strategy to recover the inference accuracy
lost due to stuck-at faults. The inference accuracy is improved from 48%
to 85% in a defective neuromorphic circuit. | [
"cs.AR",
"cs.LG",
"cs.NE"
] | false |
2402.10982 | 2024-02-15T23:08:18Z | mshw, a forecasting library to predict short-term electricity demand
based on multiple seasonal Holt-Winters | [
"Oscar Trull",
"J. Carlos García-Díaz",
"Angel Peiró-Signes"
] | Transmission system operators have a growing need for more accurate
forecasting of electricity demand. Current electricity systems largely require
demand forecasting so that the electricity market can set electricity prices
and schedule production units. The companies that are part of the electrical
system use proprietary software to obtain predictions, based on time series and
prediction tools, whether statistical or based on artificial intelligence.
However, the most common form of prediction is based on hybrid models that use
both technologies. In any case, such software has a complicated structure, a
large number of associated variables, and a high computational load, and the
predictions it offers are not much better than those of simple models. In this
paper we present a MATLAB toolbox created for the prediction of electrical
demand. The toolbox implements multiple seasonal Holt-Winters exponential
smoothing models and neural network models. The models used include the use of
discrete interval mobile seasonalities (DIMS) to improve forecasting on special
days. Additionally, the results of its application in various electrical
systems in Europe are shown. The use of
this library opens a new avenue of research for the use of models with discrete
and complex seasonalities in other fields of application. | [
"cs.LG",
"econ.EM",
"stat.AP"
] | false |
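
For reference, the building block behind the toolbox described above is Holt-Winters exponential smoothing; the sketch below implements the standard single-seasonality additive recursion, whereas the mshw library extends it to multiple nested seasonalities and discrete interval moving seasonalities. The smoothing constants and demo series are illustrative only.

```python
import numpy as np

def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2, horizon=24):
    """Additive Holt-Winters with one seasonal cycle of length m.
    Returns an out-of-sample forecast for the next `horizon` steps."""
    y = np.asarray(y, dtype=float)
    level, trend = y[:m].mean(), 0.0
    season = y[:m] - level                          # crude seasonal initialisation
    for t in range(m, len(y)):
        s_prev = season[t % m]
        new_level = alpha * (y[t] - s_prev) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - new_level) + (1 - gamma) * s_prev
        level = new_level
    steps = np.arange(1, horizon + 1)
    return level + steps * trend + season[(len(y) + steps - 1) % m]

# Synthetic hourly demand with a daily cycle (m = 24): forecast the next day
t = np.arange(24 * 14)
demand = 100 + 0.01 * t + 10 * np.sin(2 * np.pi * t / 24)
print(holt_winters_additive(demand, m=24)[:5])
```
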
2403.03222 | 2024-02-15T01:52:44Z | Knowledge-guided EEG Representation Learning | [
"Aditya Kommineni",
"Kleanthis Avramidis",
"Richard Leahy",
"Shrikanth Narayanan"
] | Self-supervised learning has produced impressive results in multimedia
domains of audio, vision and speech. This paradigm is equally, if not more,
relevant for the domain of biosignals, owing to the scarcity of labelled data
in such scenarios. The ability to leverage large-scale unlabelled data to learn
robust representations could help improve the performance of numerous inference
tasks on biosignals. Given the inherent domain differences between multimedia
modalities and biosignals, the established objectives for self-supervised
learning may not translate well to this domain. Hence, there is an unmet need
to adapt these methods to biosignal analysis. In this work we propose a
self-supervised model for EEG, which provides robust performance and remarkable
parameter efficiency by using state space-based deep learning architecture. We
also propose a novel knowledge-guided pre-training objective that accounts for
the idiosyncrasies of the EEG signal. The results indicate improved embedding
representation learning and downstream performance compared to prior works on
exemplary tasks. Also, the proposed objective significantly reduces the amount
of pre-training data required to obtain performance equivalent to prior works. | [
"cs.LG",
"cs.AI",
"eess.SP"
] | false |
2403.18923 | 2024-02-15T20:27:33Z | Nature-Guided Cognitive Evolution for Predicting Dissolved Oxygen
Concentrations in North Temperate Lakes | [
"Runlong Yu",
"Robert Ladwig",
"Xiang Xu",
"Peijun Zhu",
"Paul C. Hanson",
"Yiqun Xie",
"Xiaowei Jia"
] | Predicting dissolved oxygen (DO) concentrations in north temperate lakes
requires a comprehensive study of phenological patterns across various
ecosystems, which highlights the significance of selecting phenological
features and feature interactions. Process-based models are limited by partial
process knowledge or oversimplified feature representations, while machine
learning models face challenges in efficiently selecting relevant feature
interactions for different lake types and tasks, especially under the
infrequent nature of DO data collection. In this paper, we propose a
Nature-Guided Cognitive Evolution (NGCE) strategy, which represents a
multi-level fusion of adaptive learning with natural processes. Specifically,
we utilize metabolic process-based models to generate simulated DO labels.
Using these simulated labels, we implement a multi-population cognitive
evolutionary search, where models, mirroring natural organisms, adaptively
evolve to select relevant feature interactions within populations for different
lake types and tasks. These models not only undergo crossover and mutation
within their own populations but also, albeit infrequently, engage in
inter-population crossover. The second stage involves refining these
models by retraining them with real observed labels. We have tested the
performance of our NGCE strategy in predicting daily DO concentrations across a
wide range of lakes in the Midwest, USA. These lakes, varying in size, depth,
and trophic status, represent a broad spectrum of north temperate lakes. Our
findings demonstrate that NGCE not only produces accurate predictions with few
observed labels but also, through gene maps of models, reveals sophisticated
phenological patterns of different lakes. | [
"cs.NE",
"cs.AI",
"cs.LG"
] | false |
2402.10065 | 2024-02-15T16:30:55Z | How Much Does Each Datapoint Leak Your Privacy? Quantifying the
Per-datum Membership Leakage | [
"Achraf Azize",
"Debabrota Basu"
] | We study the per-datum Membership Inference Attacks (MIAs), where an attacker
aims to infer whether a fixed target datum has been included in the input
dataset of an algorithm and thus, violates privacy. First, we define the
membership leakage of a datum as the advantage of the optimal adversary
targeting to identify it. Then, we quantify the per-datum membership leakage
for the empirical mean, and show that it depends on the Mahalanobis distance
between the target datum and the data-generating distribution. We further
assess the effect of two privacy defences, i.e. adding Gaussian noise and
sub-sampling. We quantify exactly how both of them decrease the per-datum
membership leakage. Our analysis builds on a novel proof technique that
combines an Edgeworth expansion of the likelihood ratio test and a
Lindeberg-Feller central limit theorem. Our analysis connects the existing
likelihood ratio and scalar product attacks, and also justifies different
canary selection strategies used in the privacy auditing literature. Finally,
our experiments demonstrate the impacts of the leakage score, the sub-sampling
ratio and the noise scale on the per-datum membership leakage as indicated by
the theory. | [
"cs.LG",
"cs.CR",
"math.ST",
"stat.ML",
"stat.TH"
] | false |
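
The abstract above ties per-datum leakage for the empirical mean to the Mahalanobis distance between the target datum and the data-generating distribution; the snippet below only computes that distance from samples as a rough exposure proxy, and does not reproduce the paper's exact leakage formula.

```python
import numpy as np

def mahalanobis_leakage_score(target, data):
    """Mahalanobis distance of a target datum from the empirical distribution
    of `data` (n_samples x dim); larger values suggest the datum is easier to
    single out in a membership inference attack on the empirical mean."""
    mu = data.mean(axis=0)
    cov = np.cov(data, rowvar=False) + 1e-6 * np.eye(data.shape[1])  # regularise
    diff = target - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 5))
typical, outlier = rng.normal(size=5), rng.normal(size=5) + 4.0
print(mahalanobis_leakage_score(typical, data), mahalanobis_leakage_score(outlier, data))
```
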
2402.10127 | 2024-02-15T17:31:19Z | Nonlinear spiked covariance matrices and signal propagation in deep
neural networks | [
"Zhichao Wang",
"Denny Wu",
"Zhou Fan"
] | Many recent works have studied the eigenvalue spectrum of the Conjugate
Kernel (CK) defined by the nonlinear feature map of a feedforward neural
network. However, existing results only establish weak convergence of the
empirical eigenvalue distribution, and fall short of providing precise
quantitative characterizations of the ''spike'' eigenvalues and eigenvectors
that often capture the low-dimensional signal structure of the learning
problem. In this work, we characterize these signal eigenvalues and
eigenvectors for a nonlinear version of the spiked covariance model, including
the CK as a special case. Using this general result, we give a quantitative
description of how spiked eigenstructure in the input data propagates through
the hidden layers of a neural network with random weights. As a second
application, we study a simple regime of representation learning where the
weight matrix develops a rank-one signal component over training and
characterize the alignment of the target function with the spike eigenvector of
the CK on test data. | [
"stat.ML",
"cs.LG",
"math.PR",
"math.ST",
"stat.TH"
] | false |
2402.10168 | 2024-02-15T18:11:02Z | DeepSRGM -- Sequence Classification and Ranking in Indian Classical
Music with Deep Learning | [
"Sathwik Tejaswi Madhusudhan",
"Girish Chowdhary"
] | A vital aspect of Indian Classical Music (ICM) is Raga, which serves as a
melodic framework for compositions and improvisations alike. Raga Recognition
is an important music information retrieval task in ICM as it can aid numerous
downstream applications ranging from music recommendations to organizing huge
music collections. In this work, we propose a deep learning based approach to
Raga recognition. Our approach employs efficient preprocessing and learns
temporal sequences in music data using Long Short Term Memory based Recurrent
Neural Networks (LSTM-RNN). We train and test the network on smaller sequences
sampled from the original audio while the final inference is performed on the
audio as a whole. Our method achieves an accuracy of 88.1% and 97% during
inference on the CompMusic Carnatic dataset and its 10-Raga subset,
respectively, making it the state-of-the-art for the Raga recognition task. Our
approach also enables sequence ranking which aids us in retrieving melodic
patterns from a given music database that are closely related to the presented
query sequence. | [
"cs.SD",
"cs.AI",
"cs.IR",
"cs.LG",
"eess.AS"
] | false |
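
A minimal LSTM sequence classifier of the kind the DeepSRGM abstract describes is sketched below; the vocabulary, layer sizes, and subsequence length are placeholders rather than the authors' configuration. At inference time, predictions over subsequences of a recording would typically be aggregated (e.g. averaged) to label the whole audio.

```python
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    """Embedding + LSTM + linear head over tokenised melodic subsequences."""
    def __init__(self, vocab_size=256, embed_dim=64, hidden=128, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, tokens):                   # tokens: (batch, seq_len) int64
        emb = self.embed(tokens)
        _, (h, _) = self.lstm(emb)
        return self.fc(h[-1])                    # class logits

model = SequenceClassifier()
logits = model(torch.randint(0, 256, (8, 120)))  # 8 subsequences of length 120
print(logits.shape)                              # torch.Size([8, 10])
```
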
2402.10252 | 2024-02-15T16:16:30Z | Online Control of Linear Systems with Unbounded and Degenerate Noise | [
"Kaito Ito",
"Taira Tsuchiya"
] | This paper investigates the problem of controlling a linear system under
possibly unbounded and degenerate noise with unknown cost functions, known as
an online control problem. In contrast to the existing work, which assumes the
boundedness of noise, we reveal that for convex costs, an $
\widetilde{O}(\sqrt{T}) $ regret bound can be achieved even for unbounded
noise, where $ T $ denotes the time horizon. Moreover, when the costs are
strongly convex, we establish an $ O({\rm poly} (\log T)) $ regret bound
without the assumption that noise covariance is non-degenerate, which has been
required in the literature. The key ingredient in removing the rank assumption
on noise is a system transformation associated with the noise covariance. This
simultaneously enables the parameter reduction of an online control algorithm. | [
"eess.SY",
"cs.LG",
"cs.SY",
"math.OC",
"stat.ML"
] | false |
2402.10283 | 2024-02-15T19:19:54Z | Backdoor Attack against One-Class Sequential Anomaly Detection Models | [
"He Cheng",
"Shuhan Yuan"
] | Deep anomaly detection on sequential data has garnered significant attention
due to its wide range of application scenarios. However, deep learning-based models face
a critical security threat - their vulnerability to backdoor attacks. In this
paper, we explore compromising deep sequential anomaly detection models by
proposing a novel backdoor attack strategy. The attack approach comprises two
primary steps, trigger generation and backdoor injection. Trigger generation is
to derive imperceptible triggers by crafting perturbed samples from the benign
normal data, such that the perturbed samples remain normal. The backdoor
injection is to properly inject the backdoor triggers to compromise the model
only for the samples with triggers. The experimental results demonstrate the
effectiveness of our proposed attack strategy by injecting backdoors on two
well-established one-class anomaly detection models. | [
"cs.LG",
"cs.AI",
"cs.CR",
"cs.IT",
"math.IT"
] | false |
2402.10360 | 2024-02-15T23:10:45Z | Learnability is a Compact Property | [
"Julian Asilis",
"Siddartha Devic",
"Shaddin Dughmi",
"Vatsal Sharan",
"Shang-Hua Teng"
] | Recent work on learning has yielded a striking result: the learnability of
various problems can be undecidable, or independent of the standard ZFC axioms
of set theory. Furthermore, the learnability of such problems can fail to be a
property of finite character: informally, it cannot be detected by examining
finite projections of the problem.
On the other hand, learning theory abounds with notions of dimension that
characterize learning and consider only finite restrictions of the problem,
i.e., are properties of finite character. How can these results be reconciled?
More precisely, which classes of learning problems are vulnerable to logical
undecidability, and which are within the grasp of finite characterizations?
We demonstrate that the difficulty of supervised learning with metric losses
admits a tight finite characterization. In particular, we prove that the sample
complexity of learning a hypothesis class can be detected by examining its
finite projections. For realizable and agnostic learning with respect to a wide
class of proper loss functions, we demonstrate an exact compactness result: a
class is learnable with a given sample complexity precisely when the same is
true of all its finite projections. For realizable learning with improper loss
functions, we show that exact compactness of sample complexity can fail, and
provide matching upper and lower bounds of a factor of 2 on the extent to which
such sample complexities can differ. We conjecture that larger gaps are
possible for the agnostic case.
At the heart of our technical work is a compactness result concerning
assignments of variables that maintain a class of functions below a target
value, which generalizes Hall's classic matching theorem and may be of
independent interest. | [
"cs.LG",
"cs.CC",
"cs.DS",
"cs.LO",
"stat.ML"
] | false |
2402.10977 | 2024-02-15T18:20:42Z | Generative AI and Process Systems Engineering: The Next Frontier | [
"Benjamin Decardi-Nelson",
"Abdulelah S. Alshehri",
"Akshay Ajagekar",
"Fengqi You"
] | This article explores how emerging generative artificial intelligence (GenAI)
models, such as large language models (LLMs), can enhance solution
methodologies within process systems engineering (PSE). These cutting-edge
GenAI models, particularly foundation models (FMs), which are pre-trained on
extensive, general-purpose datasets, offer versatile adaptability for a broad
range of tasks, including responding to queries, image generation, and complex
decision-making. Given the close relationship between advancements in PSE and
developments in computing and systems technologies, exploring the synergy
between GenAI and PSE is essential. We begin our discussion with a compact
overview of both classic and emerging GenAI models, including FMs, and then
dive into their applications within key PSE domains: synthesis and design,
optimization and integration, and process monitoring and control. In each
domain, we explore how GenAI models could potentially advance PSE
methodologies, providing insights and prospects for each area. Furthermore, the
article identifies and discusses potential challenges in fully leveraging GenAI
within PSE, including multiscale modeling, data requirements, evaluation
metrics and benchmarks, and trust and safety, thereby deepening the discourse
on effective GenAI integration into systems analysis, design, optimization,
operations, monitoring, and control. This paper provides a guide for future
research focused on the applications of emerging GenAI in PSE. | [
"cs.LG",
"cs.AI",
"cs.SY",
"eess.SY",
"math.OC"
] | false |
2402.09698 | 2024-02-15T04:16:59Z | Combining Evidence Across Filtrations | [
"Yo Joong Choe",
"Aaditya Ramdas"
] | In anytime-valid sequential inference, it is known that any admissible
inference procedure must be based on test martingales and their composite
generalization, called e-processes, which are nonnegative processes whose
expectation at any arbitrary stopping time is upper-bounded by one. An
e-process quantifies the accumulated evidence against a composite null
hypothesis over a sequence of outcomes. This paper studies methods for
combining e-processes that are computed using different information sets, i.e.,
filtrations, for a null hypothesis. Even though e-processes constructed on the
same filtration can be combined effortlessly (e.g., by averaging), e-processes
constructed on different filtrations cannot be combined as easily because their
validity in a coarser filtration does not translate to validity in a finer
filtration. We discuss three concrete examples of such e-processes in the
literature: exchangeability tests, independence tests, and tests for evaluating
and comparing forecasts with lags. Our main result establishes that these
e-processes can be lifted into any finer filtration using adjusters, which are
functions that allow betting on the running maximum of the accumulated wealth
(thereby insuring against the loss of evidence). We also develop randomized
adjusters that can improve the power of the resulting sequential inference
procedure. | [
"stat.ME",
"cs.LG",
"math.PR",
"math.ST",
"stat.ML",
"stat.TH"
] | false |
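
For readers unfamiliar with adjusters, one standard example from the adjuster literature, stated informally and not necessarily the construction used in the paper, applies a function A to the running maximum of the accumulated wealth M_t:

```latex
% A(m) = sqrt(m) - 1 is an admissible adjuster because
%   \int_1^\infty A(m)\, m^{-2}\, dm
%     = \int_1^\infty \big(m^{-3/2} - m^{-2}\big)\, dm = 2 - 1 = 1.
\[
  \widetilde{E}_t \;=\; A\!\Big(\max_{s \le t} M_s\Big),
  \qquad A(m) \;=\; \sqrt{m} - 1 .
\]
```
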
2402.10357 | 2024-02-15T22:59:14Z | Efficient Sampling on Riemannian Manifolds via Langevin MCMC | [
"Xiang Cheng",
"Jingzhao Zhang",
"Suvrit Sra"
] | We study the task of efficiently sampling from a Gibbs distribution $d\pi^*
= e^{-h}\, d\mathrm{vol}_g$ over a Riemannian manifold $M$ via (geometric) Langevin
MCMC; this algorithm involves computing exponential maps in random Gaussian
directions and is efficiently implementable in practice. The key to our
analysis of Langevin MCMC is a bound on the discretization error of the
geometric Euler-Maruyama scheme, assuming $\nabla h$ is Lipschitz and $M$ has
bounded sectional curvature. Our error bound matches the error of Euclidean
Euler-Maruyama in terms of its stepsize dependence. Combined with a contraction
guarantee for the geometric Langevin Diffusion under Kendall-Cranston coupling,
we prove that the Langevin MCMC iterates lie within $\epsilon$-Wasserstein
distance of $\pi^*$ after $\tilde{O}(\epsilon^{-2})$ steps, which matches the
iteration complexity for Euclidean Langevin MCMC. Our results apply in general
settings where $h$ can be nonconvex and $M$ can have negative Ricci curvature.
Under additional assumptions that the Riemannian curvature tensor has bounded
derivatives, and that $\pi^*$ satisfies a $CD(\cdot,\infty)$ condition, we
analyze the stochastic gradient version of Langevin MCMC, and bound its
iteration complexity by $\tilde{O}(\epsilon^{-2})$ as well. | [
"math.ST",
"cs.LG",
"math.PR",
"stat.CO",
"stat.ML",
"stat.TH"
] | false |
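
For context, the geometric Euler-Maruyama step mentioned in the abstract above typically takes the form below, with step size $\eta$ and Gaussian noise $\xi_k$ drawn in the tangent space at the current iterate; this is an informal sketch, and the paper should be consulted for the exact scheme and constants.

```latex
\[
  x_{k+1} \;=\; \operatorname{Exp}_{x_k}\!\big( -\eta\,\nabla h(x_k) \;+\; \sqrt{2\eta}\,\xi_k \big),
  \qquad \xi_k \sim \mathcal{N}\big(0,\, I\big) \ \text{in}\ T_{x_k} M .
\]
```
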
2402.10435 | 2024-02-16T03:53:30Z | Dynamic Patch-aware Enrichment Transformer for Occluded Person
Re-Identification | [
"Xin Zhang",
"Keren Fu",
"Qijun Zhao"
] | Person re-identification (re-ID) continues to pose a significant challenge,
particularly in scenarios involving occlusions. Prior approaches aimed at
tackling occlusions have predominantly focused on aligning physical body
features through the utilization of external semantic cues. However, these
methods tend to be intricate and susceptible to noise. To address the
aforementioned challenges, we present an innovative end-to-end solution known
as the Dynamic Patch-aware Enrichment Transformer (DPEFormer). This model
effectively distinguishes human body information from occlusions automatically
and dynamically, eliminating the need for external detectors or precise image
alignment. Specifically, we introduce a dynamic patch token selection module
(DPSM). DPSM utilizes a label-guided proxy token as an intermediary to identify
informative occlusion-free tokens. These tokens are then selected for deriving
subsequent local part features. To facilitate the seamless integration of
global classification features with the finely detailed local features selected
by DPSM, we introduce a novel feature blending module (FBM). FBM enhances
feature representation through the complementary nature of information and the
exploitation of part diversity. Furthermore, to ensure that DPSM and the entire
DPEFormer can effectively learn with only identity labels, we also propose a
Realistic Occlusion Augmentation (ROA) strategy. This strategy leverages the
recent advances in the Segment Anything Model (SAM). As a result, it generates
occlusion images that closely resemble real-world occlusions, greatly enhancing
the subsequent contrastive learning process. Experiments on occluded and
holistic re-ID benchmarks signify a substantial advancement of DPEFormer over
existing state-of-the-art approaches. The code will be made publicly available. | [
"cs.CV"
] | false |
2402.10454 | 2024-02-16T05:16:20Z | Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary
Task Integration | [
"Mahapara Khurshid",
"Mayank Vatsa",
"Richa Singh"
] | The rising global prevalence of skin conditions, some of which can escalate
to life-threatening stages if not timely diagnosed and treated, presents a
significant healthcare challenge. This issue is particularly acute in remote
areas where limited access to healthcare often results in delayed treatment,
allowing skin diseases to advance to more critical stages. One of the primary
challenges in diagnosing skin diseases is their low inter-class variations, as
many exhibit similar visual characteristics, making accurate classification
challenging. This research introduces a novel multimodal method for classifying
skin lesions, integrating smartphone-captured images with essential clinical
and demographic information. This approach mimics the diagnostic process
employed by medical professionals. A distinctive aspect of this method is the
integration of an auxiliary task focused on super-resolution image prediction.
This component plays a crucial role in refining visual details and enhancing
feature extraction, leading to improved differentiation between classes and,
consequently, elevating the overall effectiveness of the model. The
experimental evaluations have been conducted using the PAD-UFES20 dataset,
applying various deep-learning architectures. The results of these experiments
not only demonstrate the effectiveness of the proposed method but also its
potential applicability in under-resourced healthcare environments. | [
"cs.CV"
] | false |
2402.10476 | 2024-02-16T06:45:25Z | Spike-EVPR: Deep Spiking Residual Network with Cross-Representation
Aggregation for Event-Based Visual Place Recognition | [
"Chenming Hu",
"Zheng Fang",
"Kuanxu Hou",
"Delei Kong",
"Junjie Jiang",
"Hao Zhuang",
"Mingyuan Sun",
"Xinjie Huang"
] | Event cameras have been successfully applied to visual place recognition
(VPR) tasks by using deep artificial neural networks (ANNs) in recent years.
However, previously proposed deep ANN architectures are often unable to harness
the abundant temporal information present in event streams. In contrast, deep
spiking networks exhibit more intricate spatiotemporal dynamics and are
inherently well-suited to process sparse asynchronous event streams.
Unfortunately, directly inputting temporal-dense event volumes into the spiking
network introduces excessive time steps, resulting in prohibitively high
training costs for large-scale VPR tasks. To address the aforementioned issues,
we propose a novel deep spiking network architecture called Spike-EVPR for
event-based VPR tasks. First, we introduce two novel event representations
tailored for SNN to fully exploit the spatio-temporal information from the
event streams, and reduce the video memory occupation during training as much
as possible. Then, to exploit the full potential of these two representations,
we construct a Bifurcated Spike Residual Encoder (BSR-Encoder) with powerful
representational capabilities to better extract the high-level features from
the two event representations. Next, we introduce a Shared & Specific
Descriptor Extractor (SSD-Extractor). This module is designed to extract
features shared between the two representations and features specific to each.
Finally, we propose a Cross-Descriptor Aggregation Module (CDA-Module) that
fuses the above three features to generate a refined, robust global descriptor
of the scene. Our experimental results indicate the superior performance of our
Spike-EVPR compared to several existing EVPR pipelines on Brisbane-Event-VPR
and DDD20 datasets, with the average Recall@1 increased by 7.61% on Brisbane
and 13.20% on DDD20. | [
"cs.CV"
] | false |
2402.10491 | 2024-02-16T07:48:35Z | Make a Cheap Scaling: A Self-Cascade Diffusion Model for
Higher-Resolution Adaptation | [
"Lanqing Guo",
"Yingqing He",
"Haoxin Chen",
"Menghan Xia",
"Xiaodong Cun",
"Yufei Wang",
"Siyu Huang",
"Yong Zhang",
"Xintao Wang",
"Qifeng Chen",
"Ying Shan",
"Bihan Wen"
] | Diffusion models have proven to be highly effective in image and video
generation; however, they still face composition challenges when generating
images of varying sizes due to single-scale training data. Adapting large
pre-trained diffusion models for higher resolution demands substantial
computational and optimization resources, yet achieving a generation capability
comparable to low-resolution models remains elusive. This paper proposes a
novel self-cascade diffusion model that leverages the rich knowledge gained
from a well-trained low-resolution model for rapid adaptation to
higher-resolution image and video generation, employing either tuning-free or
cheap upsampler tuning paradigms. Integrating a sequence of multi-scale
upsampler modules, the self-cascade diffusion model can efficiently adapt to a
higher resolution, preserving the original composition and generation
capabilities. We further propose a pivot-guided noise re-schedule strategy to
speed up the inference process and improve local structural details. Compared
to full fine-tuning, our approach achieves a 5X training speed-up and requires
only an additional 0.002M tuning parameters. Extensive experiments demonstrate
that our approach can quickly adapt to higher resolution image and video
synthesis by fine-tuning for just 10k steps, with virtually no additional
inference time. | [
"cs.CV"
] | true |
2402.10520 | 2024-02-16T09:09:16Z | Real-Time Model-Based Quantitative Ultrasound and Radar | [
"Tom Sharon",
"Yonina C. Eldar"
] | Ultrasound and radar signals are highly beneficial for medical imaging as
they are non-invasive and non-ionizing. Traditional imaging techniques have
limitations in terms of contrast and physical interpretation. Quantitative
medical imaging can display various physical properties such as speed of sound,
density, conductivity, and relative permittivity. This makes it useful for a
wider range of applications, including improving cancer detection, diagnosing
fatty liver, and fast stroke imaging. However, current quantitative imaging
techniques that estimate physical properties from received signals, such as
Full Waveform Inversion, are time-consuming and tend to converge to local
minima, making them unsuitable for medical imaging. To address these
challenges, we propose a neural network based on the physical model of wave
propagation, which defines the relationship between the received signals and
physical properties. Our network can reconstruct multiple physical properties
in less than one second for complex and realistic scenarios, using data from
only eight elements. We demonstrate the effectiveness of our approach for both
radar and ultrasound signals. | [
"cs.CV"
] | false |
2402.10534 | 2024-02-16T09:46:20Z | Using Left and Right Brains Together: Towards Vision and Language
Planning | [
"Jun Cen",
"Chenfei Wu",
"Xiao Liu",
"Shengming Yin",
"Yixuan Pei",
"Jinglong Yang",
"Qifeng Chen",
"Nan Duan",
"Jianguo Zhang"
] | Large Language Models (LLMs) and Large Multi-modality Models (LMMs) have
demonstrated remarkable decision-making capabilities on a variety of tasks.
However, they inherently perform planning within the language space, lacking
visual and spatial imagination abilities. In contrast, humans utilize both
left and right hemispheres of the brain for language and visual planning during
the thinking process. Therefore, we introduce a novel vision-language planning
framework in this work to perform concurrent visual and language planning for
tasks with inputs of any form. Our framework incorporates visual planning to
capture intricate environmental details, while language planning enhances the
logical coherence of the overall system. We evaluate the effectiveness of our
framework across vision-language tasks, vision-only tasks, and language-only
tasks. The results demonstrate the superior performance of our approach,
indicating that the integration of visual and language planning yields better
contextually aware task execution. | [
"cs.CV"
] | false |
2402.10595 | 2024-02-16T11:28:50Z | Compact and De-biased Negative Instance Embedding for Multi-Instance
Learning on Whole-Slide Image Classification | [
"Joohyung Lee",
"Heejeong Nam",
"Kwanhyung Lee",
"Sangchul Hahn"
] | Whole-slide image (WSI) classification is a challenging task because 1)
patches from WSI lack annotation, and 2) WSI possesses unnecessary variability,
e.g., due to the staining protocol. Recently, Multiple-Instance Learning (MIL) has made
significant progress, allowing for classification based on slide-level, rather
than patch-level, annotations. However, existing MIL methods ignore that all
patches from normal slides are normal. Using this free annotation, we introduce
a semi-supervision signal to de-bias the inter-slide variability and to capture
the common factors of variation within normal patches. Because our method is
orthogonal to the MIL algorithm, we evaluate our method on top of the recently
proposed MIL algorithms and also compare the performance with other
semi-supervised approaches. We evaluate our method on two public WSI datasets
including Camelyon-16 and TCGA lung cancer and demonstrate that our approach
significantly improves the predictive performance of existing MIL algorithms
and outperforms other semi-supervised algorithms. We release our code at
https://github.com/AITRICS/pathology_mil. | [
"cs.CV"
] | false |
2402.10698 | 2024-02-16T13:59:07Z | Question-Instructed Visual Descriptions for Zero-Shot Video Question
Answering | [
"David Romero",
"Thamar Solorio"
] | We present Q-ViD, a simple approach for video question answering (video QA).
Unlike prior methods, which are based on complex architectures, computationally
expensive pipelines, or closed models like GPTs, Q-ViD
relies on a single instruction-aware open vision-language model (InstructBLIP)
to tackle videoQA using frame descriptions. Specifically, we create captioning
instruction prompts that rely on the target questions about the videos and
leverage InstructBLIP to obtain video frame captions that are useful to the
task at hand. Subsequently, we form descriptions of the whole video using the
question-dependent frame captions, and feed that information, along with a
question-answering prompt, to a large language model (LLM). The LLM is our
reasoning module, and performs the final step of multiple-choice QA. Our simple
Q-ViD framework achieves competitive or even higher performance than current
state-of-the-art models on a diverse range of videoQA benchmarks, including
NExT-QA, STAR, How2QA, TVQA and IntentQA. | [
"cs.CV"
] | false |
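
The Q-ViD abstract describes a simple caption-then-reason pipeline; the sketch below shows that high-level flow with hypothetical callables (`caption_model` standing in for InstructBLIP, `llm` for the reasoning model) and illustrative prompt wording, not the paper's exact prompts.

```python
def qvid_style_answer(frames, question, options, caption_model, llm):
    """Question-conditioned frame captions -> whole-video description ->
    multiple-choice QA with an LLM. All callables and prompts are placeholders."""
    captions = []
    for i, frame in enumerate(frames):
        prompt = f"Describe the image, focusing on details relevant to: {question}"
        captions.append(f"Frame {i}: {caption_model(frame, prompt)}")
    video_description = "\n".join(captions)
    qa_prompt = (
        f"Video description:\n{video_description}\n\n"
        f"Question: {question}\nOptions: {options}\n"
        "Answer with the single best option."
    )
    return llm(qa_prompt)
```
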
2402.10752 | 2024-02-16T15:19:39Z | STF: Spatio-Temporal Fusion Module for Improving Video Object Detection | [
"Noreen Anwar",
"Guillaume-Alexandre Bilodeau",
"Wassim Bouachir"
] | Consecutive frames in a video contain redundancy, but they may also contain
relevant complementary information for the detection task. The objective of our
work is to leverage this complementary information to improve detection.
Therefore, we propose a spatio-temporal fusion framework (STF). We first
introduce multi-frame and single-frame attention modules that allow a neural
network to share feature maps between nearby frames to obtain more robust
object representations. Second, we introduce a dual-frame fusion module that
merges feature maps in a learnable manner to improve them. Our evaluation is
conducted on three different benchmarks including video sequences of moving
road users. The performed experiments demonstrate that the proposed
spatio-temporal fusion module leads to improved detection performance compared
to baseline object detectors. Code is available at
https://github.com/noreenanwar/STF-module | [
"cs.CV"
] | false |
2402.10821 | 2024-02-16T16:47:21Z | Training Class-Imbalanced Diffusion Model Via Overlap Optimization | [
"Divin Yan",
"Lu Qi",
"Vincent Tao Hu",
"Ming-Hsuan Yang",
"Meng Tang"
] | Diffusion models have made significant advances recently in high-quality
image synthesis and related tasks. However, diffusion models trained on
real-world datasets, which often follow long-tailed distributions, yield
inferior fidelity for tail classes. Deep generative models, including diffusion
models, are biased towards classes with abundant training images. To address
the observed appearance overlap between synthesized images of rare classes and
tail classes, we propose a method based on contrastive learning to minimize the
overlap between distributions of synthetic images for different classes. We
show variants of our probabilistic contrastive learning method can be applied
to any class conditional diffusion model. We show significant improvement in
image synthesis using our loss for multiple datasets with long-tailed
distribution. Extensive experimental results demonstrate that the proposed
method can effectively handle imbalanced data for diffusion-based generation
and classification models. Our code and datasets will be publicly available at
https://github.com/yanliang3612/DiffROP. | [
"cs.CV"
] | false |
2402.10847 | 2024-02-16T17:36:56Z | Enhancement-Driven Pretraining for Robust Fingerprint Representation
Learning | [
"Ekta Gavas",
"Kaustubh Olpadkar",
"Anoop Namboodiri"
] | Fingerprint recognition stands as a pivotal component of biometric
technology, with diverse applications from identity verification to advanced
search tools. In this paper, we propose a unique method for deriving robust
fingerprint representations by leveraging enhancement-based pre-training.
Building on the achievements of U-Net-based fingerprint enhancement, our method
employs a specialized encoder to derive representations from fingerprint images
in a self-supervised manner. We further refine these representations, aiming to
enhance the verification capabilities. Our experimental results, tested on
publicly available fingerprint datasets, reveal a marked improvement in
verification performance against established self-supervised training
techniques. Our findings not only highlight the effectiveness of our method but
also pave the way for potential advancements. Crucially, our research indicates
that it is feasible to extract meaningful fingerprint representations from
degraded images without relying on enhanced samples. | [
"cs.CV"
] | false |
2402.10855 | 2024-02-16T17:51:13Z | Control Color: Multimodal Diffusion-based Interactive Image Colorization | [
"Zhexin Liang",
"Zhaochen Li",
"Shangchen Zhou",
"Chongyi Li",
"Chen Change Loy"
] | Despite the existence of numerous colorization methods, several limitations
still exist, such as lack of user interaction, inflexibility in local
colorization, unnatural color rendering, insufficient color variation, and
color overflow. To solve these issues, we introduce Control Color (CtrlColor),
a multi-modal colorization method that leverages the pre-trained Stable
Diffusion (SD) model, offering promising capabilities in highly controllable
interactive image colorization. While several diffusion-based methods have been
proposed, supporting colorization in multiple modalities remains non-trivial.
In this study, we aim to tackle both unconditional and conditional image
colorization (text prompts, strokes, exemplars) and address color overflow and
incorrect color within a unified framework. Specifically, we present an
effective way to encode user strokes to enable precise local color manipulation
and employ a practical way to constrain the color distribution similar to
exemplars. Apart from accepting text prompts as conditions, these designs add
versatility to our approach. We also introduce a novel module based on
self-attention and a content-guided deformable autoencoder to address the
long-standing issues of color overflow and inaccurate coloring. Extensive
comparisons show that our model outperforms state-of-the-art image colorization
methods both qualitatively and quantitatively. | [
"cs.CV"
] | false |
2402.10896 | 2024-02-16T18:54:47Z | PaLM2-VAdapter: Progressively Aligned Language Model Makes a Strong
Vision-language Adapter | [
"Junfei Xiao",
"Zheng Xu",
"Alan Yuille",
"Shen Yan",
"Boyu Wang"
] | This paper demonstrates that a progressively aligned language model can
effectively bridge frozen vision encoders and large language models (LLMs).
While the fundamental architecture and pre-training methods of vision encoders
and LLMs have been extensively studied, the architecture and training strategy
of vision-language adapters vary significantly across recent works. Our
research undertakes a thorough exploration of the state-of-the-art perceiver
resampler architecture and builds a strong baseline. However, we observe that
the vision-language alignment with perceiver resampler exhibits slow
convergence and limited scalability with a lack of direct supervision. To
address this issue, we propose PaLM2-VAdapter, employing a progressively
aligned language model as the vision-language adapter. Compared to the strong
baseline with perceiver resampler, our method empirically shows faster
convergence, higher performance, and stronger scalability. Extensive
experiments across various Visual Question Answering (VQA) and captioning tasks
on both images and videos demonstrate that our model exhibits state-of-the-art
visual understanding and multi-modal reasoning capabilities. Notably, our
method achieves these advancements with 30-70% fewer parameters than the
state-of-the-art large vision-language models, marking a significant efficiency
improvement. | [
"cs.CV"
] | true |
2402.11083 | 2024-02-16T21:17:42Z | VQAttack: Transferable Adversarial Attacks on Visual Question Answering
via Pre-trained Models | [
"Ziyi Yin",
"Muchao Ye",
"Tianrong Zhang",
"Jiaqi Wang",
"Han Liu",
"Jinghui Chen",
"Ting Wang",
"Fenglong Ma"
] | Visual Question Answering (VQA) is a fundamental task in computer vision and
natural language processing fields. Although the ``pre-training & finetuning''
learning paradigm significantly improves the VQA performance, the adversarial
robustness of such a learning paradigm has not been explored. In this paper, we
delve into a new problem: using a pre-trained multimodal source model to create
adversarial image-text pairs and then transferring them to attack the target
VQA models. Correspondingly, we propose a novel VQAttack model, which can
iteratively generate both image and text perturbations with the designed
modules: the large language model (LLM)-enhanced image attack and the
cross-modal joint attack module. At each iteration, the LLM-enhanced image
attack module first optimizes the latent representation-based loss to generate
feature-level image perturbations. Then it incorporates an LLM to further
enhance the image perturbations by optimizing the designed masked answer
anti-recovery loss. The cross-modal joint attack module will be triggered at a
specific iteration, which updates the image and text perturbations
sequentially. Notably, the text perturbation updates are based on both the
learned gradients in the word embedding space and word synonym-based
substitution. Experimental results on two VQA datasets with five validated
models demonstrate the effectiveness of the proposed VQAttack in the
transferable attack setting, compared with state-of-the-art baselines. This
work reveals a significant blind spot in the ``pre-training & fine-tuning''
paradigm on VQA tasks. Source codes will be released. | [
"cs.CV"
] | false |
2402.11095 | 2024-02-16T21:48:17Z | GIM: Learning Generalizable Image Matcher From Internet Videos | [
"Xuelun Shen",
"Zhipeng Cai",
"Wei Yin",
"Matthias Müller",
"Zijun Li",
"Kaixuan Wang",
"Xiaozhi Chen",
"Cheng Wang"
] | Image matching is a fundamental computer vision problem. While learning-based
methods achieve state-of-the-art performance on existing benchmarks, they
generalize poorly to in-the-wild images. Such methods typically need to train
separate models for different scene types and are impractical when the scene
type is unknown in advance. One of the underlying problems is the limited
scalability of existing data construction pipelines, which limits the diversity
of standard image matching datasets. To address this problem, we propose GIM, a
self-training framework for learning a single generalizable model based on any
image matching architecture using internet videos, an abundant and diverse data
source. Given an architecture, GIM first trains it on standard domain-specific
datasets and then combines it with complementary matching methods to create
dense labels on nearby frames of novel videos. These labels are filtered by
robust fitting, and then enhanced by propagating them to distant frames. The
final model is trained on propagated data with strong augmentations. We also
propose ZEB, the first zero-shot evaluation benchmark for image matching. By
mixing data from diverse domains, ZEB can thoroughly assess the cross-domain
generalization performance of different methods. Applying GIM consistently
improves the zero-shot performance of 3 state-of-the-art image matching
architectures; with 50 hours of YouTube videos, the relative zero-shot
performance improves by 8.4%-18.1%. GIM also enables generalization to extreme
cross-domain data such as Bird Eye View (BEV) images of projected 3D point
clouds (Fig. 1(c)). More importantly, our single zero-shot model consistently
outperforms domain-specific baselines when evaluated on downstream tasks
inherent to their respective domains. The video presentation is available at
https://www.youtube.com/watch?v=FU_MJLD8LeY. | [
"cs.CV"
] | false |
2402.10376 | 2024-02-16T00:04:36Z | Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE) | [
"Usha Bhalla",
"Alex Oesterling",
"Suraj Srinivas",
"Flavio P. Calmon",
"Himabindu Lakkaraju"
] | CLIP embeddings have demonstrated remarkable performance across a wide range
of computer vision tasks. However, these high-dimensional, dense vector
representations are not easily interpretable, restricting their usefulness in
downstream applications that require transparency. In this work, we empirically
show that CLIP's latent space is highly structured, and consequently that CLIP
representations can be decomposed into their underlying semantic components. We
leverage this understanding to propose a novel method, Sparse Linear Concept
Embeddings (SpLiCE), for transforming CLIP representations into sparse linear
combinations of human-interpretable concepts. Distinct from previous work,
SpLiCE does not require concept labels and can be applied post hoc. Through
extensive experimentation with multiple real-world datasets, we validate that
the representations output by SpLiCE can explain and even replace traditional
dense CLIP representations, maintaining equivalent downstream performance while
significantly improving their interpretability. We also demonstrate several use
cases of SpLiCE representations including detecting spurious correlations,
model editing, and quantifying semantic shifts in datasets. | [
"cs.LG",
"cs.CV"
] | false |
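
The decomposition described in the SpLiCE abstract can be illustrated with generic L1-penalised sparse coding over a concept dictionary, as in the sketch below; the dictionary, penalty, and solver are placeholders and do not reproduce the authors' exact method or concept vocabulary.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_concept_decomposition(embedding, concept_dict, alpha=1e-4):
    """Decompose an embedding into a sparse nonnegative linear combination of
    concept embeddings via L1-penalised least squares.
    concept_dict: (dim, n_concepts) matrix of unit-norm concept directions."""
    lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=5000)
    lasso.fit(concept_dict, embedding)
    return lasso.coef_                            # sparse weights, one per concept

rng = np.random.default_rng(0)
dim, n_concepts = 512, 1000
D = rng.normal(size=(dim, n_concepts))
D /= np.linalg.norm(D, axis=0)                    # unit-norm concept directions
embedding = 0.7 * D[:, 3] + 0.3 * D[:, 42]        # built from two known concepts
weights = sparse_concept_decomposition(embedding, D)
print(np.argsort(weights)[-2:])                   # the constructed concepts should dominate
```
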
2402.10404 | 2024-02-16T02:12:20Z | Explaining generative diffusion models via visual analysis for
interpretable decision-making process | [
"Ji-Hoon Park",
"Yeong-Joon Ju",
"Seong-Whan Lee"
] | Diffusion models have demonstrated remarkable performance in generation
tasks. Nevertheless, explaining the diffusion process remains challenging due
to it being a sequence of denoising noisy images that are difficult for experts
to interpret. To address this issue, we propose three research questions to
interpret the diffusion process from the perspective of the visual concepts
generated by the model and the region where the model attends in each time
step. We devise tools for visualizing the diffusion process and answering the
aforementioned research questions to render the diffusion process
human-understandable. We show how the output is progressively generated in the
diffusion process by explaining the level of denoising and highlighting
relationships to foundational visual concepts at each time step through the
results of experiments with various visual analyses using the tools. Throughout
the training of the diffusion model, the model learns diverse visual concepts
corresponding to each time-step, enabling the model to predict varying levels
of visual concepts at different stages. We substantiate our tools using Area
Under the Curve (AUC) score, correlation quantification, and cross-attention
mapping. Our findings provide insights into the diffusion process and pave the
way for further research into explainable diffusion mechanisms. | [
"cs.CV",
"cs.AI",
"68T01"
] | false |
2402.10478 | 2024-02-16T06:57:03Z | CodaMal: Contrastive Domain Adaptation for Malaria Detection in Low-Cost
Microscopes | [
"Ishan Rajendrakumar Dave",
"Tristan de Blegiers",
"Chen Chen",
"Mubarak Shah"
] | Malaria is a major health issue worldwide, and its diagnosis requires
scalable solutions that can work effectively with low-cost microscopes (LCM).
Deep learning-based methods have shown success in computer-aided diagnosis from
microscopic images. However, these methods need annotated images that show
cells affected by malaria parasites and their life stages. Annotating images
from LCM significantly increases the burden on medical experts compared to
annotating images from high-cost microscopes (HCM). For this reason, a
practical solution would be trained on HCM images which should generalize well
on LCM images during testing. While earlier methods adopted a multi-stage
learning process, they did not offer an end-to-end approach. In this work, we
present an end-to-end learning framework, named CodaMal (Contrastive Domain
Adaptation for Malaria). In order to bridge the gap between HCM (training) and
LCM (testing), we propose a domain adaptive contrastive loss. It reduces the
domain shift by promoting similarity between the representations of HCM and its
corresponding LCM image, without imposing an additional annotation burden. In
addition, the training objective includes object detection objectives with
carefully designed augmentations, ensuring the accurate detection of malaria
parasites. On the publicly available large-scale M5-dataset, our proposed
method shows a significant improvement of 16% over the state-of-the-art methods
in terms of the mean average precision metric (mAP), provides 21x speed up
during inference, and requires only half learnable parameters than the prior
methods. Our code is publicly available. | [
"cs.CV",
"cs.LG"
] | false |
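
A domain-adaptive contrastive objective of the kind described in the CodaMal abstract can be sketched with a paired InfoNCE-style loss that pulls together features of an HCM image and its corresponding LCM image; the formulation below is a generic sketch with an illustrative temperature, and the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def paired_domain_contrastive_loss(feat_hcm, feat_lcm, temperature=0.1):
    """Symmetric InfoNCE over cross-domain similarities: matching HCM/LCM pairs
    in a batch are positives, all other pairs are negatives."""
    z1 = F.normalize(feat_hcm, dim=1)                    # (B, D)
    z2 = F.normalize(feat_lcm, dim=1)                    # (B, D)
    logits = z1 @ z2.t() / temperature                   # (B, B)
    targets = torch.arange(z1.shape[0], device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = paired_domain_contrastive_loss(torch.randn(16, 128), torch.randn(16, 128))
print(float(loss))
```
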
2402.10483 | 2024-02-16T07:13:24Z | GaussianHair: Hair Modeling and Rendering with Light-aware Gaussians | [
"Haimin Luo",
"Min Ouyang",
"Zijun Zhao",
"Suyi Jiang",
"Longwen Zhang",
"Qixuan Zhang",
"Wei Yang",
"Lan Xu",
"Jingyi Yu"
] | Hairstyle reflects culture and ethnicity at first glance. In the digital era,
various realistic human hairstyles are also critical to high-fidelity digital
human assets for beauty and inclusivity. Yet, realistic hair modeling and
real-time rendering for animation is a formidable challenge due to its sheer
number of strands, complicated structures of geometry, and sophisticated
interaction with light. This paper presents GaussianHair, a novel explicit hair
representation. It enables comprehensive modeling of hair geometry and
appearance from images, fostering innovative illumination effects and dynamic
animation capabilities. At the heart of GaussianHair is the novel concept of
representing each hair strand as a sequence of connected cylindrical 3D
Gaussian primitives. This approach not only retains the hair's geometric
structure and appearance but also allows for efficient rasterization onto a 2D
image plane, facilitating differentiable volumetric rendering. We further
enhance this model with the "GaussianHair Scattering Model", adept at
recreating the slender structure of hair strands and accurately capturing their
local diffuse color in uniform lighting. Through extensive experiments, we
substantiate that GaussianHair achieves breakthroughs in both geometric and
appearance fidelity, transcending the limitations encountered in
state-of-the-art methods for hair reconstruction. Beyond representation,
GaussianHair extends to support editing, relighting, and dynamic rendering of
hair, offering seamless integration with conventional CG pipeline workflows.
Complementing these advancements, we have compiled an extensive dataset of real
human hair, each with meticulously detailed strand geometry, to propel further
research in this field. | [
"cs.GR",
"cs.CV"
] | false |
2402.10665 | 2024-02-16T13:14:12Z | Selective Prediction for Semantic Segmentation using Post-Hoc Confidence
Estimation and Its Performance under Distribution Shift | [
"Bruno Laboissiere Camargos Borges",
"Bruno Machado Pacheco",
"Danilo Silva"
] | Semantic segmentation plays a crucial role in various computer vision
applications, yet its efficacy is often hindered by the lack of high-quality
labeled data. To address this challenge, a common strategy is to leverage
models trained on data from different populations, such as publicly available
datasets. This approach, however, leads to the distribution shift problem,
resulting in reduced performance on the population of interest. In scenarios
where model errors can have significant consequences, selective prediction
methods offer a means to mitigate risks and reduce reliance on expert
supervision. This paper investigates selective prediction for semantic
segmentation in low-resource settings, thus focusing on post-hoc confidence
estimators applied to pre-trained models operating under distribution shift. We
propose a novel image-level confidence measure tailored for semantic
segmentation and demonstrate its effectiveness through experiments on three
medical imaging tasks. Our findings show that post-hoc confidence estimators
offer a cost-effective approach to reducing the impacts of distribution shift. | [
"cs.LG",
"cs.CV"
] | false |
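The exact image-level confidence measure proposed above is not reproduced here; the sketch below only illustrates the generic post-hoc recipe it builds on: aggregate per-pixel softmax confidences into one score per image and abstain below a threshold chosen for a target coverage (the names and the aggregation choice are assumptions).

```python
import numpy as np

def image_confidence(softmax_probs):
    """softmax_probs: (num_classes, H, W) per-pixel class probabilities.
    One simple image-level score: the mean of the per-pixel max probability."""
    return softmax_probs.max(axis=0).mean()

def select_threshold(confidences, target_coverage=0.8):
    """Threshold such that roughly `target_coverage` of images are accepted."""
    return np.quantile(confidences, 1.0 - target_coverage)

# Stand-in for per-image softmax maps from a validation set
rng = np.random.default_rng(0)
val_probs = [rng.dirichlet(np.ones(4), size=(32, 32)).transpose(2, 0, 1) for _ in range(100)]
confs = np.array([image_confidence(p) for p in val_probs])
tau = select_threshold(confs)
accepted = confs >= tau        # images below tau are deferred to an expert
```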
2402.10717 | 2024-02-16T14:19:33Z | BioFusionNet: Deep Learning-Based Survival Risk Stratification in ER+
Breast Cancer Through Multifeature and Multimodal Data Fusion | [
"Raktim Kumar Mondol",
"Ewan K. A. Millar",
"Arcot Sowmya",
"Erik Meijering"
] | Breast cancer is a significant health concern affecting millions of women
worldwide. Accurate survival risk stratification plays a crucial role in
guiding personalised treatment decisions and improving patient outcomes. Here
we present BioFusionNet, a deep learning framework that fuses image-derived
features with genetic and clinical data to achieve a holistic patient profile
and perform survival risk stratification of ER+ breast cancer patients. We
employ multiple self-supervised feature extractors, namely DINO and MoCoV3,
pretrained on histopathology patches to capture detailed histopathological
image features. We then utilise a variational autoencoder (VAE) to fuse these
features, and harness the latent space of the VAE to feed into a self-attention
network, generating patient-level features. Next, we develop a
co-dual-cross-attention mechanism to combine the histopathological features
with genetic data, enabling the model to capture the interplay between them.
Additionally, clinical data is incorporated using a feed-forward network (FFN),
further enhancing predictive performance and achieving comprehensive multimodal
feature integration. Furthermore, we introduce a weighted Cox loss function,
specifically designed to handle imbalanced survival data, which is a common
challenge in the field. The proposed model achieves a mean concordance index
(C-index) of 0.77 and a time-dependent area under the curve (AUC) of 0.84,
outperforming state-of-the-art methods. It predicts risk (high versus low) with
prognostic significance for overall survival (OS) in univariate analysis
(HR=2.99, 95% CI: 1.88--4.78, p<0.005), and maintains independent significance
in multivariate analysis incorporating standard clinicopathological variables
(HR=2.91, 95% CI: 1.80--4.68, p<0.005). The proposed method not only improves
model performance but also addresses a critical gap in handling imbalanced
data. | [
"cs.CV",
"cs.AI"
] | false |
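A minimal sketch of a weighted Cox partial-likelihood loss in the spirit of the one mentioned above; the concrete weighting scheme used by BioFusionNet is not reproduced, so the per-sample weights and the tie handling below are simplifying assumptions.

```python
import torch

def weighted_cox_loss(risk_scores, times, events, weights):
    """Negative weighted Cox partial log-likelihood (Breslow-style, ignoring ties).
    risk_scores: (N,) model outputs; times: (N,) follow-up times;
    events: (N,) 1.0 if the event occurred, 0.0 if censored; weights: (N,)."""
    order = torch.argsort(times, descending=True)      # so each risk set is a prefix
    risk, evt, w = risk_scores[order], events[order], weights[order]
    log_risk_set = torch.logcumsumexp(risk, dim=0)     # log of sum over the risk set
    per_event = (risk - log_risk_set) * evt * w
    return -per_event.sum() / (evt * w).sum().clamp(min=1e-8)

loss = weighted_cox_loss(torch.randn(16), torch.rand(16) * 60,
                         torch.randint(0, 2, (16,)).float(), torch.ones(16))
```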
2402.10728 | 2024-02-16T14:44:40Z | Semi-weakly-supervised neural network training for medical image
registration | [
"Yiwen Li",
"Yunguan Fu",
"Iani J. M. B. Gayo",
"Qianye Yang",
"Zhe Min",
"Shaheer U. Saeed",
"Wen Yan",
"Yipei Wang",
"J. Alison Noble",
"Mark Emberton",
"Matthew J. Clarkson",
"Dean C. Barratt",
"Victor A. Prisacariu",
"Yipeng Hu"
] | For training registration networks, weak supervision from segmented
corresponding regions-of-interest (ROIs) has proven effective for (a)
supplementing unsupervised methods, and (b) being used independently in
registration tasks in which unsupervised losses are unavailable or ineffective.
This correspondence-informing supervision entails cost in annotation that
requires significant specialised effort. This paper describes a
semi-weakly-supervised registration pipeline that improves the model
performance, when only a small corresponding-ROI-labelled dataset is available,
by exploiting unlabelled image pairs. We examine two types of augmentation
methods by perturbation on network weights and image resampling, such that
consistency-based unsupervised losses can be applied on unlabelled data. The
novel WarpDDF and RegCut approaches are proposed to allow commutative
perturbation between an image pair and the predicted spatial transformation
(i.e. respective input and output of registration networks), distinct from
existing perturbation methods for classification or segmentation. Experiments
using 589 male pelvic MR images, labelled with eight anatomical ROIs, show the
improvement in registration performance and the ablated contributions from the
individual strategies. Furthermore, this study attempts to construct one of the
first computational atlases for pelvic structures, enabled by registering
inter-subject MRs, and quantifies the significant differences due to the
proposed semi-weak supervision with a discussion on the potential clinical use
of example atlas-derived statistics. | [
"eess.IV",
"cs.CV"
] | false |
2402.10776 | 2024-02-16T15:58:45Z | In-Vivo Hyperspectral Human Brain Image Database for Brain Cancer
Detection | [
"H. Fabelo",
"S. Ortega",
"A. Szolna",
"D. Bulters",
"J. F. Pineiro",
"S. Kabwama",
"A. Shanahan",
"H. Bulstrode",
"S. Bisshopp",
"B. R. Kiran",
"D. Ravi",
"R. Lazcano",
"D. Madronal",
"C. Sosa",
"C. Espino",
"M. Marquez",
"M. De la Luz Plaza",
"R. Camacho",
"D. Carrera",
"M. Hernandez",
"G. M. Callico",
"J. Morera",
"B. Stanciulescu",
"G. Z. Yang",
"R. Salvador",
"E. Juarez",
"C. Sanz",
"R. Sarmiento"
] | The use of hyperspectral imaging for medical applications has become more
common in recent years. One of the main obstacles that researchers face when
developing hyperspectral algorithms for medical applications is the lack of
specific, publicly available hyperspectral medical data. The work
described in this paper was developed within the framework of the European
project HELICoiD (HypErspectraL Imaging Cancer Detection), which had as a main
goal the application of hyperspectral imaging to the delineation of brain
tumors in real-time during neurosurgical operations. In this paper, the
methodology followed to generate the first hyperspectral database of in-vivo
human brain tissues is presented. Data was acquired employing a customized
hyperspectral acquisition system capable of capturing information in the Visual
and Near InfraRed (VNIR) range from 400 to 1000 nm. Repeatability was assessed
for the cases where two images of the same scene were captured consecutively.
The analysis reveals that the system works more efficiently in the spectral
range between 450 and 900 nm. A total of 36 hyperspectral images from 22
different patients were obtained. From these data, more than 300 000 spectral
signatures were labeled employing a semi-automatic methodology based on the
spectral angle mapper algorithm. Four different classes were defined: normal
tissue, tumor tissue, blood vessel, and background elements. All the
hyperspectral data has been made available in a public repository. | [
"eess.IV",
"cs.CV"
] | false |
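The semi-automatic labelling described above is based on the spectral angle mapper; a minimal sketch of that similarity measure follows (the labelling pipeline around it is not shown, and the array sizes are placeholders).

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle in radians between a pixel spectrum and a reference spectrum;
    smaller angles mean more similar materials, independent of illumination scale."""
    cos = pixel @ reference / (np.linalg.norm(pixel) * np.linalg.norm(reference) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(cube, references):
    """cube: (H, W, bands) hyperspectral image; references: (K, bands) labelled spectra.
    Returns the index of the closest reference spectrum for every pixel."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    angles = np.array([[spectral_angle(p, r) for r in references] for p in flat])
    return angles.argmin(axis=1).reshape(h, w)

labels = sam_classify(np.random.rand(8, 8, 128), np.random.rand(4, 128))
```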
2402.10798 | 2024-02-16T16:21:15Z | VATr++: Choose Your Words Wisely for Handwritten Text Generation | [
"Bram Vanherle",
"Vittorio Pippi",
"Silvia Cascianelli",
"Nick Michiels",
"Frank Van Reeth",
"Rita Cucchiara"
] | Styled Handwritten Text Generation (HTG) has received significant attention
in recent years, propelled by the success of learning-based solutions employing
GANs, Transformers, and, preliminarily, Diffusion Models. Despite this surge in
interest, there remains a critical yet understudied aspect - the impact of the
input, both visual and textual, on the HTG model training and its subsequent
influence on performance. This study delves deeper into a cutting-edge
Styled-HTG approach, proposing strategies for input preparation and training
regularization that allow the model to achieve better performance and
generalize better. These aspects are validated through extensive analysis on
several different settings and datasets. Moreover, in this work, we go beyond
performance optimization and address a significant hurdle in HTG research - the
lack of a standardized evaluation protocol. In particular, we propose a
standardization of the evaluation protocol for HTG and conduct a comprehensive
benchmarking of existing approaches. By doing so, we aim to establish a
foundation for fair and meaningful comparisons between HTG strategies,
fostering progress in the field. | [
"cs.CV",
"cs.AI"
] | false |
2402.10814 | 2024-02-16T16:37:48Z | Associative Memories in the Feature Space | [
"Tommaso Salvatori",
"Beren Millidge",
"Yuhang Song",
"Rafal Bogacz",
"Thomas Lukasiewicz"
] | An autoassociative memory model is a function that, given a set of data
points, takes as input an arbitrary vector and outputs the most similar data
point from the memorized set. However, popular memory models fail to retrieve
images even when the corruption is mild and easy to detect for a human
evaluator. This is because similarities are evaluated in the raw pixel space,
which does not contain any semantic information about the images. This problem
can be easily solved by computing \emph{similarities} in an embedding space
instead of the pixel space. We show that an effective way of computing such
embeddings is via a network pretrained with a contrastive loss. As the
dimension of embedding spaces is often significantly smaller than the pixel
space, we also have a faster computation of similarity scores. We test this
method on complex datasets such as CIFAR10 and STL10. An additional drawback of
current models is the need of storing the whole dataset in the pixel space,
which is often extremely large. We relax this condition and propose a class of
memory models that only stores low-dimensional semantic embeddings, and uses
them to retrieve similar, but not identical, memories. We demonstrate a proof
of concept of this method on a simple task on the MNIST dataset. | [
"cs.LG",
"cs.CV"
] | false |
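A minimal sketch of the retrieval rule discussed above: memories are stored as embeddings from a contrastively pretrained encoder, and a (possibly corrupted) query returns the most similar stored item by cosine similarity. The tiny `encoder` below is only a stand-in for such a pretrained network.

```python
import torch
import torch.nn.functional as F

class EmbeddingMemory:
    def __init__(self, encoder, images):
        self.encoder, self.images = encoder, images
        with torch.no_grad():
            self.keys = F.normalize(encoder(images), dim=1)       # (N, d) stored embeddings

    @torch.no_grad()
    def retrieve(self, query):
        q = F.normalize(self.encoder(query.unsqueeze(0)), dim=1)  # (1, d) query embedding
        idx = (q @ self.keys.t()).argmax(dim=1).item()            # most similar memory
        return self.images[idx]

# Toy usage: a linear "encoder" over flattened 32x32 RGB images
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
memory = EmbeddingMemory(encoder, torch.rand(10, 3, 32, 32))
recalled = memory.retrieve(torch.rand(3, 32, 32))
```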
2402.10865 | 2024-02-16T18:01:43Z | Multi-Model 3D Registration: Finding Multiple Moving Objects in
Cluttered Point Clouds | [
"David Jin",
"Sushrut Karmalkar",
"Harry Zhang",
"Luca Carlone"
] | We investigate a variation of the 3D registration problem, named multi-model
3D registration. In the multi-model registration problem, we are given two
point clouds picturing a set of objects at different poses (and possibly
including points belonging to the background) and we want to simultaneously
reconstruct how all objects moved between the two point clouds. This setup
generalizes standard 3D registration where one wants to reconstruct a single
pose, e.g., the motion of the sensor picturing a static scene. Moreover, it
provides a mathematically grounded formulation for relevant robotics
applications, e.g., where a depth sensor onboard a robot perceives a dynamic
scene and has the goal of estimating its own motion (from the static portion of
the scene) while simultaneously recovering the motion of all dynamic objects.
We assume a correspondence-based setup where we have putative matches between
the two point clouds and consider the practical case where these
correspondences are plagued with outliers. We then propose a simple approach
based on Expectation-Maximization (EM) and establish theoretical conditions
under which the EM approach converges to the ground truth. We evaluate the
approach in simulated and real datasets ranging from table-top scenes to
self-driving scenarios and demonstrate its effectiveness when combined with
state-of-the-art scene flow methods to establish dense correspondences. | [
"cs.RO",
"cs.CV"
] | false |
2402.10882 | 2024-02-16T18:36:36Z | Universal Prompt Optimizer for Safe Text-to-Image Generation | [
"Zongyu Wu",
"Hongcheng Gao",
"Yueze Wang",
"Xiang Zhang",
"Suhang Wang"
] | Text-to-Image (T2I) models have shown great performance in generating images
based on textual prompts. However, these models are vulnerable to unsafe input
to generate unsafe content such as sexual, harassing, and illegal-activity images.
Existing studies based on image checker, model fine-tuning and embedding
blocking are impractical in real-world applications. Hence, \textit{we propose
the first universal prompt optimizer for safe T2I generation in black-box
scenario}. We first construct a dataset consisting of toxic-clean prompt pairs
by GPT-3.5 Turbo. To guide the optimizer toward converting toxic prompts into
clean prompts while preserving semantic information, we design a
novel reward function measuring toxicity and text alignment of generated images
and train the optimizer through Proximal Policy Optimization. Experiments show
that our approach can effectively reduce the likelihood of various T2I models
in generating inappropriate images, with no significant impact on text
alignment. It is also flexible to be combined with methods to achieve better
performance. | [
"cs.CV",
"cs.CL"
] | false |
2402.10887 | 2024-02-16T18:43:39Z | Weak-Mamba-UNet: Visual Mamba Makes CNN and ViT Work Better for
Scribble-based Medical Image Segmentation | [
"Ziyang Wang",
"Chao Ma"
] | Medical image segmentation is increasingly reliant on deep learning
techniques, yet the promising performance often comes with high annotation
costs. This paper introduces Weak-Mamba-UNet, an innovative weakly-supervised
learning (WSL) framework that leverages the capabilities of Convolutional
Neural Network (CNN), Vision Transformer (ViT), and the cutting-edge Visual
Mamba (VMamba) architecture for medical image segmentation, especially when
dealing with scribble-based annotations. The proposed WSL strategy incorporates
three distinct architectures with the same symmetrical encoder-decoder structure: a
CNN-based UNet for detailed local feature extraction, a Swin Transformer-based
SwinUNet for comprehensive global context understanding, and a VMamba-based
Mamba-UNet for efficient long-range dependency modeling. The key concept of
this framework is a collaborative and cross-supervisory mechanism that employs
pseudo labels to facilitate iterative learning and refinement across the
networks. The effectiveness of Weak-Mamba-UNet is validated on a publicly
available MRI cardiac segmentation dataset with processed scribble annotations,
where it surpasses the performance of a similar WSL framework utilizing only
UNet or SwinUNet. This highlights its potential in scenarios with sparse or
imprecise annotations. The source code is made publicly accessible. | [
"eess.IV",
"cs.CV"
] | false |
2402.10894 | 2024-02-16T18:51:42Z | Fusion of Diffusion Weighted MRI and Clinical Data for Predicting
Functional Outcome after Acute Ischemic Stroke with Deep Contrastive Learning | [
"Chia-Ling Tsai",
"Hui-Yun Su",
"Shen-Feng Sung",
"Wei-Yang Lin",
"Ying-Ying Su",
"Tzu-Hsien Yang",
"Man-Lin Mai"
] | Stroke is a common disabling neurological condition that affects about
one-quarter of the adult population over age 25; more than half of patients
still have poor outcomes, such as permanent functional dependence or even
death, after the onset of acute stroke. The aim of this study is to investigate
the efficacy of diffusion-weighted MRI modalities combined with structured
health profiles for predicting the functional outcome, to facilitate early
intervention. A deep fusion learning network is proposed with two-stage
training: the first stage focuses on cross-modality representation learning and
the second stage on classification. Supervised contrastive learning is
exploited to learn discriminative features that separate the two classes of
patients from embeddings of individual modalities and from the fused multimodal
embedding. The network takes as the input DWI and ADC images, and structured
health profile data. The outcome is the prediction of the patient needing
long-term care at 3 months after the onset of stroke. Trained and evaluated
with a dataset of 3297 patients, our proposed fusion model achieves 0.87, 0.80
and 80.45% for AUC, F1-score and accuracy, respectively, outperforming existing
models that consolidate both imaging and structured data in the medical domain.
If trained with comprehensive clinical variables, including NIHSS and
comorbidities, the gain from images on making accurate prediction is not
considered substantial, but significant. However, diffusion-weighted MRI can
replace NIHSS to achieve comparable level of accuracy combining with other
readily available clinical variables for better generalization. | [
"cs.CV",
"cs.LG"
] | false |
2402.11036 | 2024-02-16T19:29:43Z | Occlusion Resilient 3D Human Pose Estimation | [
"Soumava Kumar Roy",
"Ilia Badanin",
"Sina Honari",
"Pascal Fua"
] | Occlusions remain one of the key challenges in 3D body pose estimation from
single-camera video sequences. Temporal consistency has been extensively used
to mitigate their impact but the existing algorithms in the literature do not
explicitly model them.
Here, we model occlusions explicitly by representing the deforming body as a spatio-temporal
graph. We then introduce a refinement network that performs graph convolutions
over this graph to output 3D poses. To ensure robustness to occlusions, we
train this network with a set of binary masks that we use to disable some of
the edges as in drop-out techniques.
In effect, we simulate the fact that some joints can be hidden for periods of
time and train the network to be immune to that. We demonstrate the
effectiveness of this approach compared to state-of-the-art techniques that
infer poses from single-camera sequences. | [
"cs.CV",
"cs.LG"
] | false |
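A minimal sketch (with assumed details) of the masking idea described above: during training, edges of the spatio-temporal joint graph are disabled with random binary masks, so the graph convolutions learn to cope with joints that stay hidden for periods of time.

```python
import torch

def mask_edges(adjacency, drop_prob=0.2):
    """adjacency: (J, J) graph over body joints across space and time.
    Randomly disable edges (symmetrically) while keeping self-connections."""
    keep = (torch.rand_like(adjacency) > drop_prob).float()
    keep = torch.minimum(keep, keep.t())      # drop an edge in both directions at once
    masked = adjacency * keep
    masked.fill_diagonal_(1.0)                # a joint always keeps its self-loop
    return masked

def gcn_step(x, adjacency, weight):
    """One graph-convolution step on joint features x: (J, C_in) -> (J, C_out)."""
    a = mask_edges(adjacency)
    deg = a.sum(dim=1, keepdim=True).clamp(min=1e-6)
    return torch.relu((a / deg) @ x @ weight)

out = gcn_step(torch.randn(17, 2), torch.ones(17, 17), torch.randn(2, 64))
```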
2402.11058 | 2024-02-16T20:14:47Z | II-MMR: Identifying and Improving Multi-modal Multi-hop Reasoning in
Visual Question Answering | [
"Jihyung Kil",
"Farideh Tavazoee",
"Dongyeop Kang",
"Joo-Kyung Kim"
] | Visual Question Answering (VQA) often involves diverse reasoning scenarios
across Vision and Language (V&L). Most prior VQA studies, however, have merely
focused on assessing the model's overall accuracy without evaluating it on
different reasoning cases. Furthermore, some recent works observe that
conventional Chain-of-Thought (CoT) prompting fails to generate effective
reasoning for VQA, especially for complex scenarios requiring multi-hop
reasoning. In this paper, we propose II-MMR, a novel idea to identify and
improve multi-modal multi-hop reasoning in VQA. Specifically, II-MMR takes a VQA
question with an image and finds a reasoning path to reach its answer using two
novel language promptings: (i) answer prediction-guided CoT prompt, or (ii)
knowledge triplet-guided prompt. II-MMR then analyzes this path to identify
different reasoning cases in current VQA benchmarks by estimating how many hops
and what types (i.e., visual or beyond-visual) of reasoning are required to
answer the question. On popular benchmarks including GQA and A-OKVQA, II-MMR
observes that most of their VQA questions are easy to answer, simply demanding
"single-hop" reasoning, whereas only a few questions require "multi-hop"
reasoning. Moreover, while the recent V&L model struggles with such complex
multi-hop reasoning questions even using the traditional CoT method, II-MMR
shows its effectiveness across all reasoning cases in both zero-shot and
fine-tuning settings. | [
"cs.CV",
"cs.CL"
] | false |
2402.11093 | 2024-02-16T21:39:28Z | Modular Graph Extraction for Handwritten Circuit Diagram Images | [
"Johannes Bayer",
"Leo van Waveren",
"Andreas Dengel"
] | As digitization in engineering has progressed, circuit diagrams (also referred to
as schematics) are typically developed and maintained in computer-aided
engineering (CAE) systems, thus allowing for automated verification, simulation
and further processing in downstream engineering steps. However, apart from
printed legacy schematics, hand-drawn circuit diagrams are still used today in
the educational domain, where they serve as an easily accessible means for
trainees and students to learn to draw this type of diagram. Furthermore,
hand-drawn schematics are typically used in examinations due to legal
constraints. In order to harness the capabilities of digital circuit
representations, automated means for extracting the electrical graph from
raster graphics are required.
While respective approaches have been proposed in literature, they are
typically conducted on small or non-disclosed datasets. This paper describes a
modular end-to-end solution on a larger, public dataset, in which approaches
for the individual sub-tasks are evaluated to form a new baseline. These
sub-tasks include object detection (for electrical symbols and texts), binary
segmentation (drafter's stroke vs. background), handwritten character
recognition and orientation regression for electrical symbols and texts.
Furthermore, computer-vision graph assembly and rectification algorithms are
presented. All methods are integrated in a publicly available prototype. | [
"cs.CV",
"cs.LG"
] | false |
2402.10403 | 2024-02-16T02:01:24Z | Polyhedral Complex Derivation from Piecewise Trilinear Networks | [
"Jin-Hwa Kim"
] | Recent advancements in visualizing deep neural networks provide insights into
their structures and mesh extraction from Continuous Piecewise Affine (CPWA)
functions. Meanwhile, developments in neural surface representation learning
incorporate non-linear positional encoding, addressing issues like spectral
bias; however, this poses challenges in applying mesh extraction techniques
based on CPWA functions. Focusing on trilinear interpolating methods as
positional encoding, we present theoretical insights and an analytical mesh
extraction, showing the transformation of hypersurfaces to flat planes within
the trilinear region under the eikonal constraint. Moreover, we introduce a
method for approximating intersecting points among three hypersurfaces
contributing to broader applications. We empirically validate correctness and
parsimony through chamfer distance, efficiency, and angular distance, while
examining the correlation between the eikonal loss and the planarity of the
hypersurfaces. | [
"cs.LG",
"cs.AI",
"cs.CV",
"cs.GR"
] | false |
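The analysis above centres on trilinear interpolation as the positional encoding; a minimal sketch of that interpolation for a single query point inside a unit grid cell (notation is illustrative):

```python
import numpy as np

def trilinear(corners, x, y, z):
    """corners: (2, 2, 2) values at the cell corners, indexed [i, j, k] for corner
    (i, j, k); (x, y, z) in [0, 1]^3 are local coordinates. The result is a
    degree-(1, 1, 1) polynomial in x, y, z, i.e. piecewise trilinear over the grid."""
    c = corners
    c00 = c[0, 0, 0] * (1 - x) + c[1, 0, 0] * x
    c10 = c[0, 1, 0] * (1 - x) + c[1, 1, 0] * x
    c01 = c[0, 0, 1] * (1 - x) + c[1, 0, 1] * x
    c11 = c[0, 1, 1] * (1 - x) + c[1, 1, 1] * x
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    return c0 * (1 - z) + c1 * z

value = trilinear(np.arange(8.0).reshape(2, 2, 2), 0.25, 0.5, 0.75)
```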
2402.10425 | 2024-02-16T03:22:58Z | DABS-LS: Deep Atlas-Based Segmentation Using Regional Level Set
Self-Supervision | [
"Hannah G. Mason",
"Jack H. Noble"
] | Cochlear implants (CIs) are neural prosthetics used to treat patients with
severe-to-profound hearing loss. Patient-specific modeling of CI stimulation of
the auditory nerve fiber (ANFs) can help audiologists improve the CI
programming. These models require localization of the ANFs relative to
surrounding anatomy and the CI. Localization is challenging because the ANFs
are so small they are not directly visible in clinical imaging. In this work,
we hypothesize the position of the ANFs can be accurately inferred from the
location of the internal auditory canal (IAC), which has high contrast in CT,
since the ANFs pass through this canal between the cochlea and the brain.
Inspired by VoxelMorph, in this paper we propose a deep atlas-based IAC
segmentation network. We create a single atlas in which the IAC and ANFs are
pre-localized. Our network is trained to produce deformation fields (DFs)
mapping coordinates from the atlas to new target volumes and that accurately
segment the IAC. We hypothesize that DFs that accurately segment the IAC in
target images will also facilitate accurate atlas-based localization of the
ANFs. As opposed to VoxelMorph, which aims to produce DFs that accurately
register the entire volume, our novel contribution is an entirely
self-supervised training scheme that aims to produce DFs that accurately
segment the target structure. This self-supervision is facilitated using a
regional level set (LS) inspired loss function. We call our method Deep Atlas
Based Segmentation using Level Sets (DABS-LS). Results show that DABS-LS
outperforms VoxelMorph for IAC segmentation. Tests with publicly available
datasets for trachea and kidney segmentation also show significant improvement
in segmentation accuracy, demonstrating the generalizability of the method. | [
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2402.10470 | 2024-02-16T06:22:44Z | Theoretical Understanding of Learning from Adversarial Perturbations | [
"Soichiro Kumano",
"Hiroshi Kera",
"Toshihiko Yamasaki"
] | It is not fully understood why adversarial examples can deceive neural
networks and transfer between different networks. To elucidate this, several
studies have hypothesized that adversarial perturbations, while appearing as
noises, contain class features. This is supported by empirical evidence showing
that networks trained on mislabeled adversarial examples can still generalize
well to correctly labeled test samples. However, a theoretical understanding of
how perturbations include class features and contribute to generalization is
limited. In this study, we provide a theoretical framework for understanding
learning from perturbations using a one-hidden-layer network trained on
mutually orthogonal samples. Our results highlight that various adversarial
perturbations, even perturbations of a few pixels, contain sufficient class
features for generalization. Moreover, we reveal that the decision boundary
when learning from perturbations matches that from standard samples except for
specific regions under mild conditions. The code is available at
https://github.com/s-kumano/learning-from-adversarial-perturbations. | [
"cs.LG",
"cs.CV",
"stat.ML"
] | false |
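For concreteness about the setting above, a minimal FGSM-style example of generating adversarial perturbations is sketched below; this is one common attack, used here only to illustrate the setting and not the paper's theoretical construction.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8 / 255):
    """Perturb x in the direction that increases the classification loss for label y."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Toy model and data; in the learning-from-perturbations setting, such x_adv
# are paired with labels that look mislabeled to a human evaluator.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x_adv = fgsm(model, torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,)))
```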
2402.10553 | 2024-02-16T10:35:01Z | A novel integrated industrial approach with cobots in the age of
industry 4.0 through conversational interaction and computer vision | [
"Andrea Pazienza",
"Nicola Macchiarulo",
"Felice Vitulano",
"Antonio Fiorentini",
"Marco Cammisa",
"Leonardo Rigutini",
"Ernesto Di Iorio",
"Achille Globo",
"Antonio Trevisi"
] | From robots that replace workers to robots that serve as helpful colleagues,
the field of robotic automation is experiencing a new trend that represents a
huge challenge for component manufacturers. The contribution starts from an
innovative vision of an ever closer collaboration between the cobot, able to
perform a specific physical job with precision, the AI world, able to analyze
information and support the decision-making process, and the human, who keeps
the strategic vision of the future. | [
"cs.RO",
"cs.CL",
"cs.CV",
"cs.LG"
] | false |
2402.10580 | 2024-02-16T11:09:16Z | Efficient Multi-task Uncertainties for Joint Semantic Segmentation and
Monocular Depth Estimation | [
"Steven Landgraf",
"Markus Hillemann",
"Theodor Kapler",
"Markus Ulrich"
] | Quantifying the predictive uncertainty has emerged as a possible solution to
common challenges like overconfidence or lack of explainability and robustness
of deep neural networks, albeit one that is often computationally expensive.
Many real-world applications are multi-modal in nature and hence benefit from
multi-task learning. In autonomous driving, for example, the joint solution of
semantic segmentation and monocular depth estimation has proven to be valuable.
In this work, we first combine different uncertainty quantification methods
with joint semantic segmentation and monocular depth estimation and evaluate
how they perform in comparison to each other. Additionally, we reveal the
benefits of multi-task learning with regard to the uncertainty quality compared
to solving both tasks separately. Based on these insights, we introduce
EMUFormer, a novel student-teacher distillation approach for joint semantic
segmentation and monocular depth estimation as well as efficient multi-task
uncertainty quantification. By implicitly leveraging the predictive
uncertainties of the teacher, EMUFormer achieves new state-of-the-art results
on Cityscapes and NYUv2 and additionally estimates high-quality predictive
uncertainties for both tasks that are comparable or superior to a Deep Ensemble
despite being an order of magnitude more efficient. | [
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2402.10609 | 2024-02-16T11:54:34Z | U$^2$MRPD: Unsupervised undersampled MRI reconstruction by prompting a
large latent diffusion model | [
"Ziqi Gao",
"S. Kevin Zhou"
] | Implicit visual knowledge in a large latent diffusion model (LLDM)
pre-trained on natural images is rich and hypothetically universal to natural
and medical images. To test this hypothesis, we introduce a novel framework for
Unsupervised Undersampled MRI Reconstruction by Prompting a pre-trained large
latent Diffusion model (U$^2$MRPD). Existing data-driven, supervised
undersampled MRI reconstruction networks are typically of limited
generalizability and adaptability toward diverse data acquisition scenarios;
yet U$^2$MRPD supports image-specific MRI reconstruction by prompting an LLDM
with an MRSampler tailored for complex-valued MRI images. With any
single-source or diverse-source MRI dataset, U$^2$MRPD's performance is further
boosted by an MRAdapter while keeping the generative image priors intact.
Experiments on multiple datasets show that U$^2$MRPD achieves comparable or
better performance than supervised and MRI diffusion methods on in-domain
datasets while demonstrating the best generalizability on out-of-domain
datasets. To the best of our knowledge, U$^2$MRPD is the {\bf first}
unsupervised method that demonstrates the universal prowess of an LLDM, trained
on magnitude-only natural images, in medical imaging, attaining the best
adaptability for both MRI database-free and database-available scenarios and
generalizability towards out-of-domain data. | [
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2402.10747 | 2024-02-16T15:13:30Z | Fully Differentiable Lagrangian Convolutional Neural Network for
Continuity-Consistent Physics-Informed Precipitation Nowcasting | [
"Peter Pavlík",
"Martin Výboh",
"Anna Bou Ezzeddine",
"Viera Rozinajová"
] | This paper presents a convolutional neural network model for precipitation
nowcasting that combines data-driven learning with physics-informed domain
knowledge. We propose LUPIN, a Lagrangian Double U-Net for Physics-Informed
Nowcasting, that draws from existing extrapolation-based nowcasting methods and
implements the Lagrangian coordinate system transformation of the data in a
fully differentiable and GPU-accelerated manner to allow for real-time
end-to-end training and inference. Based on our evaluation, LUPIN matches and
exceeds the performance of the chosen benchmark, opening the door for other
Lagrangian machine learning models. | [
"cs.LG",
"cs.AI",
"cs.CV",
"I.2.1; J.2"
] | false |
2402.10851 | 2024-02-16T17:44:11Z | HistoSegCap: Capsules for Weakly-Supervised Semantic Segmentation of
Histological Tissue Type in Whole Slide Images | [
"Mobina Mansoori",
"Sajjad Shahabodini",
"Jamshid Abouei",
"Arash Mohammadi",
"Konstantinos N. Plataniotis"
] | Digital pathology involves converting physical tissue slides into
high-resolution Whole Slide Images (WSIs), which pathologists analyze for
disease-affected tissues. However, large histology slides with numerous
microscopic fields pose challenges for visual search. To aid pathologists,
Computer Aided Diagnosis (CAD) systems offer visual assistance in efficiently
examining WSIs and identifying diagnostically relevant regions. This paper
presents a novel histopathological image analysis method employing Weakly
Supervised Semantic Segmentation (WSSS) based on Capsule Networks, the first
such application. The proposed model is evaluated using the Atlas of Digital
Pathology (ADP) dataset and its performance is compared with other
histopathological semantic segmentation methodologies. The findings underscore
the potential of Capsule Networks in enhancing the precision and efficiency of
histopathological image analysis. Experimental results show that the proposed
model outperforms traditional methods in terms of accuracy and the mean
Intersection-over-Union (mIoU) metric. | [
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2402.10884 | 2024-02-16T18:42:08Z | Multi-modal preference alignment remedies regression of visual
instruction tuning on language model | [
"Shengzhi Li",
"Rongyu Lin",
"Shichao Pei"
] | In production, multi-modal large language models (MLLMs) are expected to
support multi-turn queries of interchanging image and text modalities. However,
the current MLLMs trained with visual-question-answering (VQA) datasets could
suffer from degradation, as VQA datasets lack the diversity and complexity of
the original text instruction datasets which the underlying language model had
been trained with. To address this challenging degradation, we first collect a
lightweight (6k entries) VQA preference dataset where answers were annotated by
Gemini for 5 quality metrics in a granular fashion, and investigate standard
Supervised Fine-tuning, rejection sampling, Direct Preference Optimization
(DPO), and SteerLM. Our findings indicate that with DPO we are able to
surpass the instruction-following capabilities of the language model, achieving a
6.73 score on MT-Bench, compared to Vicuna's 6.57 and LLaVA's 5.99 despite
small data scale. This enhancement in textual instruction proficiency
correlates with boosted visual instruction performance (+4.9\% on MM-Vet, +6\%
on LLaVA-Bench), with minimal alignment tax on visual knowledge benchmarks
compared to previous RLHF approaches. In conclusion, we propose a
distillation-based multi-modal alignment model with fine-grained annotations on
a small dataset that reconciles the textual and visual performance of MLLMs,
restoring and boosting language capability after visual instruction tuning. | [
"cs.CL",
"cs.AI",
"cs.CV",
"cs.LG"
] | false |
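A minimal sketch of the standard DPO objective referenced above (not the paper's training code): the loss increases the policy's log-probability margin of the chosen over the rejected answer relative to a frozen reference model.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Inputs are (batch,) summed token log-probabilities of whole answers."""
    policy_margin = policy_chosen_logp - policy_rejected_logp
    ref_margin = ref_chosen_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
```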
2402.11089 | 2024-02-16T21:32:27Z | The Male CEO and the Female Assistant: Probing Gender Biases in
Text-To-Image Models Through Paired Stereotype Test | [
"Yixin Wan",
"Kai-Wei Chang"
] | Recent large-scale Text-To-Image (T2I) models such as DALLE-3 demonstrate
great potential in new applications, but also face unprecedented fairness
challenges. Prior studies revealed gender biases in single-person image
generation, but T2I model applications might require portraying two or more
people simultaneously. Potential biases in this setting remain unexplored,
leading to fairness-related risks in usage. To study these underlying facets of
gender biases in T2I models, we propose a novel Paired Stereotype Test (PST)
bias evaluation framework. PST prompts the model to generate two individuals in
the same image. They are described with two social identities that are
stereotypically associated with the opposite gender. Biases can then be
measured by the level of conformation to gender stereotypes in generated
images. Using PST, we evaluate DALLE-3 from 2 perspectives: biases in gendered
occupation and biases in organizational power. Despite seemingly fair or even
anti-stereotype single-person generations, PST still unveils gendered
occupational and power associations. Moreover, compared to single-person
settings, DALLE-3 generates noticeably more masculine figures under PST for
individuals with male-stereotypical identities. PST is therefore effective in
revealing underlying gender biases in DALLE-3 that single-person settings
cannot capture. Our findings reveal the complicated patterns of gender biases
in modern T2I models, further highlighting the critical fairness challenges in
multimodal generative systems. | [
"cs.CV",
"cs.AI",
"cs.CY"
] | false |
2402.11120 | 2024-02-16T22:48:38Z | DART: A Principled Approach to Adversarially Robust Unsupervised Domain
Adaptation | [
"Yunjuan Wang",
"Hussein Hazimeh",
"Natalia Ponomareva",
"Alexey Kurakin",
"Ibrahim Hammoud",
"Raman Arora"
] | Distribution shifts and adversarial examples are two major challenges for
deploying machine learning models. While these challenges have been studied
individually, their combination is an important topic that remains relatively
under-explored. In this work, we study the problem of adversarial robustness
under a common setting of distribution shift - unsupervised domain adaptation
(UDA). Specifically, given a labeled source domain $D_S$ and an unlabeled
target domain $D_T$ with related but different distributions, the goal is to
obtain an adversarially robust model for $D_T$. The absence of target domain
labels poses a unique challenge, as conventional adversarial robustness
defenses cannot be directly applied to $D_T$. To address this challenge, we
first establish a generalization bound for the adversarial target loss, which
consists of (i) terms related to the loss on the data, and (ii) a measure of
worst-case domain divergence. Motivated by this bound, we develop a novel
unified defense framework called Divergence Aware adveRsarial Training (DART),
which can be used in conjunction with a variety of standard UDA methods; e.g.,
DANN [Ganin and Lempitsky, 2015]. DART is applicable to general threat models,
including the popular $\ell_p$-norm model, and does not require heuristic
regularizers or architectural changes. We also release DomainRobust: a testbed
for evaluating robustness of UDA models to adversarial attacks. DomainRobust
consists of 4 multi-domain benchmark datasets (with 46 source-target pairs) and
7 meta-algorithms with a total of 11 variants. Our large-scale experiments
demonstrate that on average, DART significantly enhances model robustness on
all benchmarks compared to the state of the art, while maintaining competitive
standard accuracy. The relative improvement in robustness from DART reaches up
to 29.2% on the source-target domain pairs considered. | [
"cs.LG",
"cs.CV",
"stat.ML"
] | false |